200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an Apache log analyzer with Pandas
Apache access logs are text files that record the activity of a web site. The analysis of log files provides useful insights for webmasters and site owners. This example illustrates how to use the data analysis functions of Pandas to analyze log files.
For this exercise we will need to import the following libraries
Step1: Apache log files
Log files come in various formats which differ in the amount of information registered in the log. The two most common formats are the 'common' and the 'combined'. For more details about Apache log formats see <a href="https
Step2: And now we can read the log file into a Pandas <a href="https
Step3: We can check how many rows and columns are in the data with
Step4: And take a look at the data with
Step5: In this case the log file is in the common format and for this reason the referer and agent columns are empty.
We may also want to check that the type conversions work as expected
Step6: We can check some basic statistics of the data, for example
The mean transfer size is
Step7: and its standard deviation is
Step8: We can see that in some cases the size was not a valid value
Step9: When doing analysis on files with this kind of problem, be aware that there may be cases like this that introduce errors in the results. For example, if we count the number of size records we obtain
Step10: Much lower than the number of rows in the table.
Data analysis
Grouping
Using a grouping criterion we can obtain statistics about groups of records that share a certain characteristic. In this example, we are going to characterize the visits to the website by day of the week. So, we start by grouping the dataframe by weekday for counting the number of hits. In this case, any column will do for the counting.
Step11: Observe that the groupby operation returns a DataFrameGroupBy object
Step12: The indices attribute gives us a good intuition of how groups are represented
Step13: Each weekday (0,..,6) maps to an array that contains the rows of the dataframe that belong to the group. This is quite important because when computing any metric, we have to use functions that operate over the arrays of the corresponding rows.
For example, counting the number of elements in the array gives us the total hits of the respective day
Step14: Of course, it would be nicer if we gave the days of the week their corresponding names and named the column 'Hits'
Step15: The method describe gives a basic statistical description of the 'Hits' metric
Step16: Sometimes a picture is worth a thousand words
Step17: One may want to characterize several of the metrics at the same time. Let us characterize the behavior of hits and transferred bytes per weekday
Step18: In this case, we want to count the number of hits and sum the number of bytes. The aggregate method allows passing the functions to apply to each column
Step19: A couple of notes
Step20: Now let us plot both metrics
Step21: Post-processing of the dataset
Sometimes, we can obtain additional information by doing some operations over the data. One good example is identifying the resources of the web site. Currently, the dataset contains a request column that gives the method, the resource and the protocol version
Step22: The HTTP protocol dictates that the three fields are separated by spaces and the request cannot contain any additional spaces. Thus a simple split would give us the three fields. Let's grab the resource and add an extra column to the dataframe
Step23: Here we use the apply method, which applies a function to each element of the array. We define a lambda function that takes the request string, makes a split and returns the element at position 1.
Well then, let us find the number of hits and bytes for each resource
Step24: It is commonly the case that we are interested in the top hitters. So, let us sort the information and grab the top ten hitters in descending order
Step25: Observe that the top hits are not responsible for the most transferred bytes. Let us see this from the perspective of transferred bytes
Step26: Now we know what to optimize to improve the bandwidth utilization of the website. How about plotting some of this information | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from dateutil.parser import *
Explanation: Building an Apache log analyzer with Pandas
Apache access logs are text files that record the activity of a web site. The analysis of log files provides useful insights for webmasters and site owners. This example illustrates how to use the data analysis functions of Pandas to analyze log files.
For this exercise we will need to import the following libraries:
End of explanation
def apacheDateParser(x,y):
return parse(x+' '+y, fuzzy=True)
def myIntParser(x):
try:
# Throws a ValueError exception if is not a valid integer
return int(x)
except ValueError:
return np.nan
Explanation: Apache log files
Log files come in various formats which differ in the amount of information registered in the log. The two most common formats are the 'common' and the 'combined'. For more details about Apache log formats see <a href="https://httpd.apache.org/docs/2.2/en/logs.html">Apache's Log Files</a>. Here are a couple of examples:
Common format:<BR>
<PRE>local - - [30/Mar/1995:20:28:05 -0700] "GET 6224.html HTTP/1.0" 200 2644</PRE>
Combined format:<BR>
<PRE>192.168.0.1 - - [02/Jan/2018:07:20:02 -0500] "GET /index.html HTTP/1.1" 200 8813 "-" "Mozilla/5.0 (Linux; Android 7.0; PRA-LX3 Build/HUAWEIPRA-LX3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.91 Mobile Safari/537.36"</PRE>
The first element of a log line is the address or hostname of the client.<BR>
Next come the id and the user, usually not used and registered as a '-'.<BR>
Then, we have the date and time enclosed between '[' and ']'.<BR>
The next element is the request, which comes enclosed in quotes.<BR>
Then we have a couple of numbers, the http status code and the response size. In some cases, they may be registered as '-'.<BR>
In the combined format we have two additional elements, the referer and the user agent, both enclosed in quotes.
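For illustration only (this snippet is not part of the notebook's pipeline), a combined-format line can be broken into the fields just described with Python's shlex module, which keeps the quoted request, referer and agent together as single tokens:<BR>
<PRE>
import shlex

line = '192.168.0.1 - - [02/Jan/2018:07:20:02 -0500] "GET /index.html HTTP/1.1" 200 8813 "-" "Mozilla/5.0"'
tokens = shlex.split(line)
client, ident, user = tokens[0:3]
date, tz = tokens[3].lstrip('['), tokens[4].rstrip(']')
request, status, size = tokens[5], tokens[6], tokens[7]
referer, agent = tokens[8], tokens[9]
</PRE>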
Reading the log file
At first, the log lines may seem difficult to parse. Pandas simplifies the task with the <a href="https://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table">read_csv function</a>, which supports many customization options. We are going to need the following:
<UL>
<li><em>delim_whitespace=True</em>. Use whitespaces instead of commas as field separators.</li>
<li><em>header=None</em>. Log files do not include a header line.</li>
<li><em>parse_dates={ 'dt': [3,4] }</em>. We are going to use columns 3 and 4 to parse the date.</li>
<li><em>date_parser=apacheDateParser</em>. Our custom function to parse dates in the log's format.</li>
<li><em>index_col=0</em>. Use the date-time field as the dataframe index</li>
<li><em>names=['client','id','user','datetime','tz','request','status','size','referer','agent']</em>. Name the columns to make it easier to access and remember their meaning.</li>
<li><em>converters={ 'status':myIntParser, 'size': myIntParser }</em>. We want to handle the status code and the document size as integers, but we need a custom function to handle the cases where the field is '-'.</li>
<li><em>encoding='iso-8859-1'</em>. The encoding used by the log file. Most cases are either 'iso-8859-1' or 'utf-8'.</li>
<li><em>dtype={ 'referer': object, 'agent': object }</em>. Specify the data type of the last columns. This is important in the case of the common format, which does not contain this information.</li>
</UL>
So, let's define our custom functions for parsing integers and dates:
End of explanation
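As a quick sanity check (a hedged aside, not part of the original notebook), the helpers defined above can be exercised directly: '-' becomes NaN, digit strings become integers, and the bracketed date plus timezone columns should combine into a single datetime, mirroring how read_csv will call them below.
print(myIntParser('2644'), myIntParser('-'))
print(apacheDateParser('[30/Mar/1995:20:28:05', '-0700]'))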
data = pd.read_csv('<Path to your access_log file>', encoding='iso-8859-1',
delim_whitespace=True,
header=None,
parse_dates={ 'dt': [3,4] },
date_parser=apacheDateParser,
index_col=0,
names=['client','id','user','datetime','tz','request','status','size','referer','agent'],
converters={ 'status':myIntParser, 'size': myIntParser },
dtype={ 'referer': object, 'agent': object } )
Explanation: And now we can read the log file into a Pandas <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html">Dataframe</a>.
End of explanation
print(data.shape)
Explanation: We can check how many rows and columns are in the data with
End of explanation
data.head()
Explanation: And take a look at the data with
End of explanation
data.dtypes
Explanation: In this case the log file is in the common format and for this reason the referer and agent columns are empty.
We may also want to check that the type conversions work as expected
End of explanation
data['size'].mean()
Explanation: We can check some basic statistics of the data, for example
The mean transfer size is
End of explanation
data['size'].std()
Explanation: and its standard deviation is
End of explanation
data['size'][data['size'].isnull()].head()
Explanation: We can see that in some cases the size was not a valid value
End of explanation
data['size'].count()
Explanation: When doing analysis on files with this kind of problem, be aware that there may be cases like this that introduce errors in the results. For example, if we count the number of size records we obtain
End of explanation
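As a hedged aside, the number of missing sizes can also be obtained directly, and it should account for the difference noted above.
data['size'].isnull().sum()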
grpHitsWDay = data[['id']].groupby(data.index.weekday, sort=False)
Explanation: Much lower than the number of rows in the table.
Data analysis
Grouping
Using a grouping criterion we can obtain statistics about groups of records that share a certain characteristic. In this example, we are going to characterize the visits to the website by day of the week. So, we start by grouping the dataframe by weekday for counting the number of hits. In this case, any column will do for the counting.
End of explanation
grpHitsWDay
Explanation: Observe that the groupby operation returns a DataFrameGroupBy object
End of explanation
grpHitsWDay.indices
Explanation: The indices attribute gives us a good intuition of how groups are represented
End of explanation
grpHitsWDay.count()
Explanation: Each weekday (0,..,6) maps to an array that contains the rows of the dataframe that belong to the group. This is quite important because when computing any metric, we have to use functions that operate over the arrays of the corresponding rows.
For example, counting the number of elements in the array gives us the total hits of the respective day
End of explanation
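As a hedged aside (assuming the log spans at least one Monday), the rows of a single group can be pulled out with get_group, which makes the array-of-rows representation tangible:
grpHitsWDay.get_group(0).head()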
hits = grpHitsWDay.count()
hits.index = [ 'Mon','Tue','Wed','Thu','Fri','Sat','Sun' ]
hits.columns = [ 'Hits' ]
hits
Explanation: Of course, it would be nicer if we gave the days of the week their corresponding names and named the column 'Hits'
End of explanation
hits.describe()
Explanation: The method describe gives a basic statistical description of the 'Hits' metric
End of explanation
hits.plot(kind='bar', figsize=(8,6), colormap='summer', title='Hits per weekday', legend=False)
plt.show()
Explanation: Sometimes a picture is worth a thousand words
End of explanation
grpWDay = data[ ['id','size'] ].groupby(data.index.weekday)
Explanation: One may want to characterize several of the metrics at the same time. Let us characterize the behavior of hits and transferred bytes per weekday
End of explanation
stats = grpWDay.aggregate({ 'id':lambda x: x.count(), 'size':np.sum })
stats
Explanation: In this case, we want to count the number of hits and sum the number of bytes. The aggregate method allows passing the functions to apply to each column
End of explanation
stats = grpWDay.aggregate({ 'id':lambda x: x.count(), 'size':np.sum }).rename(columns={'size':'Bytes', 'id':'Hits'})
stats.index=[ 'Mon','Tue','Wed','Thu','Fri','Sat','Sun' ]
stats
Explanation: A couple of notes:
<ul>
<li>The columns of a dataframe are NumPy arrays, therefore standard NumPy functions (such as np.sum) can be used.</li>
<li>I did not find a function that takes an array as an argument and counts the number of elements, but no problem: I created a lambda expression that takes the array as input and returns the result of invoking its count() method (see the string-based alternative sketched just after this explanation).</li>
</ul>
Of course, everything looks better with nicer names, so let us rename the columns and the indices
End of explanation
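As the alternative mentioned in the notes above (a hedged sketch, not the notebook's original approach), pandas also accepts the built-in aggregation names as strings, which avoids the lambda entirely and should produce the same numbers:
grpWDay.aggregate({ 'id':'count', 'size':'sum' })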
stats.plot(kind='bar', figsize=(8,6), colormap='summer', title='Hits & bytes per weekday', subplots=True)
plt.show()
Explanation: Now let us plot both metrics
End of explanation
data['request'].head(10)
Explanation: Post-processing of the dataset
Sometimes, we can obtain additional information by doing some operations over the data. One good example is identifying the resources of the web site. Currently, the dataset contains a request column that gives the method, the resource and the protocol version
End of explanation
data['resource'] = data['request'].apply(lambda x: x.split()[1])
data['resource'].head(10)
Explanation: The HTTP protocol dictates that the three fields are separated by spaces and the request cannot contain any additional spaces. Thus a simple split would give us the three fields. Let's grab the resource and add an extra column to the dataframe
End of explanation
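A quick illustration of the split described above, using a hypothetical request string rather than data from the log:
sample_request = 'GET /index.html HTTP/1.0'
sample_request.split() # ['GET', '/index.html', 'HTTP/1.0']
sample_request.split()[1] # '/index.html' -> the resource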
grpRsc = data[ ['id','size'] ].groupby(data['resource'])
stats = grpRsc.aggregate({ 'id':lambda x: x.count(), 'size':np.sum }).rename(columns={'size':'XferBytes', 'id':'Hits'})
stats
Explanation: Here we use the apply method, which applies a function to each element of the array. We define a lambda function that takes the request string, makes a split and returns the element at position 1.
Well then, let us find the number of hits and bytes for each resource
End of explanation
sortedh = stats.sort_values(by='Hits', ascending=False)
sortedh.head(10)
Explanation: It is commonly the case that we are interested in the top hitters. So, let us sort the information and grab the top ten hitters in descending order
End of explanation
sortedb = stats.sort_values(by='XferBytes', ascending=False)
sortedb.head(10)
Explanation: Observe that the top hits are not responsible for the most transferred bytes. Let us see this from the perspective of transferred bytes
End of explanation
sortedb.head(10).plot(kind='bar', figsize=(8,5), colormap='summer', title='Xfer & Hits (sorted by Xfer)', subplots=True)
plt.show()
Explanation: Now we know what to optimize to improve the bandwidth utilization of the website. How about plotting some of this information
End of explanation |
201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Queuing Simulation
We present in this notebook a simple bank customer arrival model.
We model the arrivals and departures as a Poisson process, i.e. with exponentially distributed inter-arrival and service times.
Step1: Next we initialize two lists to keep track of the customers who arrive and who depart
Step2: We simulate the arrivals and departures in a bank branch using a while loop | Python Code:
import random
#define and initialize the parameters of the Poisson distributions
lambd_in = 0.5
lambd_out = 0.4
#bank variables
closing_time = 100 #initialize the bank closing time
overtime = 0 #overtime the employees need to be paid for
#queue variables
num_arrivals = 0 #number of customers who have arrived so far
num_departures = 0 #number of people who have been served
n = 0 #length of the queue
max_line_length = 0 #the maximum length of the waiting line
#time variables
t = 0 #set the time of first arrival to 0
time_depart = float('inf') #set the first time of departure to infinity
time_arrive = random.expovariate(lambd_in) #generate the first arrival
Explanation: Queuing Simulation
We present in this notebook a simple bank customer arrival model.
We model the arrivals and departures as a Poisson process, i.e. with exponentially distributed inter-arrival and service times.
End of explanation
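As a hedged aside, the link between the Poisson arrival process and random.expovariate can be checked empirically: the exponential inter-arrival samples should have a mean close to 1/lambd_in (here 1/0.5 = 2.0 time units between arrivals).
samples = [random.expovariate(lambd_in) for _ in range(10000)]
print(sum(samples) / len(samples)) # roughly 2.0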
departures = []
arrivals = []
Explanation: Next we initialize two lists to keep track of the customers who arrive and who depart:
End of explanation
while t < closing_time or n >= 0:
# case 1 - within business hours, a customer arrives before any customer leaves the queue
if time_arrive <= time_depart and time_arrive <= closing_time:
t = time_arrive # move time along to the time of the new arrival
num_arrivals += 1 # increase the number of customers with the additional arrival
n += 1 # we have an additional customer, increase the size of the waiting line by 1
# generate time of next arrival
time_arrive = random.expovariate(lambd_in) + t
#append the new customer to the arrival list
arrivals.append(t)
print("Arrival ", num_arrivals, "at time ", t)
# generate time of departure
if n == 1:
Y = random.expovariate(lambd_out)
time_depart = t + Y
'''
print('Arrivals', arrivals)
print('Departures', departures)
'''
# case 2 - within business hours, a customer departs before the next arrival
elif time_depart < time_arrive and time_depart <= closing_time:
# advance time to the next departure time
t = time_depart
# one more person served -> increase the count of clients who have been served
num_departures += 1
#update the departure list
departures.append(t)
print("Departure ", num_departures, "at time ", t)
# one less person in line -> decrease the size of the waiting line
n -= 1
# if the queue is empty -> set the time of the next departure to infinity
if n == 0:
time_depart = float('inf')
# if the queue isn't empty, generate the next time of departure
else:
Y = random.expovariate(lambd_out)
time_depart = t + Y
'''
print('Arrivals', arrivals)
print('Departures', departures)
'''
# case 3 - next arrival/departure happens after closing time and there are people still in the queue
elif min(time_arrive, time_depart) > closing_time and n > 0:
# advance time to next departure
t = time_depart
#update the departure list
departures.append(t)
#update the number of departures/clients served
num_departures += 1 # one more person served
print("Departure ", num_departures, "at time ", t)
#update the queue
n -= 1 # one less person in the waiting line
# if line isn't empty, generate the time of the next departure
if n > 0:
Y = random.expovariate(lambd_out)
time_depart = t + Y
'''
print('Arrivals', arrivals)
print('Departures', departures)
'''
# case 4 - next arrival/departure happens after closing time and there is nobody left in the queue
elif min(time_arrive, time_depart) > closing_time and n == 0:
# calculate overtime
overtime = max(t - closing_time, 0)
print('Overtime = ', overtime)
'''
print('Arrivals', arrivals)
print('Departures', departures)
'''
break
Explanation: We simulate the arrivals and departures in a bank branch using a while loop
End of explanation |
202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
List of web pages to scrape
Step1: Download the web pages and combine them into a single file
Step2: Let's take a look at the contents of the actual file
Please click this link
Load the pages and parse the JSON
Step3: The most commonly used RDD functions are listed below
map(func) - applies a processing function to every element of the RDD
mapValues(func) - ignores the key and applies a processing function only to the values of the RDD
reduceByKey(func) - merges the values that share the same key
count() - counts the number of elements in the RDD
filter(func) - keeps or drops each element according to a condition
first() - returns the first element of the RDD
<span style="color
Step4: <span style="color | Python Code:
urllist = ['http://chahabi77.pixnet.net/blog/post/436715527',
'http://chahabi77.pixnet.net/blog/post/403682269',
'http://chahabi77.pixnet.net/blog/post/354943724',
'http://chahabi77.pixnet.net/blog/post/386442944',
'http://chahabi77.pixnet.net/blog/post/235296791',
]
Explanation: List of web pages to scrape
End of explanation
import urllib2
import json
f = open('./pixnet.txt',"w")
for u in urllist:
line = {}
response = urllib2.urlopen(u)
html = response.read()
html = html.replace('\r','').replace('\n','')
line['html'] = html
line['url'] =u
line_str = json.dumps(line)
f.write(line_str+"\r\n")
f.close()
Explanation: Download the web pages and combine them into a single file
End of explanation
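As a hedged aside (plain Python, no Spark), each line of pixnet.txt is a standalone JSON object, so a single record can be inspected directly to verify the file contents:
with open('./pixnet.txt') as f:
    sample = json.loads(f.readline())
sample['url']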
import json
pixnet = sc.textFile('./pixnet.txt',use_unicode=False).map(
lambda x : json.loads(x)).map(lambda x : (x['url'],x['html']))
print "URL:", pixnet.first()[0]
print "資料筆數: ", pixnet.count()
print "HTML 前 200 字元:", pixnet.first()[1][:200]
Explanation: Let's take a look at the contents of the actual file
Please click this link
Load the pages and parse the JSON
End of explanation
count_nummber = pixnet.filter(lambda x : u"好吃" in x[1] ).count()
if count_nummber == 4 : print "你答對了"
Explanation: The most commonly used RDD functions are listed below
map(func) - applies a processing function to every element of the RDD
mapValues(func) - ignores the key and applies a processing function only to the values of the RDD
reduceByKey(func) - merges the values that share the same key
count() - counts the number of elements in the RDD
filter(func) - keeps or drops each element according to a condition
first() - returns the first element of the RDD
<span style="color: blue">Please fill in the ?? to count the number of pages that contain "好吃" (delicious)</span>
End of explanation
def word_count(text):
return text.count(u"好吃")
print "好吃出現了",word_count(u"老師好吃好吃好吃好吃!!!!"),"次"
pixnet.mapValues(word_count).collect()
total_count = pixnet.mapValues(word_count).map(lambda x : x[1]).reduce(lambda x,y: x+y)
if total_count == 23 : print "你答對了"
else : print "答錯了!你的答案是 %d, 正確答案是59" % (total_count)
Explanation: <span style="color: blue">Please modify the code below to count the total number of occurrences of "好吃" (delicious); note that this is the total count, not the number of pages</span>
<span style="color:red">Hint: modify the word_count function</span>
End of explanation |
203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
The Basics
We begin by importing selfies.
Step1: First, let's try translating between SMILES and SELFIES - as an example, we will use benzaldehyde. To translate from SMILES to SELFIES, use the selfies.encoder function, and to translate from SMILES back to SELFIES, use the selfies.decoder function.
Step2: Note that original_smiles and decoded_smiles are different strings, but they both represent benzaldehyde. Thus, when comparing the two SMILES strings, string equality should not be used. Instead, use RDKit to check whether the SMILES strings represent the same molecule.
Step3: Customizing SELFIES
The SELFIES grammar is derived dynamically from a set of semantic constraints, which assign bonding capacities to various atoms. Let's customize the semantic constraints that selfies operates on. By default, the following constraints are used
Step4: These constraints map atoms (the keys) to their bonding capacities (the values). The special ? key maps to the bonding capacity for all atoms that are not explicitly listed in the constraints. For example, S and Li are constrained to a maximum of 6 and 8 bonds, respectively. Every SELFIES string can be decoded into a molecule that obeys the current constraints.
Step5: But suppose that we instead wanted to constrain S and Li to a maximum of 2 and 1 bond(s), respectively. To do so, we create a new set of constraints, and tell selfies to operate on them using selfies.set_semantic_constraints.
Step6: To check that the update was successful, we can use selfies.get_semantic_constraints, which returns the semantic constraints that selfies is currently operating on.
Step7: Our previous SELFIES string is now decoded like so. Notice that the specified bonding capacities are met, with every S and Li making only 2 and 1 bonds, respectively.
Step8: Finally, to revert back to the default constraints, simply call
Step9: Please refer to the API reference for more details and more preset constraints.
SELFIES in Practice
Let's use a simple example to show how selfies can be used in practice, as well as highlight some convenient utility functions from the library. We start with a toy dataset of SMILES strings. As before, we can use selfies.encoder to convert the dataset into SELFIES form.
Step10: The function selfies.len_selfies computes the symbol length of a SELFIES string. We can use it to find the maximum symbol length of the SELFIES strings in the dataset.
Step11: To extract the SELFIES symbols that form the dataset, use selfies.get_alphabet_from_selfies. Here, we add [nop] to the alphabet, which is a special padding character that selfies recognizes.
Step12: Then, create a mapping between the alphabet SELFIES symbols and indices.
Step13: SELFIES provides some convenience methods to convert between SELFIES strings and label (integer) and one-hot encodings. Using the first entry of the dataset (dimethyl ether) as an example
Step14: If different encoding strategies are desired, selfies.split_selfies can be used to tokenize a SELFIES string into its individual symbols. | Python Code:
import selfies as sf
Explanation: Tutorial
The Basics
We begin by importing selfies.
End of explanation
original_smiles = "O=Cc1ccccc1" # benzaldehyde
try:
encoded_selfies = sf.encoder(original_smiles) # SMILES -> SELFIES
decoded_smiles = sf.decoder(encoded_selfies) # SELFIES -> SMILES
except sf.EncoderError as err:
pass # sf.encoder error...
except sf.DecoderError as err:
pass # sf.decoder error...
encoded_selfies
decoded_smiles
Explanation: First, let's try translating between SMILES and SELFIES - as an example, we will use benzaldehyde. To translate from SMILES to SELFIES, use the selfies.encoder function, and to translate from SMILES back to SELFIES, use the selfies.decoder function.
End of explanation
from rdkit import Chem
Chem.CanonSmiles(original_smiles) == Chem.CanonSmiles(decoded_smiles)
Explanation: Note that original_smiles and decoded_smiles are different strings, but they both represent benzaldehyde. Thus, when comparing the two SMILES strings, string equality should not be used. Instead, use RDKit to check whether the SMILES strings represent the same molecule.
End of explanation
sf.get_preset_constraints("default")
Explanation: Customizing SELFIES
The SELFIES grammar is derived dynamically from a set of semantic constraints, which assign bonding capacities to various atoms. Let's customize the semantic constraints that selfies operates on. By default, the following constraints are used:
End of explanation
sf.decoder("[Li][=C][C][S][=C][C][#S]")
Explanation: These constraints map atoms (the keys) to their bonding capacities (the values). The special ? key maps to the bonding capacity for all atoms that are not explicitly listed in the constraints. For example, S and Li are constrained to a maximum of 6 and 8 bonds, respectively. Every SELFIES string can be decoded into a molecule that obeys the current constraints.
End of explanation
new_constraints = sf.get_preset_constraints("default")
new_constraints['Li'] = 1
new_constraints['S'] = 2
sf.set_semantic_constraints(new_constraints)
Explanation: But suppose that we instead wanted to constrain S and Li to a maximum of 2 and 1 bond(s), respectively. To do so, we create a new set of constraints, and tell selfies to operate on them using selfies.set_semantic_constraints.
End of explanation
sf.get_semantic_constraints()
Explanation: To check that the update was successful, we can use selfies.get_semantic_constraints, which returns the semantic constraints that selfies is currently operating on.
End of explanation
sf.decoder("[Li][=C][C][S][=C][C][#S]")
Explanation: Our previous SELFIES string is now decoded like so. Notice that the specified bonding capacities are met, with every S and Li making only 2 and 1 bonds, respectively.
End of explanation
sf.set_semantic_constraints()
Explanation: Finally, to revert back to the default constraints, simply call:
End of explanation
smiles_dataset = ["COC", "FCF", "O=O", "O=Cc1ccccc1"]
selfies_dataset = list(map(sf.encoder, smiles_dataset))
selfies_dataset
Explanation: Please refer to the API reference for more details and more preset constraints.
SELFIES in Practice
Let's use a simple example to show how selfies can be used in practice, as well as highlight some convenient utility functions from the library. We start with a toy dataset of SMILES strings. As before, we can use selfies.encoder to convert the dataset into SELFIES form.
End of explanation
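As a hedged aside, the round trip can also be validated for the whole toy dataset with RDKit, mirroring the benzaldehyde check above:
all(Chem.CanonSmiles(s) == Chem.CanonSmiles(sf.decoder(sf.encoder(s))) for s in smiles_dataset)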
max_len = max(sf.len_selfies(s) for s in selfies_dataset)
max_len
Explanation: The function selfies.len_selfies computes the symbol length of a SELFIES string. We can use it to find the maximum symbol length of the SELFIES strings in the dataset.
End of explanation
alphabet = sf.get_alphabet_from_selfies(selfies_dataset)
alphabet.add("[nop]")
alphabet = list(sorted(alphabet))
alphabet
Explanation: To extract the SELFIES symbols that form the dataset, use selfies.get_alphabet_from_selfies. Here, we add [nop] to the alphabet, which is a special padding character that selfies recognizes.
End of explanation
vocab_stoi = {symbol: idx for idx, symbol in enumerate(alphabet)}
vocab_itos = {idx: symbol for symbol, idx in vocab_stoi.items()}
vocab_stoi
Explanation: Then, create a mapping between the alphabet SELFIES symbols and indices.
End of explanation
dimethyl_ether = selfies_dataset[0]
label, one_hot = sf.selfies_to_encoding(dimethyl_ether, vocab_stoi, pad_to_len=max_len)
label
one_hot
dimethyl_ether = sf.encoding_to_selfies(one_hot, vocab_itos, enc_type="one_hot")
dimethyl_ether
sf.decoder(dimethyl_ether) # sf.decoder ignores [nop]
Explanation: SELFIES provides some convenience methods to convert between SELFIES strings and label (integer) and one-hot encodings. Using the first entry of the dataset (dimethyl ether) as an example:
End of explanation
list(sf.split_selfies("[C][O][C]"))
Explanation: If different encoding strategies are desired, selfies.split_selfies can be used to tokenize a SELFIES string into its individual symbols.
End of explanation |
204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
Google Cloud SDK is already installed in Google Cloud Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your GCP account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
Note
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex SDK
Import the Vertex SDK into our Python environment.
Step11: Vertex AI constants
Setup up the following constants for Vertex AI
Step12: Clients
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving. Note
Step13: Prepare a trainer script
Package assembly
Step14: Task.py contents
Step15: Store training script on your Cloud Storage bucket
Step16: Train a model
projects.locations.customJobs.create
Request
Step17: Example output
Step18: Response
Step19: Example output
Step20: projects.locations.customJobs.get
Call
Step21: Response
Step22: Example output
Step23: Deploy the model
Load the saved model
Step24: Serving function for image data
Step25: Get the serving function signature
Step26: Example output
Step27: Example output
Step28: Response
Step29: Example output
Step30: Make batch predictions
Make the batch input file
Let's now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be JSONL.
Step31: Example output
Step32: Example output
Step33: Response
Step34: Example output
Step35: projects.locations.batchPredictionJobs.get
Call
Step36: Response
Step38: Example output
Step39: Example output
Step40: Example output
Step41: Response
Step42: Example output
Step43: projects.locations.endpoints.deployModel
Request
Step44: Example output
Step45: Response
Step46: Example output
Step47: projects.locations.endpoints.predict
Prepare file for online prediction
Request
Step48: Example output
Step49: Response
Step50: Example output
Step51: Response
Step52: Example output | Python Code:
! pip3 install -U google-cloud-aiplatform --user
Explanation: Vertex SDK: Train & deploy a TensorFlow model with hosted runtimes (aka pre-built containers)
Installation
Install the latest (preview) version of Vertex SDK.
End of explanation
! pip3 install google-cloud-storage
Explanation: Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
Google Cloud SDK is already installed in Google Cloud Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex SDK
Import the Vertex SDK into our Python environment.
End of explanation
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex AI constants
Setup up the following constants for Vertex AI:
API_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
API_PREDICT_ENDPOINT: The Vertex AI API service endpoint for prediction.
PARENT: The Vertex AI location root path for dataset, model and endpoint resources.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
Explanation: Clients
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving. Note: Prediction has a different service endpoint.
End of explanation
! rm -rf cifar
! mkdir cifar
! touch cifar/README.md
setup_cfg = "[egg_info]\n\
tag_build =\n\
tag_date = 0"
! echo "$setup_cfg" > cifar/setup.cfg
setup_py = "import setuptools\n\
# Requires TensorFlow Datasets\n\
setuptools.setup(\n\
install_requires=[\n\
'tensorflow_datasets==1.3.0',\n\
],\n\
packages=setuptools.find_packages())"
! echo "$setup_py" > cifar/setup.py
pkg_info = "Metadata-Version: 1.0\n\
Name: Custom Training CIFAR-10\n\
Version: 0.0.0\n\
Summary: Demonstration training script\n\
Home-page: www.google.com\n\
Author: Google\n\
Author-email: aferlitsch@google.com\n\
License: Public\n\
Description: Demo\n\
Platform: Vertex AI"
! echo "$pkg_info" > cifar/PKG-INFO
! mkdir cifar/trainer
! touch cifar/trainer/__init__.py
Explanation: Prepare a trainer script
Package assembly
End of explanation
%%writefile cifar/trainer/task.py
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default='/tmp/saved_model', type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
NUM_WORKERS = strategy.num_replicas_in_sync
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
Explanation: Task.py contents
End of explanation
! rm -f cifar.tar cifar.tar.gz
! tar cvf cifar.tar cifar
! gzip cifar.tar
! gsutil cp cifar.tar.gz gs://$BUCKET_NAME/trainer_cifar.tar.gz
Explanation: Store training script on your Cloud Storage bucket
End of explanation
JOB_NAME = "custom_job_TF_" + TIMESTAMP
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest"
TRAIN_NGPU = 1
TRAIN_GPU = aip.AcceleratorType.NVIDIA_TESLA_K80
worker_pool_specs = [
{
"replica_count": 1,
"machine_spec": {
"machine_type": "n1-standard-4",
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
},
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": ["gs://" + BUCKET_NAME + "/trainer_cifar.tar.gz"],
"python_module": "trainer.task",
"args": [
"--model-dir=" + "gs://{}/{}".format(BUCKET_NAME, JOB_NAME),
"--epochs=" + str(20),
"--steps=" + str(100),
"--distribute=" + "single",
],
},
}
]
training_job = {
"display_name": JOB_NAME,
"job_spec": {"worker_pool_specs": worker_pool_specs},
}
print(
MessageToJson(
aip.CreateCustomJobRequest(parent=PARENT, custom_job=training_job).__dict__[
"_pb"
]
)
)
Explanation: Train a model
projects.locations.customJobs.create
Request
End of explanation
request = clients["job"].create_custom_job(parent=PARENT, custom_job=training_job)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"customJob": {
"displayName": "custom_job_TF_20210227173057",
"jobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4",
"acceleratorType": "NVIDIA_TESLA_K80",
"acceleratorCount": 1
},
"replicaCount": "1",
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210227173057/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057",
"--epochs=20",
"--steps=100",
"--distribute=single"
]
}
}
]
}
}
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the custom training job
custom_training_id = request.name
# The short numeric ID for the custom training job
custom_training_short_id = custom_training_id.split("/")[-1]
print(custom_training_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/customJobs/2970106362064797696",
"displayName": "custom_job_TF_20210227173057",
"jobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4",
"acceleratorType": "NVIDIA_TESLA_K80",
"acceleratorCount": 1
},
"replicaCount": "1",
"diskSpec": {
"bootDiskType": "pd-ssd",
"bootDiskSizeGb": 100
},
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210227173057/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057",
"--epochs=20",
"--steps=100",
"--distribute=single"
]
}
}
]
},
"state": "JOB_STATE_PENDING",
"createTime": "2021-02-27T17:31:04.494716Z",
"updateTime": "2021-02-27T17:31:04.494716Z"
}
End of explanation
request = clients["job"].get_custom_job(name=custom_training_id)
Explanation: projects.locations.customJobs.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
while True:
response = clients["job"].get_custom_job(name=custom_training_id)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(20)
# model artifact output directory on Google Cloud Storage
model_artifact_dir = (
response.job_spec.worker_pool_specs[0].python_package_spec.args[0].split("=")[-1]
)
print("artifact location " + model_artifact_dir)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/customJobs/2970106362064797696",
"displayName": "custom_job_TF_20210227173057",
"jobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4",
"acceleratorType": "NVIDIA_TESLA_K80",
"acceleratorCount": 1
},
"replicaCount": "1",
"diskSpec": {
"bootDiskType": "pd-ssd",
"bootDiskSizeGb": 100
},
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210227173057/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057",
"--epochs=20",
"--steps=100",
"--distribute=single"
]
}
}
]
},
"state": "JOB_STATE_PENDING",
"createTime": "2021-02-27T17:31:04.494716Z",
"updateTime": "2021-02-27T17:31:04.494716Z"
}
End of explanation
import tensorflow as tf
model = tf.keras.models.load_model(model_artifact_dir)
Explanation: Deploy the model
Load the saved model
End of explanation
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
tf.saved_model.save(
model,
model_artifact_dir,
signatures={
"serving_default": serving_fn,
},
)
Explanation: Serving function for image data
End of explanation
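As a hedged local check (not part of the original tutorial), you can encode a random image to JPEG bytes and confirm that the preprocessing helper resizes it to the 32x32x3 shape the model expects:
sample_image = tf.cast(tf.random.uniform([64, 64, 3], maxval=255), tf.uint8)
jpeg_bytes = tf.io.encode_jpeg(sample_image)
print(_preprocess(jpeg_bytes).shape) # expected: (32, 32, 3)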
loaded = tf.saved_model.load(model_artifact_dir)
input_name = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", input_name)
Explanation: Get the serving function signature
End of explanation
model = {
"display_name": "custom_job_TF" + TIMESTAMP,
"metadata_schema_uri": "",
"artifact_uri": model_artifact_dir,
"container_spec": {
"image_uri": "gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest"
},
}
print(MessageToJson(aip.UploadModelRequest(parent=PARENT, model=model).__dict__["_pb"]))
Explanation: Example output:
Serving function input: bytes_inputs
projects.locations.models.upload
Request
End of explanation
request = clients["model"].upload_model(parent=PARENT, model=model)
Explanation: Example output:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "custom_job_TF20210227173057",
"containerSpec": {
"imageUri": "gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest"
},
"artifactUri": "gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057"
}
}
```
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the model
model_id = result.model
print(model_id)
Explanation: Example output:
{
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264"
}
End of explanation
import base64
import json
import cv2
import numpy as np
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image_1, test_label_1 = x_test[0], y_test[0]
test_image_2, test_label_2 = x_test[1], y_test[1]
cv2.imwrite("tmp1.jpg", (test_image_1).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2).astype(np.uint8))
gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file("tmp1.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
f.write(json.dumps({input_name: {"b64": b64str}}) + "\n")
bytes = tf.io.read_file("tmp2.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
f.write(json.dumps({input_name: {"b64": b64str}}) + "\n")
! gsutil cat $gcs_input_uri
Explanation: Make batch predictions
Make the batch input file
Let's now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be JSONL.
End of explanation
batch_prediction_job = aip.BatchPredictionJob(
display_name="custom_job_TF" + TIMESTAMP,
model=model_id,
input_config={
"instances_format": "jsonl",
"gcs_source": {"uris": [gcs_input_uri]},
},
model_parameters=ParseDict(
{"confidenceThreshold": 0.5, "maxPredictions": 2}, Value()
),
output_config={
"predictions_format": "jsonl",
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
},
},
dedicated_resources={
"machine_spec": {"machine_type": "n1-standard-2", "accelerator_type": 0},
"starting_replica_count": 1,
"max_replica_count": 1,
},
)
print(
MessageToJson(
aip.CreateBatchPredictionJobRequest(
parent=PARENT, batch_prediction_job=batch_prediction_job
).__dict__["_pb"]
)
)
Explanation: Example output:
{"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z"}}
{"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJCnB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z"}}
projects.locations.batchPredictionJobs.create
Request
End of explanation
request = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"batchPredictionJob": {
"displayName": "custom_job_TF_TF20210227173057",
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl"
]
}
},
"modelParameters": {
"maxPredictions": 10000.0,
"confidenceThreshold": 0.5
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210227173057/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
}
}
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
# The fully qualified ID for the batch job
batch_job_id = request.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248",
"displayName": "custom_job_TF_TF20210227173057",
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl"
]
}
},
"modelParameters": {
"maxPredictions": 10000.0,
"confidenceThreshold": 0.5
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210227173057/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"manualBatchTuningParameters": {},
"state": "JOB_STATE_PENDING",
"createTime": "2021-02-27T18:00:30.887438Z",
"updateTime": "2021-02-27T18:00:30.887438Z"
}
End of explanation
request = clients["job"].get_batch_prediction_job(name=batch_job_id)
Explanation: projects.locations.batchPredictionJobs.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
response = clients["job"].get_batch_prediction_job(name=batch_job_id)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", response.state)
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
folder = get_latest_predictions(
response.output_config.gcs_destination.output_uri_prefix
)
! gsutil ls $folder/prediction*
! gsutil cat $folder/prediction*
break
time.sleep(60)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248",
"displayName": "custom_job_TF_TF20210227173057",
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl"
]
}
},
"modelParameters": {
"confidenceThreshold": 0.5,
"maxPredictions": 10000.0
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210227173057/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"manualBatchTuningParameters": {},
"state": "JOB_STATE_RUNNING",
"createTime": "2021-02-27T18:00:30.887438Z",
"startTime": "2021-02-27T18:00:30.938444Z",
"updateTime": "2021-02-27T18:00:30.938444Z"
}
End of explanation
endpoint = {"display_name": "custom_job_TF" + TIMESTAMP}
print(
MessageToJson(
aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"]
)
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/prediction.errors_stats-00000-of-00001
gs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/prediction.results-00000-of-00001
{"instance": {"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z"}}, "prediction": [0.0407731421, 0.125140116, 0.118551917, 0.100501947, 0.128865793, 0.089787662, 0.157575116, 0.121281914, 0.0312845968, 0.0862377882]}
{"instance": {"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJCnB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z"}}, "prediction": [0.0406896845, 0.125281364, 0.118567884, 0.100639313, 0.12864624, 0.0898737088, 0.157521054, 0.121037535, 0.0313298739, 0.0864133239]}
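The records above pair each instance with a ten-way score vector. As a small aside (not part of the original notebook), a minimal sketch of how the results shards could be summarized once copied locally, for example with `gsutil cp $folder/prediction.results-* .`; the file name used here is an assumption:
```python
import json

import numpy as np

# Hypothetical helper: read a downloaded prediction.results shard and report the
# top-scoring class index for each instance.
def summarize_batch_results(path="prediction.results-00000-of-00001"):
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            scores = record["prediction"]
            print("predicted class:", int(np.argmax(scores)), "score:", max(scores))
```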
Make online predictions
projects.locations.endpoints.create
Request
End of explanation
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"endpoint": {
"displayName": "custom_job_TF_TF20210227173057"
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400"
}
End of explanation
deployed_model = {
"model": model_id,
"display_name": "custom_job_TF" + TIMESTAMP,
"dedicated_resources": {
"min_replica_count": 1,
"machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0},
},
}
print(
MessageToJson(
aip.DeployModelRequest(
endpoint=endpoint_id,
deployed_model=deployed_model,
traffic_split={"0": 100},
).__dict__["_pb"]
)
)
Explanation: projects.locations.endpoints.deployModel
Request
End of explanation
request = clients["endpoint"].deploy_model(
endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100}
)
Explanation: Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400",
"deployedModel": {
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"displayName": "custom_job_TF_TF20210227173057",
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-4"
},
"minReplicaCount": 1
}
},
"trafficSplit": {
"0": 100
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id)
Explanation: Example output:
{
"deployedModel": {
"id": "2064302294823862272"
}
}
End of explanation
import base64
import cv2
import numpy as np
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image, test_label = x_test[0], y_test[0]
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
image_bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(image_bytes.numpy()).decode("utf-8")
instances_list = [{"bytes_inputs": {"b64": b64str}}]
prediction_request = aip.PredictRequest(endpoint=endpoint_id)
prediction_request.instances.append(instances_list)
print(MessageToJson(prediction_request.__dict__["_pb"]))
Explanation: projects.locations.endpoints.predict
Prepare file for online prediction
Request
End of explanation
request = clients["prediction"].predict(endpoint=endpoint_id, instances=instances_list)
Explanation: Example output:
```
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400",
"instances": [
[
{
"bytes_inputs": {
"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD6E1zw/qemaZY669mkdtqsTPZTpMH85Y3KMcKeOR36444NZGj2/ibWPHaaPeHSLXRbq3jSw1O7u3V3u9zb0ZAh+QIFO4EkliCBjnwv9lfxtrviTxBbW974le/0nQ/h5ohms7m4b92bhVlkEfPIDuwJ6gyADgCuWh1fxP8As6/tGad8H5PiRrHjW6tNd1O/iXUr5Z7mx0uSZlinHODiRQCqrgGTGPmwPyqfClGlnM6Em3TSi/N3Wtnto015H6y+MK08kp14QSqScle6tFxel0+6aZ9d6/rvhXwH4407wWtq+uSXth9pa5jcwKUBIbyxzkL0Ock8nHQV2x0NtN0Gw8a6PDOunXc3liO5GZIGxwG6YBxx1x0zkV4L8Xfij4k8X/Gr4V+HdJtDpdgui3GoajJBAXlkuGvNoUEDcD5MYyuN3zEnpX0B4Q+Iunafdap8OPFCG/sL+PzLkGNgbQB1O7Jxh1JOCOvHXNfUYrh/LPqMo0oKDgvdl10117nzGD4izR5hGdWcp8zs4+umisflx8DNXi/Z/wDHviPTfiP4g+x2WieFtV03U5r9miLw2ilonTIySWijZCB6Yr2X4R/tQT/tC/s56f8AGn4C/AvxTrXiq7jksW1G78NxRlNiRxIrzO5EwiVHAePAfeoO1lIrqv2pf2Xz+1t+z3feC9E1GLSvE2paQtraa1cISXiEqu9tKVydrbMZ5Kkg8jIr234a/Bq7+EngjQPAng3wzB/ZOl6ZFa2tpp/yeWiqFB2Hq2ASeuTz15r9ixHBa+vSp1JXpxXuy6vyfpbXuz8jocUyWCVSirTb1j09V95e+E3hnwXr8dn8QPjLaSWZBguP+EcudKSW6gnSMfLHOrcQh2djCSAxY5BxkzfEDx1H4n8ZyvpEC2WnMAwighMe8hvl3gZyQCB15K5xWNq3iKbVNVk8MW91NZzxLllkt9jL2z0I/DrXCeG47T4seNL3wN4c1nULKPTY2GoX8YYNcSkfKisxwis2ASMnk9AK7f8AiHuQ47CulWlKzfM7S5W+vRfgZQ47zvA4qNako3irK8eZLpfVn//Z"
}
}
]
]
}
```
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
request = clients["endpoint"].undeploy_model(
endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}
)
Explanation: Example output:
{
"predictions": [
[
0.0406113081,
0.125313938,
0.118626907,
0.100714684,
0.128500372,
0.0899592042,
0.157601,
0.121072263,
0.0312432405,
0.0863570943
]
],
"deployedModelId": "2064302294823862272"
}
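For reference, a short sketch (not from the original notebook) of how the ten scores in the response above can be mapped to CIFAR-10 class names; the scores are copied from the example output:
```python
import numpy as np

# Standard CIFAR-10 label ordering; the highest score below is at index 6.
cifar10_labels = ["airplane", "automobile", "bird", "cat", "deer",
                  "dog", "frog", "horse", "ship", "truck"]
scores = [0.0406113081, 0.125313938, 0.118626907, 0.100714684, 0.128500372,
          0.0899592042, 0.157601, 0.121072263, 0.0312432405, 0.0863570943]
print(cifar10_labels[int(np.argmax(scores))])  # -> "frog" for this example
```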
projects.locations.endpoints.undeployModel
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
delete_model = True
delete_endpoint = True
delete_custom_job = True
delete_batchjob = True
delete_bucket = True
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model:
clients["model"].delete_model(name=model_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint:
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the custom training using the Vertex AI fully qualified identifier for the custom training
try:
if delete_custom_job:
clients["job"].delete_custom_job(name=custom_training_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
if delete_batchjob:
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
{}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation |
205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top
Step1: Dataset Description
| Variable Name | Definition |
|------------- |--------------------------------------------------------------------- |
| PassengerId | A unique ID to each Passenger; 1-891 |
| Survived | A boolean variable; 1 - Survived, 0 - Dead |
| Pclass | Ticket Class; 1 - 1st, 2 - 2nd, 3 - 3rd class |
| Name | Passenger Name |
| Sex | Sex of Passenger |
| Age | Age in Years |
| SibSp | Number of Siblings / Spouses Aboard |
| Parch | Number of parents / children aboard the titanic |
| Ticket | Ticket number |
| Fare | Passenger Fare |
| Cabin | Cabin number |
| Embarked | Port of Embarkation; C - Cherbourg, Q - Queenstown, S - Southampton|
<br/>
Some Notes Regarding Dataset
Pclass
Step2: Dataset summary above shows that there are 891 entries.
However, from above we can also see that we have missing values in the Age, Cabin and Embarked columns.
Missing values of Cabin and Embarked will not be fixed - because no questions are based on these factors.
Missing values of Age will be fixed now - because they are involved in various questions and analysis below.
Fix missing ages
To review the data by distributions, and to tackle various questions, we first need to deal with this issue of missing ages.
If we assume that the missing ages are distributed similarly to the values that are present, then we can substitute values that represent the existing distribution.
For this we can replace the missing values with the mean.
To have the best representative values populated, we will take the mean based on Sex and Pclass.
In other words, the mean of ages for each Sex within each Pclass; when replacing a missing age, these two factors will be taken into consideration to use the related mean of ages.
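A minimal sketch of this idea (the notebook's own implementation, using an explicit replace_nan_age helper, appears in the code further below):
```python
# Fill each missing Age with the mean Age of the passenger's (Sex, Pclass) group.
titanic_df['Age'] = (
    titanic_df.groupby(['Sex', 'Pclass'])['Age']
    .transform(lambda ages: ages.fillna(ages.mean()))
)
```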
Step3: From the above we can see that the missing ages have been filled.
Step4: From the above we notice
* Oldest passenger was 80 years old
* Youngest passenger was about 5 months old
* Average age of passengers was 29.32 - but note this also has missing ages
* Mean survival is 0.3838
* Max fare charged was $512.33
* Maximum number of Siblings / Spouses were 8
* Maximum number of Parent / Child were 6
Questions in mind
Did passenger class make any difference to survival?
Which gender had more survival?
Person travelling with others had more survival possibility?
Which age group had better chance of survival?
What was male and female survival per class and by age?
Question 1 - Did passenger class make any difference to survival?
Step5: Conclusion
As can be seen from the visualization and also from the dataframe table above, 1st class passengers had the highest rate of survival, then 2nd class passengers, and the lowest survival rate was for 3rd class passengers. A large number of passengers were travelling in 3rd class (491), but only 24.24% survived.
Question 2 - Which gender had more survival?
Step6: Now lets visualize it
Step7: Conclusion
From the visualization and the percentage of survival from the dataframe printout above, we can see that females had a very high rate of survival. The female survival rate was 74.3%, and the male survival rate was 18.9%, so the female survival rate was about 4 times that of males.
It can be concluded that females were given preference in rescue operations, and males must have sacrificed themselves to let the females survive.
Question 3 - Person travelling with others had more survival possibility?
Let's first review the distribution of those who were alone, and those who were in company.
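A quick sketch of the split used here (the notebook derives the same flag in the code further below): a passenger counts as alone when SibSp + Parch equals 0.
```python
travel_group = (titanic_df['SibSp'] + titanic_df['Parch'] == 0).map(
    {True: 'Alone', False: 'Not Alone'})
print(travel_group.value_counts())
```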
Step8: From the above distribution we can see that
* Those in age range of 0-10, that is kids, were not alone - which makes sense
* There however is one kid age 5 who was alone
* There was an 80 year old person also who was alone
* 537 passengers were alone, whereas 354 were in company
* Except for age group 0-10, for all other age groups, those travelling alone outnumbered those travelling in company
Now lets review these by their survival
Step9: Now lets visualize
Step10: Conclusion
The percentages and visualizations above clearly indicate that passengers who had company had a higher survival rate.
Question 4 - Which age group had a better chance of survival?
First lets review gender age distribution
Step11: From above distribution, we can see that
Step12: From the above visualization and percentages we can see that most survivors were from 20-29 age group.
But interestingly survival percentage of 0-9 age group is best - at 61.29%.
Also, above we have seen that females had a better survival rate - so these survival rates must be a mix of male and female survival rates - and hence, to have a better view, the gender aspect should also be taken into consideration.
Step13: Conclusion
From the proportions above and the visualization, taking into consideration the gender and age group, it is clearly visible that females and children were given preference in rescue operations by the other male passengers. In the 0-9 age group, both male and female children had a very high rate of survival.
Question 5 - What was male and female survival per class and by age?
Male and Female per Pclass
Lets review the males and females, per the passenger classes
Step14: Above we can see that, compared to first and second class, there was a large number of passengers in third class.
In particular, males were present in large numbers ... by the look of the swarm, they were concentrated in the 18 to 32 age range.
To understand the age distribution of males and females in the different classes, the better plot is a box plot.
Step15: From the above we can make out that the mean age of males and females in 3rd class was less than that of males and females in 2nd and 1st class.
The highest mean age of males was in 1st class.
But this plot only gives an idea of the distribution of ages of males and females per class.
Now let's try to understand the survival of males and females, per class
Male and Female Survival per Pclass, and by Age
Step16: The above three scatter plots give a view of male and female age and survival in each of the classes.
But for better clarity and understanding, we can separate the scatter plots for male and female, per class.
This is what we will do next.
Step17: Conclusion
From the scatter plots just above we have a lot of clarity about the male and female age spread, and survival.
We can notice the following
* Females in first and second class were mostly all saved/survived.
* In first and second class, male and female kids (age group 0-10) almost all survived
* In third class the survival rate of females was also higher than that of males - but female survival was lower compared to the female survival in 1st and 2nd class
We can confirm the above observation by following barplot - showing the rate of survival by class, by sex | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style(style="darkgrid")
# plt.rcParams['figure.figsize'] = [12.0, 8.0] # make plots size, double of the notebook normal
plt.rcParams['figure.figsize'] = [9.0, 6.0] # make plots size, double of the notebook normal
from IPython.display import display, HTML # to use display() to always have well formatted html table output
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Dataset-Description" data-toc-modified-id="Dataset-Description-0.1"><span class="toc-item-num">0.1 </span>Dataset Description</a></span><ul class="toc-item"><li><span><a href="#Some-Notes-Regarding-Dataset" data-toc-modified-id="Some-Notes-Regarding-Dataset-0.1.1"><span class="toc-item-num">0.1.1 </span>Some Notes Regarding Dataset</a></span></li></ul></li><li><span><a href="#Loading-data-&-Preview" data-toc-modified-id="Loading-data-&-Preview-0.2"><span class="toc-item-num">0.2 </span>Loading data & Preview</a></span></li><li><span><a href="#Fix-missing-ages" data-toc-modified-id="Fix-missing-ages-0.3"><span class="toc-item-num">0.3 </span>Fix missing ages</a></span></li><li><span><a href="#Questions-in-mind" data-toc-modified-id="Questions-in-mind-0.4"><span class="toc-item-num">0.4 </span>Questions in mind</a></span></li><li><span><a href="#Question-1---Did-passenger-class-made-any-difference-to-his-survival?" data-toc-modified-id="Question-1---Did-passenger-class-made-any-difference-to-his-survival?-0.5"><span class="toc-item-num">0.5 </span>Question 1 - Did passenger class made any difference to his survival?</a></span></li><li><span><a href="#Question-2---Which-gender-had-more-survival?" data-toc-modified-id="Question-2---Which-gender-had-more-survival?-0.6"><span class="toc-item-num">0.6 </span>Question 2 - Which gender had more survival?</a></span></li><li><span><a href="#Question-3---Person-travelling-with-others-had-more-survival-possibility?" data-toc-modified-id="Question-3---Person-travelling-with-others-had-more-survival-possibility?-0.7"><span class="toc-item-num">0.7 </span>Question 3 - Person travelling with others had more survival possibility?</a></span></li><li><span><a href="#Question-4---Which-age-group-had-a-better-chance-of-survival?" data-toc-modified-id="Question-4---Which-age-group-had-a-better-chance-of-survival?-0.8"><span class="toc-item-num">0.8 </span>Question 4 - Which age group had a better chance of survival?</a></span></li><li><span><a href="#Question-5---What-was-male-and-female-survival-per-class-and-by-age?" 
data-toc-modified-id="Question-5---What-was-male-and-female-survival-per-class-and-by-age?-0.9"><span class="toc-item-num">0.9 </span>Question 5 - What was male and female survival per class and by age?</a></span><ul class="toc-item"><li><span><a href="#Male-and-Female-per-Pclass" data-toc-modified-id="Male-and-Female-per-Pclass-0.9.1"><span class="toc-item-num">0.9.1 </span>Male and Female per Pclass</a></span></li><li><span><a href="#Male-and-Female-Survival-per-Pclass,-and-by-Age" data-toc-modified-id="Male-and-Female-Survival-per-Pclass,-and-by-Age-0.9.2"><span class="toc-item-num">0.9.2 </span>Male and Female Survival per Pclass, and by Age</a></span></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-0.9.3"><span class="toc-item-num">0.9.3 </span>Conclusion</a></span></li></ul></li><li><span><a href="#Overall-Conclusion" data-toc-modified-id="Overall-Conclusion-0.10"><span class="toc-item-num">0.10 </span>Overall Conclusion</a></span><ul class="toc-item"><li><span><a href="#Findings" data-toc-modified-id="Findings-0.10.1"><span class="toc-item-num">0.10.1 </span>Findings</a></span></li><li><span><a href="#Limitations" data-toc-modified-id="Limitations-0.10.2"><span class="toc-item-num">0.10.2 </span>Limitations</a></span></li><li><span><a href="#Future-plans" data-toc-modified-id="Future-plans-0.10.3"><span class="toc-item-num">0.10.3 </span>Future plans</a></span></li></ul></li><li><span><a href="#References" data-toc-modified-id="References-0.11"><span class="toc-item-num">0.11 </span>References</a></span></li></ul></li></ul></div>
End of explanation
titanic_df = pd.read_csv("titanic-data.csv", index_col=["PassengerId"])
titanic_df.head()
titanic_df.info()
Explanation: Dataset Description
| Variable Name | Definition |
|------------- |--------------------------------------------------------------------- |
| PassengerId | A unique ID to each Passenger; 1-891 |
| Survived | A boolean variable; 1 - Survived, 0 - Dead |
| Pclass | Ticket Class; 1 - 1st, 2 - 2nd, 3 - 3rd class |
| Name | Passenger Name |
| Sex | Sex of Passenger |
| Age | Age in Years |
| SibSp | Number of Siblings / Spouses Aboard |
| Parch | Number of parents / children aboard the titanic |
| Ticket | Ticket number |
| Fare | Passenger Fare |
| Cabin | Cabin number |
| Embarked | Port of Embarkation; C - Cherbourg, Q - Queenstown, S - Southampton|
<br/>
Some Notes Regarding Dataset
Pclass: A proxy for socio-economic status (SES)
<br/>1st = Upper
<br/>2nd = Middle
<br/>3rd = Lower
<br/><br/>
Age: Age is fractional if less than 1. If the age is estimated, is it in the form of xx.5
<br/><br/>
SibSp: The dataset defines family relations in this way...
<br/>Sibling = brother, sister, stepbrother, stepsister
<br/>Spouse = husband, wife (mistresses and fiancés were ignored)
<br/><br/>
Parch: The dataset defines family relations in this way...
<br/>Parent = mother, father
<br/>Child = daughter, son, stepdaughter, stepson
<br/>Some children travelled only with a nanny, therefore parch=0 for them.
<br/><br/>
<p style="text-align:center">*Source: [Kaggle's - Titanic: Machine Learning from Disaster](https://www.kaggle.com/c/titanic/data)*</p>
<br/><br/>
Loading data & Preview
End of explanation
mean_ages = titanic_df.groupby(['Sex','Pclass'])['Age'].mean()
display(mean_ages)
def replace_nan_age(row):
if pd.isnull(row['Age']):
return mean_ages[row['Sex'], row['Pclass']]
else:
return row['Age']
titanic_df['Age'] = titanic_df.apply(replace_nan_age, axis=1)
titanic_df.info()
Explanation: Dataset summary above shows that there are 891 entries.
However, from above we can also see that we have missing values in the Age, Cabin and Embarked columns.
Missing values of Cabin and Embarked will not be fixed - because no questions are based on these factors.
Missing values of Age will be fixed now - because they are involved in various questions and analysis below.
Fix missing ages
To review the data by distributions, and to tackle various questions, we first need to deal with this issue of missing ages.
If we assume that the missing ages are distributed similarly to the values that are present, then we can substitute values that represent the existing distribution.
For this we can replace the missing values with the mean.
To have the best representative values populated, we will take the mean based on Sex and Pclass.
In other words, the mean of ages for each Sex within each Pclass; when replacing a missing age, these two factors will be taken into consideration to use the related mean of ages.
End of explanation
titanic_df.describe()
titanic_df.Parch.hist()
plt.xlabel('Parch')
plt.ylabel('Passengers')
plt.title('Number of parents / children aboard')
titanic_df.SibSp.hist()
plt.xlabel('SibSp')
plt.ylabel('Passengers')
plt.title('Number of Siblings / Spouses aboard')
Explanation: From the above we can see that the missing ages have been filled.
End of explanation
## SUBSET DATAFRAME TO JUST THE REQUIRED DATA
survived_plass_df = titanic_df[['Survived', 'Pclass']] # works - just have to say the columns required
survived_plass_df.head()
## GROUP DATA TO CALCULATE SURVIVED & TOTAL BY PCLASS
## calculate survived by pclass
survived_by_pclass = survived_plass_df.groupby(['Pclass']).sum()
total_by_pclass = survived_plass_df.groupby(['Pclass']).count()
# total are showed as survived - so change to column name Total
total_by_pclass.rename(columns = {'Survived':'Total'}, inplace = True)
# merge separate data into one dataframe
survived_total_by_pclass = pd.merge(survived_by_pclass, total_by_pclass, left_index=True, right_index=True) # merge by index
survived_total_by_pclass
percent_survived = (survived_total_by_pclass['Survived'] / survived_total_by_pclass['Total']) * 100
survived_total_by_pclass['Percentage'] = percent_survived
survived_total_by_pclass
x = survived_total_by_pclass.index.values
ht = survived_total_by_pclass.Total
hs = survived_total_by_pclass.Survived
pht = plt.bar(x, ht)
phs = plt.bar(x, hs)
plt.xticks(x, x)
plt.xlabel('Pclass')
plt.ylabel('Passengers')
plt.title('Survivors by Class')
plt.legend([pht,phs],['Died', 'Survived'])
Explanation: From the above we notice
* Oldest passenger was 80 years old
* Youngest passenger was about 5 months old
* Average age of passengers was 29.32 - but note this also has missing ages
* Mean survival is 0.3838
* Max fare charged was $512.33
* Maximum number of Siblings / Spouses were 8
* Maximum number of Parent / Child were 6
Questions in mind
Did passenger class make any difference to survival?
Which gender had more survival?
Person travelling with others had more survival possibility?
Which age group had better chance of survival?
What was male and female survival per class and by age?
Question 1 - Did passenger class make any difference to survival?
End of explanation
## CALCULATE SURVIVED AND TOTAL BY SEX
# groupby Sex
group_by_sex = titanic_df.groupby('Sex')
# calculate survived by sex
survived_by_sex = group_by_sex['Survived'].sum()
survived_by_sex.name = 'Survived'
display(survived_by_sex)
# calculate total by sex
total_by_sex = group_by_sex['Survived'].size()
total_by_sex.name = 'Total'
display(total_by_sex)
# concat the separate results into one dataframe
survived_total_by_sex = pd.concat([survived_by_sex, total_by_sex], axis=1)
survived_total_by_sex
percent_survived = (survived_total_by_sex['Survived'] / survived_total_by_sex['Total']) * 100
survived_total_by_sex['Percentage'] = percent_survived
survived_total_by_sex
Explanation: Conclusion
As can be seen from the visualization and also from the dataframe table above, 1st class passengers had the highest rate of survival, then 2nd class passengers, and the lowest survival rate was for 3rd class passengers. A large number of passengers were travelling in 3rd class (491), but only 24.24% survived.
Question 2 - Which gender had more survival?
End of explanation
x = range(len(survived_total_by_sex.index.values))
ht = survived_total_by_sex.Total
hs = survived_total_by_sex.Survived
pht = plt.bar(x, ht)
phs = plt.bar(x, hs)
plt.xticks(x, survived_total_by_sex.index.values)
plt.xlabel('Sex')
plt.ylabel('Passengers')
plt.title('Survivors by Gender')
plt.legend([pht,phs],['Died', 'Survived'])
Explanation: Now lets visualize it
End of explanation
is_not_alone = (titanic_df.SibSp + titanic_df.Parch) >= 1
passengers_not_alone = titanic_df[is_not_alone]
is_alone = (titanic_df.SibSp + titanic_df.Parch) == 0
passengers_alone = titanic_df[is_alone]
print('Not alone - describe')
display(passengers_not_alone.describe())
print('Alone - describe')
display(passengers_alone.describe())
passengers_not_alone.Age.hist(label='Not alone')
passengers_alone.Age.hist(label='Alone', alpha=0.6)
plt.xlabel('Age')
plt.ylabel('Passengers')
plt.legend(loc='best')
plt.title('Alone & Not Alone Passenger\'s Ages')
Explanation: Conclusion
From the visualization and the percentage of survival from the dataframe printout above, we can see that females had a very high rate of survival. The female survival rate was 74.3%, and the male survival rate was 18.9%, so the female survival rate was about 4 times that of males.
It can be concluded that females were given preference in rescue operations, and males must have sacrificed themselves to let the females survive.
Question 3 - Person travelling with others had more survival possibility?
Let's first review the distribution of those who were alone, and those who were in company.
End of explanation
notalone = np.where((titanic_df.SibSp + titanic_df.Parch) >= 1, 'Not Alone', 'Alone')
loneliness_summary = titanic_df.groupby(notalone, as_index=False)['Survived'].agg([np.sum, np.size])
loneliness_summary = loneliness_summary.rename(columns={'sum':'Survived', 'size':'Total'})
loneliness_summary
loneliness_summary['Percent survived'] = (loneliness_summary.Survived / loneliness_summary.Total) * 100
loneliness_summary
Explanation: From the above distribution we can see that
* Those in age range of 0-10, that is kids, were not alone - which makes sense
* There however is one kid age 5 who was alone
* There was an 80 year old person also who was alone
* 537 passengers were alone, whereas 354 were in company
* Except for age group 0-10, for all other age groups, those travelling alone outnumbered those travelling in company
Now lets review these by their survival
End of explanation
x = range(len(loneliness_summary.index.values))
ht = loneliness_summary.Total
hs = loneliness_summary.Survived
pht = plt.bar(x, ht)
phs = plt.bar(x, hs)
plt.xticks(x, loneliness_summary.index.values)
plt.xlabel('Alone / Not Alone')
plt.ylabel('Passengers')
plt.title('Survivors by Alone / Not Alone')
plt.legend([pht,phs],['Died', 'Survived'])
Explanation: Now lets visualize
End of explanation
male_ages = (titanic_df[titanic_df.Sex == 'male'])['Age']
male_ages.describe()
female_ages = (titanic_df[titanic_df.Sex == 'female'])['Age']
female_ages.describe()
male_ages.hist(label='Male')
female_ages.hist(label='Female')
plt.xlabel('Age')
plt.ylabel('Passengers')
plt.title('Male & Female passenger ages')
plt.legend(loc='best')
Explanation: Conclusion
The percentages and visualizations above clearly indicate that passengers who had company had a higher survival rate.
Question 4 - Which age group had a better chance of survival?
First lets review gender age distribution
End of explanation
def age_group(age):
if age >= 80:
return '80-89'
if age >= 70:
return '70-79'
if age >= 60:
return '60-69'
if age >= 50:
return '50-59'
if age >= 40:
return '40-49'
if age >= 30:
return '30-39'
if age >= 20:
return '20-29'
if age >= 10:
return '10-19'
if age >= 0:
return '0-9'
titanic_df['AgeGroup'] = titanic_df.Age.apply(age_group)
titanic_df.head()
age_group_summary = titanic_df.groupby(['AgeGroup'], as_index=False)['Survived'].agg([np.sum, np.size])
age_group_summary = age_group_summary.rename(columns={'sum':'Survived', 'size':'Total'})
age_group_summary
x = range(len(age_group_summary.index.values))
ht = age_group_summary.Total
hs = age_group_summary.Survived
pht = plt.bar(x, ht)
phs = plt.bar(x, hs)
plt.xticks(x, age_group_summary.index.values)
plt.xlabel('Age groups')
plt.ylabel('Passengers')
plt.title('Survivors by Age group')
plt.legend([pht,phs],['Died', 'Survived'])
age_group_summary['SurvivedPercent'] = (age_group_summary.Survived / age_group_summary.Total) * 100
age_group_summary['DiedPercent'] = ((age_group_summary.Total - age_group_summary.Survived) / age_group_summary.Total) * 100
age_group_summary
Explanation: From above distribution, we can see that:
* For every age group the number of females was less than number of males
* The age of oldest female was 63, whereas age of oldest male was 80
Now lets do survival analysis by the age group
End of explanation
sex_agegroup_summary = titanic_df.groupby(['Sex','AgeGroup'], as_index=False)['Survived'].mean()
sex_agegroup_summary
male_agegroup_summary = sex_agegroup_summary[sex_agegroup_summary['Sex'] == 'male']
male_agegroup_summary
female_agegroup_summary = sex_agegroup_summary[sex_agegroup_summary['Sex'] == 'female']
female_agegroup_summary
age_group = titanic_df.AgeGroup.unique()
age_labels = sorted(age_group)
print(age_labels)
ax = sns.barplot(x='AgeGroup', y='Survived', data=titanic_df, hue='Sex', order=age_labels)
ax.set_title('Survivors by Gender by Age groups')
Explanation: From the above visualization and percentages we can see that most survivors were from 20-29 age group.
But interestingly survival percentage of 0-9 age group is best - at 61.29%.
Also, above we have seen that females had a better survival rate - so these survival rates must be a mix of male and female survival rates - and hence, to have a better view, the gender aspect should also be taken into consideration.
End of explanation
sns.swarmplot(x='Pclass', y='Age', data=titanic_df, hue='Sex', dodge=True).set_title('Male and Female Passenger Ages by Class')
Explanation: Conclusion
From the proportions above and the visualization, taking into consideration the gender and age group, it is clearly visible that females and children were given preference in rescue operations by the other male passengers. In the 0-9 age group, both male and female children had a very high rate of survival.
Question 5 - What was male and female survival per class and by age?
Male and Female per Pclass
Lets review the males and females, per the passenger classes
End of explanation
sns.boxplot(x='Pclass', y='Age', data=titanic_df, hue='Sex').set_title('Comparison of Male and Female Passenger Ages by Class')
Explanation: Above we can see that, compared to first and second class, there was a large number of passengers in third class.
In particular, males were present in large numbers ... by the look of the swarm, they were concentrated in the 18 to 32 age range.
To understand the age distribution of males and females in the different classes, the better plot is a box plot.
End of explanation
def scatter(passengers, marker='o', legend_prefix=''):
survived = passengers[passengers.Survived == 1]
died = passengers[passengers.Survived == 0]
x = survived.Age
y = survived.Fare
plt.scatter(x, y, c='blue', alpha=0.5, marker=marker, label=legend_prefix + ' Survived')
x = died.Age
y = died.Fare
plt.scatter(x, y, c='red', alpha=0.5, marker=marker, label=legend_prefix + ' Died')
def scatter_by_class(pclass):
class_passengers = titanic_df[titanic_df.Pclass == pclass]
male_passengers = class_passengers[class_passengers.Sex == 'male']
female_passengers = class_passengers[class_passengers.Sex == 'female']
scatter(male_passengers, marker='o', legend_prefix='Male')
scatter(female_passengers, marker='^', legend_prefix='Female')
plt.legend(bbox_to_anchor=(0,1), loc='best') # bbox - to move legend out of plot/scatter
plt.xlabel('Age')
plt.ylabel('Fare')
plt.title('Gender survival by Age, for Pclass = ' + str(pclass))
scatter_by_class(1)
scatter_by_class(2)
scatter_by_class(3)
Explanation: From the above we can make out that the mean age of males and females in 3rd class was less than that of males and females in 2nd and 1st class.
The highest mean age of males was in 1st class.
But this plot only gives an idea of the distribution of ages of males and females per class.
Now let's try to understand the survival of males and females, per class
Male and Female Survival per Pclass, and by Age
End of explanation
def sns_scatter_by_class(pclass):
fg = sns.FacetGrid(titanic_df[titanic_df['Pclass'] == pclass],
col='Sex',
col_order=['male', 'female'],
hue='Survived',
hue_kws=dict(marker=['v', '^']),
size=6,
palette='Set1')
fg = (fg.map(plt.scatter, 'Age', 'Fare', edgecolor='w', alpha=0.7, s=80).add_legend())
plt.subplots_adjust(top=0.9)
fg.fig.suptitle('Gender survival by Age, for CLASS {}'.format(pclass))
# plotted separately because male and female data in same scatter plot difficult to understand comparitive
sns_scatter_by_class(1)
sns_scatter_by_class(2)
sns_scatter_by_class(3)
Explanation: The above three scatter plots give a view of male and female age and survival in each of the classes.
But for better clarity and understanding, we can separate the scatter plots for male and female, per class.
This is what we will do next.
End of explanation
sns.barplot(x='Pclass', y='Survived', data=titanic_df, hue='Sex').set_title('Gender Survival by Class')
Explanation: Conclusion
From the scatter plots just above we have a lot of clarity about the male and female age spread, and survival.
We can notice the following
* Females in first and second class were mostly all saved/survived.
* In first and second class, male and female kids (age group 0-10) almost all survived
* In third class the survival rate of females was also higher than that of males - but female survival was lower compared to the female survival in 1st and 2nd class
We can confirm the above observation by following barplot - showing the rate of survival by class, by sex
End of explanation |
206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
27/10
Sorting and searching
Anonymous functions.
Exceptions.
Sorting lists
Lists can be sorted easily using the sorted function
Step1: But how do we sort it from highest to lowest? <br>
Simple, we interrogate the function a little
Step2: And what if what I want to sort is a list of records? <br>
We can pass it a function that knows how to compare those records, or one that knows how to return the information it needs to compare.
Step3: Searching in lists
To find out whether an element is in a list, it is enough to use the in operator
Step4: It is also very easy to find out whether an element is not in the list
Step5: Instead, if what we want to know is where the number 3 is in the list, we use
Step6: Anonymous functions
Until now we gave a name to every function we created at the moment of creating it, but when we have to create functions that are only one line long and are not used in many places, we can use lambda functions
Step7: Although they are not functions you use every day, they are often used when a function receives another function as a parameter (functions are a data type, so they can be assigned to variables and can therefore also be parameters).
For example, to sort the students by student ID (padron) we could use
Step9: Another example could be implementing a binary search that allows searching both ascending and descending lists
Step10: Exceptions
An exception is the interpreter's way of indicating to the programmer and/or user that an error has occurred. If the exception is not handled by the developer, it reaches the user and abruptly ends the execution of the program. <br>
For example
Step11: But there is no need to be afraid of exceptions; you just have to keep them in mind and handle them in case they occur
Step12: But suppose we implement the rule of three as follows
Step13: Instead, if we pass 0 in place of x
Step14: Here we can see the whole traceback or stack trace, which is how the different functions called each other until we reached the error. <br>
But it is not good for the user to see this kind of exception directly, so we can handle them at different points. They can be handled immediately where the error occurs, as shown before, or anywhere along this stack trace. <br>
In the case of regla_de_tres it is not convenient to put the try/except around the x/y line, because at that point we do not have all the information we need to properly inform the user, so we can put it in
Step15: But in this case it still prints 0, so if we want, we can put the try/except even higher up the stack trace
Step16: Every case is different and there is no single ideal place to catch the exception; it is up to the developer to decide where it is best to put it for each problem. <br>
Moreover, a single line can raise different exceptions, so catching one particular exception type does not guarantee that the program cannot raise an error on that supposedly safe line
Step17: In those cases we can catch more than one exception as follows
Step18: We can even make both errors show the same message by catching both exceptions together
Step19: Exception hierarchy
There is an <a href="https
Step20: And also as
Step21: Although you can always use Exception instead of the specific exception type you expect, it is not good programming practice, since it can hide unwanted errors. For example, a syntax error.
Also, when an exception is raised in the try block, the interpreter starts searching among all the except clauses for one that matches the error that occurred, or one of higher hierarchy. Therefore, it is advisable to always put the most specific exceptions first and the most general ones last
Step22: But then, why not put that code inside the try-except? Because maybe we do not want the except clauses to catch what runs in that block of code
Step23: Raising exceptions
So far we have seen how to catch an error and work with it without the program ending abruptly, but in some cases we ourselves will want to raise an exception. For that, we will use the reserved word raise
Step24: Creating exceptions
Pero así como podemos usar las excepciones estándares, también podemos crear nuestras propias excepciones | Python Code:
lista_de_numeros = [1, 6, 3, 9, 5, 2]
lista_ordenada = sorted(lista_de_numeros)
print lista_ordenada
Explanation: 27/10
Sorting and searching
Anonymous functions.
Exceptions.
Sorting lists
Lists can be sorted easily using the sorted function:
End of explanation
lista_de_numeros = [1, 6, 3, 9, 5, 2]
print sorted(lista_de_numeros, reverse=True)
Explanation: But how do we sort it from highest to lowest? <br>
Simple, we interrogate the function a little:
```Python
print sorted.__doc__
sorted(iterable, cmp=None, key=None, reverse=False) --> new sorted list
```
Then, just passing the *reverse* parameter as `True` should be enough:
End of explanation
import random
def crear_alumnos(cantidad_de_alumnos=5):
nombres = ['Javier', 'Pablo', 'Ramiro', 'Lucas', 'Carlos']
apellidos = ['Saviola', 'Aimar', 'Funes Mori', 'Alario', 'Sanchez']
alumnos = []
for i in range(cantidad_de_alumnos):
a = {
'nombre': '{}, {}'.format(random.choice(apellidos), random.choice(nombres)),
'padron': random.randint(90000, 100000),
'nota': random.randint(4, 10)
}
alumnos.append(a)
return alumnos
def imprimir_curso(lista):
for idx, x in enumerate(lista, 1):
print ' {pos:2}. {padron} - {nombre}: {nota}'.format(pos=idx, **x)
def obtener_padron(alumno):
return alumno['padron']
def ordenar_por_padron(alumno1, alumno2):
if alumno1['padron'] < alumno2['padron']:
return -1
elif alumno2['padron'] < alumno1['padron']:
return 1
else:
return 0
curso = crear_alumnos()
print 'La lista tiene los alumnos:'
imprimir_curso(curso)
lista_ordenada = sorted(curso, key=obtener_padron)
print 'Y la lista ordenada por padrón:'
imprimir_curso(lista_ordenada)
otra_lista_ordenada = sorted(curso, cmp=ordenar_por_padron)
print 'Y la lista ordenada por padrón:'
imprimir_curso(otra_lista_ordenada)
Explanation: And what if what I want to sort is a list of records? <br>
We can pass it a function that knows how to compare those records, or one that knows how to return the information it needs to compare.
End of explanation
lista = [11, 4, 6, 1, 3, 5, 7]
if 3 in lista:
print '3 esta en la lista'
else:
print '3 no esta en la lista'
if 15 in lista:
print '15 esta en la lista'
else:
print '15 no esta en la lista'
Explanation: Searching in lists
To find out whether an element is in a list, it is enough to use the in operator:
End of explanation
lista = [11, 4, 6, 1, 3, 5, 7]
if 3 not in lista:
print '3 NO esta en la lista'
else:
print '3 SI esta en la lista'
Explanation: It is also very easy to find out whether an element is not in the list:
End of explanation
lista = [11, 4, 6, 1, 3, 5, 7]
pos = lista.index(3)
print 'El 3 se encuentra en la posición', pos
pos = lista.index(15)
print 'El 15 se encuentra en la posición', pos
Explanation: Instead, if what we want to know is where the number 3 is in the list, we use:
End of explanation
help("lambda")
mi_funcion = lambda x, y: x+y
resultado = mi_funcion(1,2)
print resultado
Explanation: Anonymous functions
Until now we gave a name to every function we created at the moment of creating it, but when we have to create functions that are only one line long and are not used in many places, we can use lambda functions:
End of explanation
curso = crear_alumnos(15)
print 'Curso original'
imprimir_curso(curso)
lista_ordenada = sorted(curso, key=lambda x: (-x['nota'], x['padron']))
print 'Curso ordenado'
imprimir_curso(lista_ordenada)
Explanation: Although they are not functions you use every day, they are often used when a function receives another function as a parameter (functions are a data type, so they can be assigned to variables and can therefore also be parameters).
For example, to sort the students by student ID (padron) we could use:
Python
sorted(curso, key=lambda x: x['padron'])
Now, if I want to sort the previous list by descending grade and, in case of a tie, by student ID, we could use:
End of explanation
es_mayor = lambda n1, n2: n1 > n2
es_menor = lambda n1, n2: n1 < n2
def binaria(cmp, lista, clave):
    """Binaria searches the given list for the requested key. Binary search requires
    the list to be sorted, but not in a particular direction. For that reason the
    function also receives a comparison function that tells it which way to go.
    If the list is sorted in ascending order, the function passed in must return
    True when the first value is greater than the second, and False otherwise.
    If the list is sorted in descending order, the function passed in must return
    True when the first value is less than the second, and False otherwise.
    """
min = 0
max = len(lista) - 1
centro = (min + max) / 2
while (lista[centro] != clave) and (min < max):
if cmp(lista[centro], clave):
max = centro - 1
else:
min = centro + 1
centro = (min + max) / 2
if lista[centro] == clave:
return centro
else:
return -1
print binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8)
print binaria(es_menor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8)
print binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 123)
print binaria(es_menor, [9, 8, 7, 6, 5, 4, 3, 2, 1], 6)
Explanation: Another example could be implementing a binary search that allows searching both ascending and descending lists:
End of explanation
print 1/0
Explanation: Exceptions
An exception is the interpreter's way of indicating to the programmer and/or user that an error has occurred. If the exception is not handled by the developer, it reaches the user and abruptly ends the execution of the program. <br>
For example:
End of explanation
dividendo = 1
divisor = 0
print 'Intentare hacer la división de %d/%d' % (dividendo, divisor)
try:
resultado = dividendo / divisor
print resultado
except ZeroDivisionError:
print 'No se puede hacer la división ya que el divisor es 0.'
Explanation: But there is no need to be afraid of exceptions; you just have to keep them in mind and handle them in case they occur:
End of explanation
def dividir(x, y):
return x/y
def regla_de_tres(x, y, z):
return dividir(z*y, x)
# Si de 28 alumnos, aprobaron 15, el porcentaje de aprobados es de...
porcentaje_de_aprobados = regla_de_tres(28, 15, 100)
print 'Porcentaje de aprobados: %0.2f %%' % porcentaje_de_aprobados
Explanation: But suppose we implement the rule of three as follows:
End of explanation
resultado = regla_de_tres(0, 13, 100)
print 'Porcentaje de aprobados: %0.2f %%' % resultado
Explanation: Instead, if we pass 0 in place of x:
End of explanation
def dividir(x, y):
return x/y
def regla_de_tres(x, y, z):
resultado = 0
try:
resultado = dividir(z*y, x)
except ZeroDivisionError:
print 'No se puede calcular la regla de tres porque el divisor es 0'
return resultado
print regla_de_tres(0, 1, 2)
Explanation: Here we can see the whole traceback or stack trace, which is how the different functions called each other until we reached the error. <br>
But it is not good for the user to see this kind of exception directly, so we can handle them at different points. They can be handled immediately where the error occurs, as shown before, or anywhere along this stack trace. <br>
In the case of regla_de_tres it is not convenient to put the try/except around the x/y line, because at that point we do not have all the information we need to properly inform the user, so we can put it in:
End of explanation
def dividir(x, y):
return x/y
def regla_de_tres(x, y, z):
return dividir(z*y, x)
try:
print regla_de_tres(0, 1, 2)
except ZeroDivisionError:
print 'No se puede calcular la regla de tres porque el divisor es 0'
Explanation: But in this case it still prints 0, so if we want, we can put the try/except even higher up the stack trace:
End of explanation
def dividir_numeros(x, y):
try:
resultado = x/y
print 'El resultado es: %s' % resultado
except ZeroDivisionError:
print 'ERROR: Ha ocurrido un error por mezclar tipos de datos'
dividir_numeros(1, 0)
dividir_numeros(10, 2)
dividir_numeros("10", 2)
Explanation: Every case is different and there is no single ideal place to catch the exception; it is up to the developer to decide where it is best to put it for each problem. <br>
Moreover, a single line can raise different exceptions, so catching one particular exception type does not guarantee that the program cannot raise an error on that supposedly safe line:
Catching multiple exceptions
In some cases we are aware that the code can raise an exception such as ZeroDivisionError, but that may not be enough:
End of explanation
def dividir_numeros(x, y):
try:
resultado = x/y
print 'El resultado es: %s' % resultado
except TypeError:
print 'ERROR: Ha ocurrido un error por mezclar tipos de datos'
except ZeroDivisionError:
print 'ERROR: Ha ocurrido un error de división por cero'
except Exception:
print 'ERROR: Ha ocurrido un error inesperado'
dividir_numeros(1, 0)
dividir_numeros(10, 2)
dividir_numeros("10", 2)
Explanation: In those cases we can catch more than one exception as follows:
End of explanation
def dividir_numeros(x, y):
try:
resultado = x/y
print 'El resultado es: %s' % resultado
except (ZeroDivisionError, TypeError):
print 'ERROR: No se puede calcular la división'
dividir_numeros(1, 0)
dividir_numeros(10, 2)
dividir_numeros("10", 2)
Explanation: Moreover, if we want both errors to show the same message, we can catch both exceptions together:
End of explanation
try:
print 1/0
except ZeroDivisionError:
print 'Ha ocurrido un error de división por cero'
Explanation: Exception hierarchy
There is an <a href="https://docs.python.org/2/library/exceptions.html">exception hierarchy</a>, so if you know that some kind of error may occur but you do not know exactly which exception it will be, you can always catch an exception that sits higher in the hierarchy:
<img src="excepciones.png"/>
So the division-by-zero error can be handled like this:
End of explanation
try:
print 1/0
except Exception:
print 'Ha ocurrido un error inesperado'
Explanation: And also like this:
End of explanation
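As an extra illustration of this hierarchy (an addition, not part of the original notebook): ZeroDivisionError is a subclass of ArithmeticError, which in turn derives from Exception, so catching the intermediate class also works.
```Python
# Sketch: ZeroDivisionError -> ArithmeticError -> Exception
try:
    print(1 / 0)
except ArithmeticError:
    # ArithmeticError is a parent class of ZeroDivisionError, so it catches it too
    print('Caught an ArithmeticError')

# The full inheritance chain can be inspected directly:
print(ZeroDivisionError.__mro__)
```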
def dividir_numeros(x, y):
try:
resultado = x/y
print 'El resultado es {}'.format(resultado)
except ZeroDivisionError:
print 'Error: División por cero'
else:
print 'Este mensaje se mostrará sólo si no ocurre ningún error'
finally:
print 'Este bloque de código se muestra siempre'
dividir_numeros(1, 0)
print '-------------'
dividir_numeros(10, 2)
Explanation: Although you can always catch Exception instead of the specific exception type you expect, it is not good programming practice because it can hide unwanted errors. For example, a syntax error.
Also, when an exception is raised in the try block, the interpreter starts searching through all the except clauses for one that matches the error that occurred, or one that is higher in the hierarchy. Therefore, it is advisable to always put the most specific exceptions first and the most general ones last:
Python
def dividir_numeros(x, y):
try:
resultado = x/y
print 'El resultado es: %s' % resultado
except TypeError:
print 'ERROR: Ha ocurrido un error por mezclar tipos de datos'
except ZeroDivisionError:
print 'ERROR: Ha ocurrido un error de división por cero'
except Exception:
print 'ERROR: Ha ocurrido un error inesperado'
If the error is not caught by any clause, it propagates in the same way as if nothing had been put there.
Other clauses for exception handling
Besides the try and except clauses there are others related to exceptions that let us better control the program flow:
* else: used to define a block of code that will run only if no error occurred.
* finally: used to define a block of code that will always run, regardless of whether an exception was raised or not.
End of explanation
def dividir_numeros(x, y):
try:
resultado = x/y
print 'El resultado es {}'.format(resultado)
except ZeroDivisionError:
print 'Error: División por cero'
else:
print 'Ahora hago que ocurra una excepción'
print 1/0
finally:
print 'Este bloque de código se muestra siempre'
dividir_numeros(1, 0)
print '-------------'
dividir_numeros(10, 2)
Explanation: But then, why not put that code inside the try-except? Because we may not want the except clauses to catch whatever runs in that block of code:
End of explanation
def dividir_numeros(x, y):
if y == 0:
raise Exception('Error de división por cero')
resultado = x/y
print 'El resultado es {0}'.format(resultado)
try:
dividir_numeros(1, 0)
except ZeroDivisionError as e:
print 'ERROR: División por cero'
except Exception as e:
print 'ERROR: ha ocurrido un error del tipo Exception'
print '----------'
dividir_numeros(1, 0)
Explanation: Raising exceptions
So far we have seen how to catch an error and work with it without the program ending abruptly, but in some cases we are the ones who will want to raise an exception. For that, we use the reserved word raise:
End of explanation
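As a small addition (not in the original notebook), the message passed to raise can be recovered in the handler by binding the exception object with as e:
```Python
try:
    raise Exception('Error de división por cero')
except Exception as e:
    # str(e) returns the message that was passed to the exception constructor
    print('Caught: ' + str(e))
```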
class ExcepcionDeDivisionPor2(Exception):
def __str__(self):
return 'ERROR: No se puede dividir por dos'
def dividir_numeros(x, y):
if y == 2:
raise ExcepcionDeDivisionPor2()
resultado = x/y
try:
dividir_numeros(1, 2)
except ExcepcionDeDivisionPor2:
print 'No se puede dividir por 2'
dividir_numeros(1, 2)
Explanation: Creating exceptions
But just as we can use the standard exceptions, we can also create our own exceptions:
```Python
class MiPropiaExcepcion(Exception):
def __str__(self):
return 'Mensaje del error'
```
For example:
End of explanation |
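As a further, hypothetical variation on the example above, a custom exception can also carry data by defining its own __init__, which is useful when the handler needs context about what went wrong (the class name ExcepcionDeDivision below is invented for illustration):
```Python
class ExcepcionDeDivision(Exception):
    def __init__(self, divisor):
        self.divisor = divisor

    def __str__(self):
        return 'ERROR: no se puede dividir por %s' % self.divisor

try:
    raise ExcepcionDeDivision(2)
except ExcepcionDeDivision as e:
    print(e)           # uses __str__
    print(e.divisor)   # the handler can inspect the stored data
```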
207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mediation Analysis
Written by Jin Cheong & Luke Chang
A mediation analysis is conducted when a researcher is interested in the mechanism underlying how variable X has an effect on variable Y. It attempts to make a causal inference that a direct effect might be better explained by an indirect effect through a mediating variable.
Consider the instance below where X has an effect c on Y
Step1: 1) Test effect of X on Y
Step2: 2) Test effect of X on M
Step3: 3) Test effect of X and M on Y
Step4: Show how the effect is broken down to direct and indirect effects
Recall how the overall effect c was decomposed into the indirect effect (a*b) and the direct effect (c')
$c = a \cdot b + c' $
Step5: Run a Sobel Test for Significance of Mediation
One way to test the significance of a mediation is to perform a Sobel test, where the indirect effect (a*b) is divided by an estimated standard error of the two. This assumes that the product would be normally distributed, which may not always be the case.
An alternative method is to bootstrap with replacement on the observed data to generate a 95% confidence interval. You can try this by writing a for-loop that resamples from the data and generates a distribution of the indirect effects (a*b). If the confidence interval does not include 0, it can be considered significant.
Step6: Moderation Analysis
In a moderation analysis, the moderator modifies or changes the relationship between two variables, akin to an interaction term. Moderation is slightly different from an interaction due to the additional constraint that there is a causal relationship from X to Y, BUT not from Z to Y. Therefore, a moderation implies an interaction exists but an interaction does not imply a moderation.
Here is a schematic representation of a moderation relationship.
This diagram hypothesizes that Stress has a causal relationship to Depression
but the effect of Stress is different for people with high or low Social Support
<img src="Figures/moderator2.gif" width=500 align='center'>
This can be represented by an interaction,
<img src="Figures/moderator3.jpeg" width=500 align='center'>
The pictures have been retrieved from here
Here are a few examples of moderations. Can you think of more?
1) The effect of compliments on future grades is moderated by growth mindset (Carol Dweck)
2) Effect of favorability on government behavior is moderated by political affiliation.
3) Effect of pressure on performance is moderated by confidence (choking vs boosting).
For more information, look at the homepage of Kenny, who started all this.
Example
Here we examine whether the effect of buying books (Buy) on enjoyment of reading (Enjoy) is moderated by frequency of reading (Read).
The moderation effect exists if there is an interaction of buying and reading on enjoyment.
Step7: Importance of centering variables for interaction
The interaction effect can be VERY different if you don't center your variables | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.api as sm
from scipy import stats
def sobel_test(a, b, se_a, se_b):
'''
Sobel test for significance of mediation
Args:
a: coefficient from X to mediator variable, M
b: coefficient from M to Y
se_a: Standard error of A
se_b: Standard error fo B
Returns:
t: Sobel's test statistic
pval : Two-tailed probability assuming normal distribution
'''
SE = np.sqrt( (a**2)*(se_a**2) + (b**2)*(se_b**2))
t = (a*b) / SE
n = 100000000
pval = stats.t.sf(np.abs(t), n-1)*2
return t, pval
# set random seed so everyone gets same results
np.random.seed(1)
# Determine effects
a = -3 # effect of x to M
b = 3 # effect of M to y
cq = .1 # effect of x on y controlling for M
# Create a random data x
x = np.random.rand(100)
m = x * a + np.random.rand(100)
# Create Y
y = np.dot(np.array([x,m]).T,[cq,b]) + np.random.rand(100)
plt.scatter(x,y)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('X -> Y')
plt.scatter(x,m)
plt.xlabel('X')
plt.ylabel('M')
plt.title('X -> M')
plt.scatter(m,y)
plt.xlabel('M')
plt.ylabel('Y')
plt.title('M -> Y')
Explanation: Mediation Analysis
Written by Jin Cheong & Luke Chang
A mediation analysis is conducted when a researcher is interested in the mechanism underlying how variable X has an effect on variable Y. It attempts to make a causal inference that a direct effect might be better explained by an indirect effect through a mediating variable.
Consider the instance below where X has an effect c on Y:
1) $Y = \beta_1 + c \cdot X $
In this model, there may be a third variable M which mediates the effect of X on Y. In other words, the variable M is partially responsible for the effect X has on Y.
To conduct a mediation analysis one estimates two additional models: 1) the effect of X on M, and 2) the effect of X and M on Y.
2) $M = \beta_2 + a \cdot X $
3) $Y = \beta_3 + c' \cdot X + b \cdot M $
Now the direct effect of X on Y, denoted as C,can be broken down into two parts:
$c = a \cdot b + c' $
$ a \cdot b $ is the indirect effect of X on Y via the mediator.
$c'$ is the remaining direct effect of X on Y controlling for M.
This relationship is depicted below. Note that M and Y both also have error included in the model.
Question: Why does X not have error included in the model?
Answer: Because X is only a regressor, not an outcome variable in the models, and standard regression does not estimate error on regressors. See orthogonal regression for a technique that models error on both X and Y.
<img src="Figures/mediation1.png",width=500,align='center'>
Here are a few examples of mediations. Can you think of more?
1) The effect of failure on depressed feelings is mediated by internalization of failure.
2) Effect of CBT treatment is mediated by changes in cognition.
3) Effect of food intake on weight gain is mediated by metabolism.
For more information there is a nice tutorial on mediation by David Kenny, one of the authors of the original mediation paper.
Simulate a mediation
In this section we will simulate a mediation.
This is a case in which the true effect of X on Y is positive, but appears negative without testing for mediation.
End of explanation
X = pd.DataFrame({'Intercept':np.ones(len(x)),'X':x})
lm1 = smf.OLS(y,X).fit()
print lm1.summary()
ec = lm1.params[1] # save total effect c to ec
Explanation: 1) Test effect of X on Y
End of explanation
lm2 = smf.OLS(m,X).fit()
print lm2.summary()
ea = lm2.params[1] # Save the effect of X on M, a, to ea
sea = lm2.bse[1]
Explanation: 2) Test effect of X on M
End of explanation
X['M'] = m
lm3 = smf.OLS(y,X).fit()
print lm3.summary()
ecq,eb = lm3.params[1:3]
seb = lm3.bse[2]
Explanation: 3) Test effect of X and M on Y
End of explanation
print('c : %.2f') % ec
print('a : %.2f') % ea
print('b : %.2f') % eb
print('c\' : %.2f') % ecq
print('Total effect C: %.2f') % ec
print('is decomposed into the indirect(mediated) effect a*b: %.2f') % (ea*eb)
print('plus the direct effect c\': %.2f') % ecq
print('which adds up to %.2f') % (ea*eb+ecq)
Explanation: Show how the effect is broken down to direct and indirect effects
Recall how the overall effect c was decomposed into the indirect effect (a*b) and the direct effect (c')
$c = a \cdot b + c' $
End of explanation
t,p = sobel_test(ea,eb,sea,seb)
print('Sobel\'s test of significance t = %2.2f') % t
print('Two-tailed p-value p = %2.5f ') % p
Explanation: Run a Sobel Test for Significance of Mediation
One way to test the significance of a mediation is to perform a Sobel test, where the indirect effect (a*b) is divided by an estimated standard error of the two. This assumes that the product would be normally distributed, which may not always be the case.
An alternative method is to bootstrap with replacement on the observed data to generate a 95% confidence interval. You can try this by writing a for-loop that resamples from the data and generates a distribution of the indirect effects (a*b). If the confidence interval does not include 0, it can be considered significant.
End of explanation
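The bootstrap alternative mentioned above is left as an exercise in the notebook; a rough sketch (reusing the x, m, y arrays and the numpy/statsmodels imports defined earlier, with an invented number of resamples) might look like this:
```Python
# Bootstrap the indirect effect a*b and report a percentile 95% CI (sketch only)
n_boot = 5000
n = len(x)
indirect = np.zeros(n_boot)
for i in range(n_boot):
    idx = np.random.randint(0, n, n)                       # resample rows with replacement
    X_b = np.column_stack([np.ones(n), x[idx]])
    a_b = smf.OLS(m[idx], X_b).fit().params[1]             # X -> M
    b_b = smf.OLS(y[idx], np.column_stack([X_b, m[idx]])).fit().params[2]  # M -> Y controlling for X
    indirect[i] = a_b * b_b
ci = np.percentile(indirect, [2.5, 97.5])
print('Bootstrap 95%% CI for a*b: [%.2f, %.2f]' % tuple(ci))
```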
df = pd.DataFrame({
'Enjoy':[4, 16, 4, 12, 9, 5, 15, 21, 3, 4, 8, 11, 7, 5, 8, 19, 11, 9, 9, 13, 11, 21, 18, 12, 15, 3, 2, 10, 7, 9, 5, 6, 9, 12, 9, 5, 17, 15, 9, 7, 5, 10, 10, 6, 7, 9, 12, 2, 1, 5, 7, 5, 8, 5, 5, 11, 8, 9, 13, 9, 19, 8, 21, 1, 11, 8, 6, 23, 2, 9, 13, 4, 10, 12, 5, 7, 10, 11, 12, 13],
'Buy':[5, 8, 0, 8, 3, 0, 8, 9, 8, 1, 7, 2, 2, 9, 3, 8, 8, 9, 5, 7, 7, 9, 6, 7, 8, 4, 4, 3, 1, 4, 1, 5, 5, 2, 9, 5, 7, 8, 2, 4, 1, 4, 0, 1, 0, 8, 2, 4, 0, 0, 0, 1, 2, 2, 2, 7, 5, 1, 9, 9, 8, 1, 7, 2, 5, 2, 4, 9, 1, 6, 3, 0, 7, 5, 2, 3, 1, 8, 6, 4],
'Read':[0, 8, 0, 5, 4, 9, 6, 9, 1, 3, 3, 3, 8, 0, 9, 8, 6, 0, 5, 6, 1, 7, 7, 5, 7, 2, 0, 5, 1, 7, 7, 4, 6, 5, 3, 1, 7, 6, 0, 4, 0, 9, 5, 9, 2, 3, 5, 2, 5, 2, 9, 1, 1, 7, 9, 3, 0, 4, 4, 3, 8, 8, 8, 2, 3, 7, 1, 8, 6, 1, 7, 0, 3, 2, 5, 3, 8, 6, 9, 7]
})
Explanation: Moderation Analysis
In a moderation analysis, the moderator modifies or changes the relationship between two variables, akin to an interaction term. Moderation is slightly different from an interaction due to the additional constraint that there is a causal relationship from X to Y, BUT not from Z to Y. Therefore, a moderation implies an interaction exists but an interaction does not imply a moderation.
Here is a schematic representation of a moderation relationship.
This diagram hypothesizes that Stress has a causal relationship to Depression
but the effect of Stress is different for people with high or low Social Support
<img src="Figures/moderator2.gif" width=500 align='center'>
This can be represented by an interaction,
<img src="Figures/moderator3.jpeg" width=500 align='center'>
The pictures have been retrieved from here
Here are a few examples of moderations. Can you think of more?
1) The effect of compliments on future grades is moderated by growth mindset (Carol Dweck)
2) Effect of favorability on government behavior is moderated by political affiliation.
3) Effect of pressure on performance is moderated by confidence (choking vs boosting).
For more information, look at the homepage of Kenny, who started all this.
Example
Here we examine whether the effect of buying books (Buy) on enjoyment of reading (Enjoy) is moderated by frequency of reading (Read).
The moderation effect exists if there is an interaction of buying and reading on enjoyment.
End of explanation
df['Interaction'] = df.Read*df.Buy
np.corrcoef(df.Buy,df.Interaction)
plt.scatter(df.Buy,df.Interaction)
plt.xlabel('Read * Buy')
plt.ylabel('Buy')
center = lambda x: (x - x.mean())
df[['Read_Centered','Buy_Centered']] = df[['Read','Buy']].apply(center)
df['Interaction_Centered'] = df['Read_Centered'] * df['Buy_Centered']
np.corrcoef(df.Buy,df.Interaction_Centered)
plt.scatter(df.Buy,df.Interaction_Centered)
plt.xlabel('Read * Buy')
plt.ylabel('Buy')
mod = smf.ols(formula = "Enjoy ~ Buy + Read + Interaction", data = df).fit()
print mod.summary()
mod = smf.ols(formula = "Enjoy ~ Buy + Read + Interaction_Centered", data = df).fit()
print mod.summary()
Explanation: Importance of centering variables for interaction
The interaction effect can be VERY different if you don't center your variables
End of explanation |
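As a side note (not in the original notebook), if patsy is installed the same centering can be requested directly inside the statsmodels formula with its built-in center() transform, avoiding the hand-made helper columns; this is a sketch under that assumption:
```Python
# Equivalent centered-interaction model expressed in the formula itself
mod_centered = smf.ols(
    formula="Enjoy ~ center(Buy) + center(Read) + center(Buy):center(Read)",
    data=df).fit()
print(mod_centered.summary())
```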
208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright © 2019 The TensorFlow Authors.
Step1: TFX – Introduction to Apache Beam
TFX is designed to be scalable to very large datasets which require substantial resources. Distributed pipeline frameworks such as Apache Beam offer the ability to distribute processing across compute clusters and apply the resources required. Many of the standard TFX components use Apache Beam, and custom components that you may write may also benefit from using Apache Beam for distributed processing.
This notebook introduces the concepts and code patterns for developing with the Apache Beam Python API.
Setup
First, we install the necessary packages, download data, import modules and set up paths.
Install TensorFlow and Apache Beam
Note
Because of some of the updates to packages you must use the button at the bottom of the output of this cell to restart the runtime. Following restart, you should rerun this cell.
Step2: Import packages
We import necessary packages, including Beam.
Step3: Create a Beam Pipeline
Create a pipeline, including a simple PCollection and a ParDo() transform.
A PCollection<T> is an immutable collection of values of type T. A PCollection can contain either a bounded or unbounded number of elements. Bounded and unbounded PCollections are produced as the output of PTransforms (including root PTransforms like Read and Create), and can be passed as the inputs of other PTransforms.
ParDo is the core element-wise transform in Apache Beam, invoking a user-specified function on each of the elements of the input PCollection to produce zero or more output elements, all of which are collected into the output PCollection.
First, use the .run() method.
Step4: Display the structure of this pipeline.
Step5: Next, invoke run inside a with block.
Step6: Exercise 1 — Creating and Running Your Beam Pipeline
Build a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them.
Add a step in the pipeline to square each item.
Display the pipeline.
Warning
Step7: Core Transforms
Beam has a set of core transforms on data that is contained in PCollections. In the cells that follow, explore several core transforms and observe the results in order to develop some understanding and intuition for what each transform does.
Map
The Map transform applies a simple 1-to-1 mapping function over each element in the collection. Map accepts a function that returns a single element for every input element in the PCollection. You can pass functions with multiple arguments to Map. They are passed as additional positional arguments or keyword arguments to the function.
First, compare the results of a ParDo transform and a Map transform.
Step8: GroupByKey
GroupByKey takes a keyed collection of elements and produces a collection where each element consists of a key and all values associated with that key.
GroupByKey is a transform for processing collections of key/value pairs. It’s a parallel reduction operation, analogous to the Shuffle phase of a Map/Shuffle/Reduce-style algorithm. The input to GroupByKey is a collection of key/value pairs that represents a multimap, where the collection contains multiple pairs that have the same key, but different values. Given such a collection, you use GroupByKey to collect all of the values associated with each unique key.
GroupByKey is a good way to aggregate data that has something in common. For example, if you have a collection that stores records of customer orders, you might want to group together all the orders from the same postal code (wherein the “key” of the key/value pair is the postal code field, and the “value” is the remainder of the record).
Step9: Exercise 2 — Group Items by Key
Build a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them.
Add a step in the pipeline to add a key to each item that will indicate whether it is even or odd.
Use GroupByKey to group even items together and odd items together.
Solution
Step10: CoGroupByKey can combine multiple PCollections, assuming every element is a tuple whose first item is the key to join on.
Step11: Combine
Combine is a transform for combining collections of elements or values. Combine has variants that work on entire PCollections, and some that combine the values for each key in PCollections of key/value pairs.
To apply a Combine transform, you must provide the function that contains the logic for combining the elements or values. The combining function should be commutative and associative, as the function is not necessarily invoked exactly once on all values with a given key. Because the input data (including the value collection) may be distributed across multiple workers, the combining function might be called multiple times to perform partial combining on subsets of the value collection. The Beam SDK also provides some pre-built combine functions for common numeric combination operations such as sum, min, and max.
Simple combine operations, such as sums, can usually be implemented as a simple function. More complex combination operations might require you to create a subclass of CombineFn that has an accumulation type distinct from the input/output type.
Step12: Exercise 3 — Combine Items
Start with Beam pipeline you built in the previous exercise
Step13: Flatten
Flatten is a transform for PCollection objects that store the same data type. Flatten merges multiple PCollection objects into a single logical PCollection.
Data encoding in merged collections
By default, the coder for the output PCollection is the same as the coder for the first PCollection in the input PCollectionList. However, the input PCollection objects can each use different coders, as long as they all contain the same data type in your chosen language.
Merging windowed collections
When using Flatten to merge PCollection objects that have a windowing strategy applied, all of the PCollection objects you want to merge must use a compatible windowing strategy and window sizing. For example, all the collections you're merging must all use (hypothetically) identical 5-minute fixed windows or 4-minute sliding windows starting every 30 seconds.
If your pipeline attempts to use Flatten to merge PCollection objects with incompatible windows, Beam generates an IllegalStateException error when your pipeline is constructed.
Step14: Partition
Partition is a transform for PCollection objects that store the same data type. Partition splits a single PCollection into a fixed number of smaller collections.
Partition divides the elements of a PCollection according to a partitioning function that you provide. The partitioning function contains the logic that determines how to split up the elements of the input PCollection into each resulting partition PCollection. The number of partitions must be determined at graph construction time. You can, for example, pass the number of partitions as a command-line option at runtime (which will then be used to build your pipeline graph), but you cannot determine the number of partitions in mid-pipeline (based on data calculated after your pipeline graph is constructed, for instance).
Step15: Side Inputs
In addition to the main input PCollection, you can provide additional inputs to a ParDo transform in the form of side inputs. A side input is an additional input that your DoFn can access each time it processes an element in the input PCollection. When you specify a side input, you create a view of some other data that can be read from within the ParDo transform’s DoFn while processing each element.
Side inputs are useful if your ParDo needs to inject additional data when processing each element in the input PCollection, but the additional data needs to be determined at runtime (and not hard-coded). Such values might be determined by the input data, or depend on a different branch of your pipeline.
Step16: Additional Outputs
While ParDo always produces a main output PCollection (as the return value from apply), you can also have your ParDo produce any number of additional output PCollections. If you choose to have multiple outputs, your ParDo returns all of the output PCollections (including the main output) bundled together.
To emit elements to multiple output PCollections, invoke with_outputs() on the ParDo, and specify the expected tags for the outputs. with_outputs() returns a DoOutputsTuple object. Tags specified in with_outputs are attributes on the returned DoOutputsTuple object. The tags give access to the corresponding output PCollections.
Step17: Branching
A transform does not consume or otherwise alter the input collection – remember that a PCollection is immutable by definition. This means that you can apply multiple transforms to the same input PCollection to create a branching pipeline.
Step18: Composite Transforms
Transforms can have a nested structure, where a complex transform performs multiple simpler transforms (such as more than one ParDo, Combine, GroupByKey, or even other composite transforms). These transforms are called composite transforms. Nesting multiple transforms inside a single composite transform can make your code more modular and easier to understand.
Your composite transform's parameters and return value must match the initial input type and final return type for the entire transform, even if the transform's intermediate data changes type multiple times.
To create a composite transform, create a subclass of the PTransform class and override the expand method to specify the actual processing logic. Then use this transform just as you would a built-in transform.
Step19: Filter
Filter, given a predicate, filters out all elements that don't satisfy that predicate. Filter may also be used to filter based on an inequality with a given value based on the comparison ordering of the element. You can pass functions with multiple arguments to Filter. They are passed as additional positional arguments or keyword arguments to the function. If the PCollection has a single value, such as the average from another computation, passing the PCollection as a singleton accesses that value. If the PCollection has multiple values, pass the PCollection as an iterator. This accesses elements lazily as they are needed, so it is possible to iterate over large PCollections that won't fit into memory.
Note
Step20: Aggregation
Beam uses windowing to divide a continuously updating unbounded PCollection into logical windows of finite size. These logical windows are determined by some characteristic associated with a data element, such as a timestamp. Aggregation transforms (such as GroupByKey and Combine) work on a per-window basis — as the data set is generated, they process each PCollection as a succession of these finite windows.
A related concept, called triggers, determines when to emit the results of aggregation as unbounded data arrives. You can use triggers to refine the windowing strategy for your PCollection. Triggers allow you to deal with late-arriving data, or to provide early results.
Step21: Pipeline I/O
When you create a pipeline, you often need to read data from some external source, such as a file or a database. Likewise, you may want your pipeline to output its result data to an external storage system. Beam provides read and write transforms for a number of common data storage types. If you want your pipeline to read from or write to a data storage format that isn’t supported by the built-in transforms, you can implement your own read and write transforms.
Download example data
Download the sample dataset for use with the cells below.
Step22: Putting Everything Together
Use several of the concepts, classes, and methods discussed above in a concrete example.
Exercise 4 — Reading, Filtering, Parsing, Grouping and Averaging
Write a Beam pipeline that reads the dataset, computes the mean label (the numbers in the last column) for each article category (the third column) and prints out the results.
Hints
Step23: Note that there are many other built-in I/O transforms.
Windowing
As discussed above, windowing subdivides a PCollection according to the timestamps of its individual elements.
Some Beam transforms, such as GroupByKey and Combine, group multiple elements by a common key. Ordinarily, that grouping operation groups all of the elements that have the same key within the entire data set. With an unbounded data set, it is impossible to collect all of the elements, since new elements are constantly being added and may be infinitely many (e.g. streaming data). If you are working with unbounded PCollections, windowing is especially useful.
In the Beam model, any PCollection (including unbounded PCollections) can be subdivided into logical windows. Each element in a PCollection is assigned to one or more windows according to the PCollection's windowing function, and each individual window contains a finite number of elements. Grouping transforms then consider each PCollection's elements on a per-window basis. GroupByKey, for example, implicitly groups the elements of a PCollection by key and window.
Additional information on Beam Windowing is available in the Beam Programming Guide. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright © 2019 The TensorFlow Authors.
End of explanation
!pip install -q -U \
tensorflow==2.0.0 \
apache-beam
Explanation: TFX – Introduction to Apache Beam
TFX is designed to be scalable to very large datasets which require substantial resources. Distributed pipeline frameworks such as Apache Beam offer the ability to distribute processing across compute clusters and apply the resources required. Many of the standard TFX components use Apache Beam, and custom components that you may write may also benefit from using Apache Beam for distributed processing.
This notebook introduces the concepts and code patterns for developing with the Apache Beam Python API.
Setup
First, we install the necessary packages, download data, import modules and set up paths.
Install TensorFlow and Apache Beam
Note
Because of some of the updates to packages you must use the button at the bottom of the output of this cell to restart the runtime. Following restart, you should rerun this cell.
End of explanation
from datetime import datetime
import os
import pprint
import tempfile
import urllib
pp = pprint.PrettyPrinter()
import tensorflow as tf
import apache_beam as beam
from apache_beam import pvalue
from apache_beam.runners.interactive.display import pipeline_graph
import graphviz
print('TensorFlow version: {}'.format(tf.__version__))
print('Beam version: {}'.format(beam.__version__))
Explanation: Import packages
We import necessary packages, including Beam.
End of explanation
first_pipeline = beam.Pipeline()
lines = (first_pipeline
| "Create" >> beam.Create(["Hello", "World", "!!!"]) # PCollection
| "Print" >> beam.ParDo(print)) # ParDo transform
result = first_pipeline.run()
result.state
Explanation: Create a Beam Pipeline
Create a pipeline, including a simple PCollection and a ParDo() transform.
A PCollection<T> is an immutable collection of values of type T. A PCollection can contain either a bounded or unbounded number of elements. Bounded and unbounded PCollections are produced as the output of PTransforms (including root PTransforms like Read and Create), and can be passed as the inputs of other PTransforms.
ParDo is the core element-wise transform in Apache Beam, invoking a user-specified function on each of the elements of the input PCollection to produce zero or more output elements, all of which are collected into the output PCollection.
First, use the .run() method.
End of explanation
def display_pipeline(pipeline):
graph = pipeline_graph.PipelineGraph(pipeline)
return graphviz.Source(graph.get_dot())
display_pipeline(first_pipeline)
Explanation: Display the structure of this pipeline.
End of explanation
with beam.Pipeline() as with_pipeline:
lines = (with_pipeline
| "Create" >> beam.Create(["Hello", "World", "!!!"])
| "Print" >> beam.ParDo(print))
display_pipeline(with_pipeline)
Explanation: Next, invoke run inside a with block.
End of explanation
with beam.Pipeline() as with_pipeline:
lines = (with_pipeline
| "Create" >> beam.Create(range(10 + 1))
| "Square" >> beam.ParDo(lambda x: [x ** 2])
| "Print" >> beam.ParDo(print))
display_pipeline(with_pipeline)
Explanation: Exercise 1 — Creating and Running Your Beam Pipeline
Build a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them.
Add a step in the pipeline to square each item.
Display the pipeline.
Warning: the ParDo() method must either return None or a list.
Solution:
End of explanation
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create([1, 2, 3])
| "Multiply" >> beam.ParDo(lambda number: [number * 2]) # ParDo with integers
| "Print" >> beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create([1, 2, 3])
| "Multiply" >> beam.Map(lambda number: number * 2) # Map with integers
| "Print" >> beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create(["Hello Beam", "This is cool"])
| "Split" >> beam.ParDo(lambda sentence: sentence.split()) # ParDo with strings
| "Print" >> beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create(["Hello Beam", "This is cool"])
| "Split" >> beam.Map(lambda sentence: sentence.split()) # Map with strings
| "Print" >> beam.ParDo(print))
class BreakIntoWordsDoFn(beam.DoFn):
def process(self, element):
return element.split()
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create(["Hello Beam", "This is cool"])
| "Split" >> beam.ParDo(BreakIntoWordsDoFn()) # Apply a DoFn with a process method
| "Print" >> beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create(["Hello Beam", "This is cool"])
| "Split" >> beam.FlatMap(lambda sentence: sentence.split()) # Compare to a FlatMap
| "Print" >> beam.ParDo(print))
Explanation: Core Transforms
Beam has a set of core transforms on data that is contained in PCollections. In the cells that follow, explore several core transforms and observe the results in order to develop some understanding and intuition for what each transform does.
Map
The Map transform applies a simple 1-to-1 mapping function over each element in the collection. Map accepts a function that returns a single element for every input element in the PCollection. You can pass functions with multiple arguments to Map. They are passed as additional positional arguments or keyword arguments to the function.
First, compare the results of a ParDo transform and a Map transform.
End of explanation
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(['apple', 'ball', 'car', 'bear', 'cheetah', 'ant'])
| beam.Map(lambda word: (word[0], word))
| beam.GroupByKey()
| beam.ParDo(print))
Explanation: GroupByKey
GroupByKey takes a keyed collection of elements and produces a collection where each element consists of a key and all values associated with that key.
GroupByKey is a transform for processing collections of key/value pairs. It’s a parallel reduction operation, analogous to the Shuffle phase of a Map/Shuffle/Reduce-style algorithm. The input to GroupByKey is a collection of key/value pairs that represents a multimap, where the collection contains multiple pairs that have the same key, but different values. Given such a collection, you use GroupByKey to collect all of the values associated with each unique key.
GroupByKey is a good way to aggregate data that has something in common. For example, if you have a collection that stores records of customer orders, you might want to group together all the orders from the same postal code (wherein the “key” of the key/value pair is the postal code field, and the “value” is the remainder of the record).
End of explanation
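To make the postal-code example above concrete, here is a small sketch with invented order data:
```Python
# Hypothetical customer orders keyed by postal code
with beam.Pipeline() as pipeline:
    lines = (pipeline
             | beam.Create([('10115', 'order-1'),
                            ('10115', 'order-2'),
                            ('80331', 'order-3')])
             | beam.GroupByKey()   # groups all orders that share a postal code
             | beam.ParDo(print))
```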
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(10 + 1))
| beam.Map(lambda x: ("odd" if x % 2 else "even", x))
| beam.GroupByKey()
| beam.ParDo(print))
Explanation: Exercise 2 — Group Items by Key
Build a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them.
Add a step in the pipeline to add a key to each item that will indicate whether it is even or odd.
Use GroupByKey to group even items together and odd items together.
Solution:
End of explanation
pipeline = beam.Pipeline()
fruits = pipeline | 'Fruits' >> beam.Create(['apple',
'banana',
'cherry'])
countries = pipeline | 'Countries' >> beam.Create(['australia',
'brazil',
'belgium',
'canada'])
def add_key(word):
return (word[0], word)
fruits_with_keys = fruits | "fruits_with_keys" >> beam.Map(add_key)
countries_with_keys = countries | "countries_with_keys" >> beam.Map(add_key)
({"fruits": fruits_with_keys, "countries": countries_with_keys}
| beam.CoGroupByKey()
| beam.Map(print))
pipeline.run()
Explanation: CoGroupByKey can combine multiple PCollections, assuming every element is a tuple whose first item is the key to join on.
End of explanation
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create([1, 2, 3, 4, 5])
| beam.CombineGlobally(sum)
| beam.Map(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create([1, 2, 3, 4, 5])
| beam.combiners.Mean.Globally()
| beam.Map(print))
class AverageFn(beam.CombineFn):
def create_accumulator(self):
return (0.0, 0)
def add_input(self, accumulator, input_):
total, count = accumulator
total += input_
count += 1
return (total, count)
def merge_accumulators(self, accumulators):
totals, counts = zip(*accumulators)
return sum(totals), sum(counts)
def extract_output(self, accumulator):
total, count = accumulator
return total / count if count else float("NaN")
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create([1, 2, 3, 4, 5])
| beam.CombineGlobally(AverageFn())
| beam.Map(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(['bob', 'alice', 'alice', 'bob', 'charlie', 'alice'])
| beam.combiners.Count.PerElement()
| beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(['bob', 'alice', 'alice', 'bob', 'charlie', 'alice'])
| beam.Map(lambda word: (word, 1))
| beam.CombinePerKey(sum)
| beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(['bob', 'alice', 'alice', 'bob', 'charlie', 'alice'])
| beam.combiners.Count.Globally()
| beam.ParDo(print))
Explanation: Combine
Combine is a transform for combining collections of elements or values. Combine has variants that work on entire PCollections, and some that combine the values for each key in PCollections of key/value pairs.
To apply a Combine transform, you must provide the function that contains the logic for combining the elements or values. The combining function should be commutative and associative, as the function is not necessarily invoked exactly once on all values with a given key. Because the input data (including the value collection) may be distributed across multiple workers, the combining function might be called multiple times to perform partial combining on subsets of the value collection. The Beam SDK also provides some pre-built combine functions for common numeric combination operations such as sum, min, and max.
Simple combine operations, such as sums, can usually be implemented as a simple function. More complex combination operations might require you to create a subclass of CombineFn that has an accumulation type distinct from the input/output type.
End of explanation
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(10 + 1))
| beam.Map(lambda x: ("odd" if x % 2 else "even", x))
| beam.Map(lambda x: (x[0], x[1] ** 2))
| beam.CombinePerKey(AverageFn())
| beam.ParDo(print))
Explanation: Exercise 3 — Combine Items
Start with Beam pipeline you built in the previous exercise: it creates a PCollection containing integers 0 to 10, groups them by their parity, and prints the groups.
Add a step that computes the mean of each group (i.e., the mean of all odd numbers between 0 and 10, and the mean of all even numbers between 0 and 10).
Add another step to make the pipeline compute the mean of the squares of the numbers in each group.
Solution:
End of explanation
pipeline = beam.Pipeline()
wordsStartingWithA = (pipeline
| 'Words starting with A' >> beam.Create(['apple', 'ant', 'arrow']))
wordsStartingWithB = (pipeline
| 'Words starting with B' >> beam.Create(['ball', 'book', 'bow']))
((wordsStartingWithA, wordsStartingWithB)
| beam.Flatten()
| beam.ParDo(print))
pipeline.run()
Explanation: Flatten
Flatten is a transform for PCollection objects that store the same data type. Flatten merges multiple PCollection objects into a single logical PCollection.
Data encoding in merged collections
By default, the coder for the output PCollection is the same as the coder for the first PCollection in the input PCollectionList. However, the input PCollection objects can each use different coders, as long as they all contain the same data type in your chosen language.
Merging windowed collections
When using Flatten to merge PCollection objects that have a windowing strategy applied, all of the PCollection objects you want to merge must use a compatible windowing strategy and window sizing. For example, all the collections you're merging must all use (hypothetically) identical 5-minute fixed windows or 4-minute sliding windows starting every 30 seconds.
If your pipeline attempts to use Flatten to merge PCollection objects with incompatible windows, Beam generates an IllegalStateException error when your pipeline is constructed.
End of explanation
def partition_fn(number, num_partitions):
partition = number // 100
return min(partition, num_partitions - 1)
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create([1, 110, 2, 350, 4, 5, 100, 150, 3])
| beam.Partition(partition_fn, 3))
lines[0] | '< 100' >> beam.ParDo(print, "Small")
lines[1] | '[100, 200)' >> beam.ParDo(print, "Medium")
lines[2] | '> 200' >> beam.ParDo(print, "Big")
Explanation: Partition
Partition is a transform for PCollection objects that store the same data type. Partition splits a single PCollection into a fixed number of smaller collections.
Partition divides the elements of a PCollection according to a partitioning function that you provide. The partitioning function contains the logic that determines how to split up the elements of the input PCollection into each resulting partition PCollection. The number of partitions must be determined at graph construction time. You can, for example, pass the number of partitions as a command-line option at runtime (which will then be used to build your pipeline graph), but you cannot determine the number of partitions in mid-pipeline (based on data calculated after your pipeline graph is constructed, for instance).
End of explanation
def increment(number, inc=1):
return number + inc
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create([1, 2, 3, 4, 5])
| "Increment" >> beam.Map(increment)
| "Print" >> beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| "Create" >> beam.Create([1, 2, 3, 4, 5])
| "Increment" >> beam.Map(increment, 10) # Pass a side input of 10
| "Print" >> beam.ParDo(print))
Explanation: Side Inputs
In addition to the main input PCollection, you can provide additional inputs to a ParDo transform in the form of side inputs. A side input is an additional input that your DoFn can access each time it processes an element in the input PCollection. When you specify a side input, you create a view of some other data that can be read from within the ParDo transform’s DoFn while processing each element.
Side inputs are useful if your ParDo needs to inject additional data when processing each element in the input PCollection, but the additional data needs to be determined at runtime (and not hard-coded). Such values might be determined by the input data, or depend on a different branch of your pipeline.
End of explanation
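The example above passes a plain Python value as the side input; a side input can also be a view of another PCollection whose value is only known at runtime. A sketch using beam.pvalue.AsSingleton (an illustrative assumption, not code from the original notebook):
```Python
with beam.Pipeline() as pipeline:
    numbers = pipeline | "Numbers" >> beam.Create([1, 2, 3, 4, 5])
    # The increment itself is produced by the pipeline as a single-element PCollection
    increment_amount = pipeline | "Inc" >> beam.Create([100])
    (numbers
     | "AddSideInput" >> beam.Map(lambda num, inc: num + inc,
                                  inc=pvalue.AsSingleton(increment_amount))
     | "Print" >> beam.ParDo(print))
```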
def compute(number):
if number % 2 == 0:
yield number
else:
yield pvalue.TaggedOutput("odd", number + 10)
with beam.Pipeline() as pipeline:
even, odd = (pipeline
| "Create" >> beam.Create([1, 2, 3, 4, 5, 6, 7])
| "Increment" >> beam.ParDo(compute).with_outputs("odd",
main="even"))
even | "Even" >> beam.ParDo(print, "even")
odd | "Odd" >> beam.ParDo(print, "odd")
Explanation: Additional Outputs
While ParDo always produces a main output PCollection (as the return value from apply), you can also have your ParDo produce any number of additional output PCollections. If you choose to have multiple outputs, your ParDo returns all of the output PCollections (including the main output) bundled together.
To emit elements to multiple output PCollections, invoke with_outputs() on the ParDo, and specify the expected tags for the outputs. with_outputs() returns a DoOutputsTuple object. Tags specified in with_outputs are attributes on the returned DoOutputsTuple object. The tags give access to the corresponding output PCollections.
End of explanation
with beam.Pipeline() as branching_pipeline:
numbers = (branching_pipeline | beam.Create([1, 2, 3, 4, 5]))
mult5_results = numbers | beam.Map(lambda num: num * 5)
mult10_results = numbers | beam.Map(lambda num: num * 10)
mult5_results | 'Log multiply 5' >> beam.ParDo(print, 'Mult 5')
mult10_results | 'Log multiply 10' >> beam.ParDo(print, 'Mult 10')
display_pipeline(branching_pipeline)
Explanation: Branching
A transform does not consume or otherwise alter the input collection – remember that a PCollection is immutable by definition. This means that you can apply multiple transforms to the same input PCollection to create a branching pipeline.
End of explanation
class ExtractAndMultiplyNumbers(beam.PTransform):
def expand(self, pcollection):
return (pcollection
| beam.FlatMap(lambda line: line.split(","))
| beam.Map(lambda num: int(num) * 10))
with beam.Pipeline() as composite_pipeline:
lines = (composite_pipeline
| beam.Create(['1,2,3,4,5', '6,7,8,9,10'])
| ExtractAndMultiplyNumbers()
| beam.ParDo(print))
display_pipeline(composite_pipeline)
Explanation: Composite Transforms
Transforms can have a nested structure, where a complex transform performs multiple simpler transforms (such as more than one ParDo, Combine, GroupByKey, or even other composite transforms). These transforms are called composite transforms. Nesting multiple transforms inside a single composite transform can make your code more modular and easier to understand.
Your composite transform's parameters and return value must match the initial input type and final return type for the entire transform, even if the transform's intermediate data changes type multiple times.
To create a composite transform, create a subclass of the PTransform class and override the expand method to specify the actual processing logic. Then use this transform just as you would a built-in transform.
End of explanation
class FilterOddNumbers(beam.DoFn):
def process(self, element, *args, **kwargs):
if element % 2 == 1:
yield element
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(1, 11))
| beam.ParDo(FilterOddNumbers())
| beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(1, 11))
| beam.Filter(lambda num: num % 2 == 1)
| beam.ParDo(print))
Explanation: Filter
Filter, given a predicate, filters out all elements that don't satisfy that predicate. Filter may also be used to filter based on an inequality with a given value based on the comparison ordering of the element. You can pass functions with multiple arguments to Filter. They are passed as additional positional arguments or keyword arguments to the function. If the PCollection has a single value, such as the average from another computation, passing the PCollection as a singleton accesses that value. If the PCollection has multiple values, pass the PCollection as an iterator. This accesses elements lazily as they are needed, so it is possible to iterate over large PCollections that won't fit into memory.
Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection), but this requires that all the elements fit into memory.
If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary. Each element must be a (key, value) pair. Note that all the elements of the PCollection must fit into memory. If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.
End of explanation
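The singleton side input mentioned in the note above is not exercised in the code; a sketch of filtering against a mean computed in the same pipeline could look like this:
```Python
with beam.Pipeline() as pipeline:
    numbers = pipeline | beam.Create([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
    mean = numbers | beam.combiners.Mean.Globally()
    (numbers
     | "KeepAboveMean" >> beam.Filter(lambda num, avg: num > avg,
                                      avg=pvalue.AsSingleton(mean))
     | beam.ParDo(print))
```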
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(1, 11))
| beam.combiners.Count.Globally() # Count
| beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(1, 11))
| beam.CombineGlobally(sum) # CombineGlobally sum
| beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(1, 11))
| beam.combiners.Mean.Globally() # Mean
| beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(1, 11))
| beam.combiners.Top.Smallest(1) # Top Smallest
| beam.ParDo(print))
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.Create(range(1, 11))
| beam.combiners.Top.Largest(1) # Top Largest
| beam.ParDo(print))
Explanation: Aggregation
Beam uses windowing to divide a continuously updating unbounded PCollection into logical windows of finite size. These logical windows are determined by some characteristic associated with a data element, such as a timestamp. Aggregation transforms (such as GroupByKey and Combine) work on a per-window basis — as the data set is generated, they process each PCollection as a succession of these finite windows.
A related concept, called triggers, determines when to emit the results of aggregation as unbounded data arrives. You can use triggers to refine the windowing strategy for your PCollection. Triggers allow you to deal with late-arriving data, or to provide early results.
End of explanation
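Triggers themselves are not demonstrated in this notebook; the sketch below shows how a trigger could be attached to a windowing strategy (names come from apache_beam.transforms.trigger; treat this as an illustration rather than a tested pipeline):
```Python
from apache_beam.transforms import trigger

with beam.Pipeline() as pipeline:
    lines = (pipeline
             | beam.Create([(i, i * 10) for i in range(5)])
             | beam.Map(lambda kv: beam.window.TimestampedValue(kv, kv[0] * 30))  # fake timestamps
             | beam.WindowInto(
                 beam.window.FixedWindows(60),
                 trigger=trigger.AfterWatermark(
                     early=trigger.AfterProcessingTime(30)),   # emit early results
                 accumulation_mode=trigger.AccumulationMode.DISCARDING)
             | beam.CombinePerKey(sum)
             | beam.ParDo(print))
```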
DATA_PATH = 'https://raw.githubusercontent.com/ageron/open-datasets/master/' \
'online_news_popularity_for_course/online_news_popularity_for_course.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
!head {_data_filepath}
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.io.ReadFromText(_data_filepath)
| beam.Filter(lambda line: line.startswith("2013-01-07,0,World"))
| beam.ParDo(print))
Explanation: Pipeline I/O
When you create a pipeline, you often need to read data from some external source, such as a file or a database. Likewise, you may want your pipeline to output its result data to an external storage system. Beam provides read and write transforms for a number of common data storage types. If you want your pipeline to read from or write to a data storage format that isn’t supported by the built-in transforms, you can implement your own read and write transforms.
Download example data
Download the sample dataset for use with the cells below.
End of explanation
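For completeness, the write side of the built-in text I/O (not used elsewhere in this notebook) follows the same pattern; a minimal sketch:
```Python
with beam.Pipeline() as pipeline:
    lines = (pipeline
             | beam.Create(['line one', 'line two', 'line three'])
             | beam.io.WriteToText('example_output', file_name_suffix='.txt'))
```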
with beam.Pipeline() as pipeline:
lines = (pipeline
| beam.io.ReadFromText(_data_filepath)
| beam.Filter(lambda line: line < "2014-01-01")
| beam.Map(lambda line: line.split(",")) # CSV parser?
| beam.Map(lambda cols: (cols[2], float(cols[-1])))
| beam.combiners.Mean.PerKey()
| beam.ParDo(print))
with tf.io.TFRecordWriter("test.tfrecord") as tfrecord_file:
for index in range(10):
tfrecord_file.write("Record {}".format(index).encode("utf-8"))
dataset = tf.data.TFRecordDataset('test.tfrecord')
for record in dataset:
print(record.numpy())
with beam.Pipeline() as rw_pipeline:
lines = (rw_pipeline
| beam.io.ReadFromTFRecord("test.tfrecord")
| beam.Map(lambda line: line + b' processed')
| beam.io.WriteToTFRecord("test_processed.tfrecord")
| beam.ParDo(print))
display_pipeline(rw_pipeline)
with beam.Pipeline() as utf_pipeline:
lines = (utf_pipeline
| "Read" >> beam.io.ReadFromTFRecord("test_processed.tfrecord-00000-of-00001")
| "Decode" >> beam.Map(lambda line: line.decode('utf-8'))
| "Print" >> beam.ParDo(print))
display_pipeline(utf_pipeline)
Explanation: Putting Everything Together
Use several of the concepts, classes, and methods discussed above in a concrete example.
Exercise 4 — Reading, Filtering, Parsing, Grouping and Averaging
Write a Beam pipeline that reads the dataset, computes the mean label (the numbers in the last column) for each article category (the third column) and prints out the results.
Hints:
* Use the code above to read the dataset and change the filtering logic to keep only the year 2013.
* Add a Map step to split each row on the commas.
* Add another Map step to add a key equal to the category, and a GroupByKey step to group the articles by their category.
* Add a step to convert the last column (i.e., the label) to a float, and another step to compute the mean of that column for each category, using beam.combiners.Mean.PerKey.
* Finally, add a ParDo step to print out the results.
Solution:
End of explanation
DAYS = 24 * 60 * 60
class AssignTimestamps(beam.DoFn):
def process(self, element):
date = datetime.strptime(element[0], "%Y-%m-%d")
yield beam.window.TimestampedValue(element, date.timestamp())
with beam.Pipeline() as window_pipeline:
lines = (window_pipeline
| beam.io.ReadFromText(_data_filepath)
| beam.Filter(lambda line: line < "2014-01-01")
| beam.Map(lambda line: line.split(",")) # CSV parser?
| beam.ParDo(AssignTimestamps())
| beam.WindowInto(beam.window.FixedWindows(7*DAYS))
| beam.Map(lambda cols: (cols[2], float(cols[-1])))
| beam.combiners.Mean.PerKey()
| beam.ParDo(print))
display_pipeline(window_pipeline)
class AssignTimestamps(beam.DoFn):
def process(self, element):
date = datetime.strptime(element[0], "%Y-%m-%d")
yield beam.window.TimestampedValue(element, date.timestamp())
class PrintWithTimestamp(beam.DoFn):
def process(self, element, timestamp=beam.DoFn.TimestampParam):
print(timestamp.to_rfc3339()[:10], element)
with beam.Pipeline() as ts_pipeline:
lines = (ts_pipeline
| beam.io.ReadFromText(_data_filepath)
| beam.Filter(lambda line: line < "2014-01-01")
| beam.Map(lambda line: line.split(",")) # CSV parser?
| beam.ParDo(AssignTimestamps())
| beam.WindowInto(beam.window.FixedWindows(7 * DAYS))
| beam.Map(lambda cols: (cols[2], float(cols[-1])))
| beam.combiners.Mean.PerKey()
| beam.ParDo(PrintWithTimestamp()))
display_pipeline(ts_pipeline)
Explanation: Note that there are many other built-in I/O transforms.
Windowing
As discussed above, windowing subdivides a PCollection according to the timestamps of its individual elements.
Some Beam transforms, such as GroupByKey and Combine, group multiple elements by a common key. Ordinarily, that grouping operation groups all of the elements that have the same key within the entire data set. With an unbounded data set, it is impossible to collect all of the elements, since new elements are constantly being added and may be infinitely many (e.g. streaming data). If you are working with unbounded PCollections, windowing is especially useful.
In the Beam model, any PCollection (including unbounded PCollections) can be subdivided into logical windows. Each element in a PCollection is assigned to one or more windows according to the PCollection's windowing function, and each individual window contains a finite number of elements. Grouping transforms then consider each PCollection's elements on a per-window basis. GroupByKey, for example, implicitly groups the elements of a PCollection by key and window.
Additional information on Beam Windowing is available in the Beam Programming Guide.
End of explanation |
209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook aims at visualizing the content of the MNIST dataset
As a first step, we use keras to download the dataset.
Then we print its shape.
Step1: In case it needs disambiguation, we plot some arrays corresponding to the X values and some of the Y values
Step2: Now we are sure that X_train contains grayscale images and Y_train the digit corresponding to each X_train image.
Data distribution
The next step consists in knowing how our data is distributed
Step3: The left plots tell us that the X values range in $[0, 255]$ and that most of the pixels are around zero.
The right plots show the distribution of the 10 labels MNIST has.
In both cases (train and test) we learn that "5" is the least represented digit and "1" is the most represented one.
Data normalization
As we'll see, we normalize and center this data so that X_train has a standard deviation of one and a mean of zero | Python Code:
import keras
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print "input of training set has shape {} and output has shape {}".format(x_train.shape, y_train.shape)
print "input of testing set has shape {} and output has shape {}".format(x_test.shape, y_test.shape)
Explanation: This notebook aims at visualizing the content of the MNIST dataset
As a first step, we use keras to download the dataset.
Then we print the shape of it.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
print "The first 15 <X_train> values:"
fig, axs = plt.subplots(3,5)
axs = [b for a in axs for b in a]
for i in range(3*5):
axs[i].imshow(x_train[i], cmap='gray')
axs[i].axis('off')
plt.show()
print "The first 15 <y_test> values:"
print y_train[: 3 * 5]
Explanation: To see what the data looks like, we plot some of the X arrays together with their corresponding Y values:
End of explanation
fig, axs = plt.subplots(2,2)
axs[0][0].hist(x_train.reshape([-1]), bins = 25)
axs[0][1].hist(y_train.reshape([-1]), bins = 10)
axs[1][0].hist(x_test.reshape([-1]), bins = 25)
axs[1][1].hist(y_test.reshape([-1]), bins = 10)
plt.show()
print "Standard deviation of x_train is {} and mean is {}".format(
x_train.std(), x_train.mean())
print "Standard deviation of x_test is {} and mean is {}".format(
x_test.std(), x_test.mean())
Explanation: Now, we are sure that X_train contains grey-scale images and Y_train the digit corresponding to each X_train image.
Data distribution
The next step consists in understanding how our data is distributed
End of explanation
# Normalize the MNIST data
def data_preprocessing(data, std, mean):
data = data - mean
data = data / std
return data
std = x_train.std()
mean = x_train.mean()
x_train = data_preprocessing(x_train, std, mean)
x_test = data_preprocessing(x_test, std, mean)
# Show the results
fig, axs = plt.subplots(2,1)
axs[0].hist(x_train.reshape([-1]), bins = 25)
axs[1].hist(x_test.reshape([-1]), bins = 25)
plt.show()
print "Standard deviation of x_train is {} and mean is {}".format(
x_train.std(), x_train.mean())
print "Standard deviation of x_test is {} and mean is {}".format(
x_test.std(), x_test.mean())
Explanation: The left plots tell us that the X values range in $[0, 255]$ and that most of the pixels are close to zero.
The right plots show the distribution of the 10 labels MNIST has.
In both cases (train and test) we see that "5" is the least represented digit and "1" is the most represented one.
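A quick numerical check of this claim (my addition, not a cell from the original notebook) is to count the labels directly with numpy instead of reading the histograms.
import numpy as np
counts = np.bincount(y_train, minlength=10)
print("training label counts: {}".format(counts))
print("least represented digit: {}".format(counts.argmin()))  # expected: 5
print("most represented digit: {}".format(counts.argmax()))   # expected: 1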
Data normalization
As we'll see, we prefer to normalize and center this data so that X_train has a standard deviation of one and a mean of zero
End of explanation |
210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerical Evaluation of Integrals
Integration problems are common in statistics whenever we are dealing with continuous distributions. For example the expectation of a function is an integration problem
$$
E[f(x)] = \int{f(x) \, p(x) \, dx}
$$
In Bayesian statistics, we need to solve the integration problem for the marginal likelihood or evidence
$$
p(X \mid \alpha) = \int{p(X \mid \theta) \, p(\theta \mid \alpha) d\theta}
$$
where $\alpha$ is a hyperparameter and $p(X \mid \alpha)$ appears in the denominator of Bayes theorem
$$
p(\theta | X) = \frac{p(X \mid \theta) \, p(\theta \mid \alpha)}{p(X \mid \alpha)}
$$
In general, there is no closed form solution to these integrals, and we have to approximate them numerically. The first step is to check if there is some reparameterization that will simplify the problem. Then, the general approaches to solving integration problems are
Numerical quadrature
Importance sampling, adaptive importance sampling and variance reduction techniques (Monte Carlo swindles)
Markov Chain Monte Carlo
Asymptotic approximations (Laplace method and its modern version in variational inference)
This lecture will review the concepts for quadrature and Monte Carlo integration.
Quadrature
You may recall from Calculus that integrals can be numerically evaluated using quadrature methods such as the Trapezoid and Simpson's rules. This is easy to do in Python, but has the drawback that the cost grows as $O(n^d)$ where $d$ is the dimensionality of the data, and hence becomes infeasible once $d$ grows beyond a modest number.
Integrating functions
Step1: Exact solution
Step2: Using quadrature
Step3: Multiple integration
Following the scipy.integrate documentation, we integrate
$$
I=\int_{y=0}^{1/2}\int_{x=0}^{1-2y} x y \, dx\, dy
$$
Step4: Monte Carlo integration
The basic idea of Monte Carlo integration is very simple and only requires elementary statistics. Suppose we want to find the value of
$$
I = \int_a^b f(x) dx
$$
in some region with volume $V$. Monte Carlo integration estimates this integral by estimating the fraction of random points that fall below $f(x)$ multiplied by $V$.
In a statistical context, we use Monte Carlo integration to estimate the expectation
$$
E[g(X)] = \int_X g(x) p(x) dx
$$
with
$$
\bar{g_n} = \frac{1}{n} \sum_{i=1}^n g(x_i)
$$
where $x_i \sim p$ is a draw from the density $p$.
We can estimate the Monte Carlo variance of the approximation as
$$
v_n = \frac{1}{n^2} \sum_{i=1}^n (g(x_i) - \bar{g_n})^2
$$
Also, from the Central Limit Theorem,
$$
\frac{\bar{g_n} - E[g(X)]}{\sqrt{v_n}} \sim \mathcal{N}(0, 1)
$$
The error of Monte Carlo integration shrinks as $\mathcal{O}(n^{-1/2})$ independently of the dimensionality. Hence Monte Carlo integration generally beats numerical integration for moderate- and high-dimensional integration, since the cost of numerical integration (quadrature) grows as $\mathcal{O}(n^{d})$. Even for low dimensional problems, Monte Carlo integration may have an advantage when the volume to be integrated is concentrated in a very small region and we can use information from the distribution to draw samples more often in the region of importance.
An elementary, readable description of Monte Carlo integration and variance reduction techniques can be found here.
Intuition behind Monte Carlo integration
We want to find some integral
$$I = \int{f(x)} \, dx$$
Consider the expectation of a function $g(x)$ with respect to some distribution $p(x)$. By definition, we have
$$
E[g(x)] = \int{g(x) \, p(x) \, dx}
$$
If we choose $g(x) = f(x)/p(x)$, then we have
$$
\begin{align}
E[g(x)] &= \int{\frac{f(x)}{p(x)} \, p(x) \, dx} \
&= \int{f(x) dx} \
&= I
\end{align}
$$
By the law of large numbers, the average converges on the expectation, so we have
$$
I \approx \bar{g_n} = \frac{1}{n} \sum_{i=1}^n g(x_i)
$$
If $f(x)$ is a proper integral (i.e. bounded), and $p(x)$ is the uniform distribution, then $g(x) = f(x)$ and this is known as ordinary Monte Carlo. If the integral of $f(x)$ is improper, then we need to use another distribution with the same support as $f(x)$.
Intuition for error rate
We will just work this out for a proper integral $f(x)$ defined in the unit cube and bounded by $|f(x)| \le 1$. Draw a random uniform vector $x$ in the unit cube. Then
$$
\begin{align}
E[f(x_i)] &= \int{f(x) p(x) dx} = I \
\text{Var}[f(x_i)] &= \int{(f(x_i) - I )^2 p(x) \, dx} \
&= \int{f(x)^2 \, p(x) \, dx} - 2I \int{f(x) \, p(x) \, dx} + I^2 \int{p(x) \, dx} \
&= \int{f(x)^2 \, p(x) \, dx} - I^2 \
& \le \int{f(x)^2 \, p(x) \, dx} \
& \le \int{p(x) \, dx} = 1
\end{align}
$$
Now consider summing over many such IID draws $S_n = f(x_1) + f(x_2) + \cdots + f(x_n)$. We have
$$
\begin{align}
E[S_n] &= nI \
\text{Var}[S_n] & \le n
\end{align}
$$
and as expected, we see that $I \approx S_n/n$. From Chebyshev's inequality,
$$
\begin{align}
P \left( \left| \frac{S_n}{n} - I \right| \ge \epsilon \right) &=
P \left( \left| S_n - nI \right| \ge n \epsilon \right) & \le \frac{\text{Var}[S_n]}{n^2 \epsilon^2} & \le
\frac{1}{n \epsilon^2} = \delta
\end{align}
$$
Suppose we want 1% accuracy and 99% confidence - i.e. set $\epsilon = \delta = 0.01$. The above inequality tells us that we can achieve this with just $n = 1/(\delta \epsilon^2) = 1,000,000$ samples, regardless of the data dimensionality.
Example
We want to estimate the following integral $\int_0^1 e^x dx$. The minimum value of the function is 1 at $x=0$ and the maximum is $e$ at $x=1$.
Step5: Analytic solution
Step6: Using quadrature
Step7: Monte Carlo integration
Step8: Monitoring variance in Monte Carlo integration
We are often interested in knowing how many iterations it takes for Monte Carlo integration to "converge". To do this, we would like some estimate of the variance, and it is useful to inspect such plots. One simple way to get confidence intervals for the plot of Monte Carlo estimate against number of iterations is simply to do many such simulations.
For the example, we will try to estimate the function (again)
$$
f(x) = x \cos 71 x + \sin 13x, \ \ 0 \le x \le 1
$$
Step9: Single MC integration estimate
Step10: Using multiple independent sequences to monitor convergence
We vary the sample size from 1 to 100 and calculate the value of $y = \sum{x}/n$ for 1000 replicates. We then plot the 2.5th and 97.5th percentile of the 1000 values of $y$ to see how the variation in $y$ changes with sample size. The blue lines indicate the 2.5th and 97.5th percentiles, and the red line a sample path.
Step11: Using bootstrap to monitor convergence
If it is too expensive to do 1000 replicates, we can use a bootstrap instead.
Step12: Variance Reduction
With independent samples, the variance of the Monte Carlo estimate is
$$
\begin{align}
\text{Var}[\bar{g_n}] &= \text{Var} \left[ \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)} \right] \
&= \frac{1}{N^2} \sum_{i=1}^{N} \text{Var} \left[ \frac{f(x_i)}{p(x_i)} \right] \
&= \frac{1}{N^2} \sum_{i=1}^{N} \text{Var}[Y_i] \
&= \frac{1}{N} \text{Var}[Y_i]
\end{align}
$$
where $Y_i = f(x_i)/p(x_i)$. The objective of Monte Carlo swindles is to make $\text{Var}[\bar{g_n}]$ as small as possible for the same number of samples.
Change of variables
The Cauchy distribution is given by
$$
f(x) = \frac{1}{\pi (1 + x^2)}, \ \ -\infty \lt x \lt \infty
$$
Suppose we want to integrate the tail probability $P(X > 3)$ using Monte Carlo. One way to do this is to draw many samples from a Cauchy distribution, and count how many of them are greater than 3, but this is extremely inefficient.
Only 10% of samples will be used
Step13: A change of variables lets us use 100% of draws
We are trying to estimate the quantity
$$
\int_3^\infty \frac{1}{\pi (1 + x^2)} dx
$$
Using the substitution $y = 3/x$ (and a little algebra), we get
$$
\int_0^1 \frac{3}{\pi(9 + y^2)} dy
$$
Hence, a much more efficient MC estimator is
$$
\frac{1}{n} \sum_{i=1}^n \frac{3}{\pi(9 + y_i^2)}
$$
where $y_i \sim \mathcal{U}(0, 1)$.
Step14: Monte Carlo swindles
Apart from change of variables, there are several general techniques for variance reduction, sometimes known as Monte Carlo swindles since these methods improve the accuracy and convergence rate of Monte Carlo integration without increasing the number of Monte Carlo samples. Some Monte Carlo swindles are
Step15: Vanilla Monte Carlo
Step16: Antithetic variables use first half of u supplemented with 1-u
This works because the random draws are now negatively correlated, and hence the sum of the variances will be less than in the IID case, while the expectation is unchanged.
Step17: Importance sampling
Ordinary Monte Carlo sampling evaluates
$$
E[g(X)] = \int_X g(x)\, p(x) \, dx
$$
Using another distribution $h(x)$ - the so-called "importance function", we can rewrite the above expression as an expectation with respect to $h$
$$
E_p[g(x)] \ = \ \int_X g(x) \frac{p(x)}{h(x)} h(x) dx \ = \ E_h\left[ \frac{g(X) p(X)}{h(X)} \right]
$$
giving us the new estimator
$$
\bar{g_n} = \frac{1}{n} \sum_{i=1}^n \frac{p(x_i)}{h(x_i)} g(x_i)
$$
where $x_i \sim h$ is a draw from the density $h$. This is helpful if the distribution $h$ has a similar shape to the function $f(x)$ that we are integrating over, since we will draw more samples from places where the integrand makes a larger or more "important" contribution. This is very dependent on a good choice for the importance function $h$. Two simple choices for $h$ are scaling
$$
h(x) = \frac{1}{a} p(x/a)
$$
and translation
$$
h(x) = p(x - a)
$$
In these cases, the parameter $a$ is typically chosen using some adaptive algorithm, giving rise to adaptive importance sampling. Alternatively, a different distribution can be chosen as shown in the example below.
Example
Suppose we want to estimate the tail probability of $\mathcal{N}(0, 1)$ for $P(X > 5)$. Regular MC integration using samples from $\mathcal{N}(0, 1)$ is hopeless since nearly all samples will be rejected. However, we can use the exponential density truncated at 5 as the importance function and use importance sampling.
Step18: Expected answer
We expect about 3 draws out of 10,000,000 from $\mathcal{N}(0, 1)$ to have a value greater than 5. Hence simply sampling from $\mathcal{N}(0, 1)$ is hopelessly inefficient for Monte Carlo integration.
Step19: Using direct Monte Carlo integration
Step20: Using importance sampling
Step21: Quasi-random numbers
Recall that the error of Monte Carlo integration shrinks as $\mathcal{O}(n^{-1/2})$. One issue with simple Monte Carlo is that randomly chosen points tend to be clumped. Clumping reduces accuracy since nearby points provide little additional information about the function being estimated. One way to address this is to split the space into multiple integration regions, then sum them up. This is known as stratified sampling. Another alternative is to use quasi-random numbers which fill space more efficiently than random sequences.
It turns out that if we use quasi-random or low discrepancy sequences, we can get convergence approaching $\mathcal{O}(1/n)$. There are several such generators, but their use in statistical settings is limited to cases where we are integrating with respect to uniform distributions. The regularity can also give rise to errors when estimating integrals of periodic functions. However, these quasi-Monte Carlo methods are used in computational finance models.
Run
! pip install ghalton
if ghalton is not installed.
Step22: Quasi-Monte Carlo integration can reduce variance | Python Code:
from scipy.integrate import quad
def f(x):
return x * np.cos(71*x) + np.sin(13*x)
x = np.linspace(0, 1, 100)
plt.plot(x, f(x))
pass
Explanation: Numerical Evaluation of Integrals
Integration problems are common in statistics whenever we are dealing with continuous distributions. For example the expectation of a function is an integration problem
$$
E[f(x)] = \int{f(x) \, p(x) \, dx}
$$
In Bayesian statistics, we need to solve the integration problem for the marginal likelihood or evidence
$$
p(X \mid \alpha) = \int{p(X \mid \theta) \, p(\theta \mid \alpha) d\theta}
$$
where $\alpha$ is a hyperparameter and $p(X \mid \alpha)$ appears in the denominator of Bayes theorem
$$
p(\theta | X) = \frac{p(X \mid \theta) \, p(\theta \mid \alpha)}{p(X \mid \alpha)}
$$
In general, there is no closed form solution to these integrals, and we have to approximate them numerically. The first step is to check if there is some reparameterization that will simplify the problem. Then, the general approaches to solving integration problems are
Numerical quadrature
Importance sampling, adaptive importance sampling and variance reduction techniques (Monte Carlo swindles)
Markov Chain Monte Carlo
Asymptotic approximations (Laplace method and its modern version in variational inference)
This lecture will review the concepts for quadrature and Monte Carlo integration.
Quadrature
You may recall from Calculus that integrals can be numerically evaluated using quadrature methods such as the Trapezoid and Simpson's rules. This is easy to do in Python, but has the drawback that the cost grows as $O(n^d)$ where $d$ is the dimensionality of the data, and hence becomes infeasible once $d$ grows beyond a modest number.
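To make the Trapezoid rule mentioned above concrete (an illustrative aside of mine, not a cell from the original lecture), a composite trapezoid estimate on a fine grid can be compared with scipy's adaptive quad for the same f defined above.
xs = np.linspace(0, 1, 10001)
trapezoid_estimate = np.trapz(f(xs), xs)   # composite Trapezoid rule on 10001 points
adaptive_estimate, _ = quad(f, 0, 1)
print(trapezoid_estimate, adaptive_estimate, abs(trapezoid_estimate - adaptive_estimate))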
Integrating functions
End of explanation
from sympy import sin, cos, symbols, integrate
x = symbols('x')
integrate(x * cos(71*x) + sin(13*x), (x, 0,1)).evalf(6)
Explanation: Exact solution
End of explanation
y, err = quad(f, 0, 1.0)
y
Explanation: Using quadrature
End of explanation
x, y = symbols('x y')
integrate(x*y, (x, 0, 1-2*y), (y, 0, 0.5))
from scipy.integrate import nquad
def f(x, y):
return x*y
def bounds_y():
return [0, 0.5]
def bounds_x(y):
return [0, 1-2*y]
y, err = nquad(f, [bounds_x, bounds_y])
y
Explanation: Multiple integration
Following the scipy.integrate documentation, we integrate
$$
I=\int_{y=0}^{1/2}\int_{x=0}^{1-2y} x y \, dx\, dy
$$
End of explanation
x = np.linspace(0, 1, 100)
plt.plot(x, np.exp(x))
plt.xlim([0,1])
plt.ylim([0, np.e])
pass
Explanation: Monte Carlo integration
The basic idea of Monte Carlo integration is very simple and only requires elementary statistics. Suppose we want to find the value of
$$
I = \int_a^b f(x) dx
$$
in some region with volume $V$. Monte Carlo integration estimates this integral by estimating the fraction of random points that fall below $f(x)$ multiplied by $V$.
In a statistical context, we use Monte Carlo integration to estimate the expectation
$$
E[g(X)] = \int_X g(x) p(x) dx
$$
with
$$
\bar{g_n} = \frac{1}{n} \sum_{i=1}^n g(x_i)
$$
where $x_i \sim p$ is a draw from the density $p$.
We can estimate the Monte Carlo variance of the approximation as
$$
v_n = \frac{1}{n^2} \sum_{i=1}^n (g(x_i) - \bar{g_n})^2
$$
Also, from the Central Limit Theorem,
$$
\frac{\bar{g_n} - E[g(X)]}{\sqrt{v_n}} \sim \mathcal{N}(0, 1)
$$
The error of Monte Carlo integration shrinks as $\mathcal{O}(n^{-1/2})$ independently of the dimensionality. Hence Monte Carlo integration generally beats numerical integration for moderate- and high-dimensional integration, since the cost of numerical integration (quadrature) grows as $\mathcal{O}(n^{d})$. Even for low dimensional problems, Monte Carlo integration may have an advantage when the volume to be integrated is concentrated in a very small region and we can use information from the distribution to draw samples more often in the region of importance.
An elementary, readable description of Monte Carlo integration and variance reduction techniques can be found here.
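As a small worked example of the estimator and its variance $v_n$ (my sketch, not part of the lecture), here is a Monte Carlo estimate of $E[e^X]$ for $X \sim \mathcal{U}(0, 1)$ together with an approximate 95% interval from the CLT result above; the true value is $e - 1 \approx 1.71828$.
import numpy as np
n = 10000
x = np.random.uniform(0, 1, n)
g = np.exp(x)
g_bar = g.mean()
v_n = ((g - g_bar)**2).sum() / n**2   # Monte Carlo variance of the estimate
se = np.sqrt(v_n)
print(g_bar, (g_bar - 1.96*se, g_bar + 1.96*se))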
Intuition behind Monte Carlo integration
We want to find some integral
$$I = \int{f(x)} \, dx$$
Consider the expectation of a function $g(x)$ with respect to some distribution $p(x)$. By definition, we have
$$
E[g(x)] = \int{g(x) \, p(x) \, dx}
$$
If we choose $g(x) = f(x)/p(x)$, then we have
$$
\begin{align}
E[g(x)] &= \int{\frac{f(x)}{p(x)} \, p(x) \, dx} \
&= \int{f(x) dx} \
&= I
\end{align}
$$
By the law of large numbers, the average converges on the expectation, so we have
$$
I \approx \bar{g_n} = \frac{1}{n} \sum_{i=1}^n g(x_i)
$$
If $f(x)$ is a proper integral (i.e. bounded), and $p(x)$ is the uniform distribution, then $g(x) = f(x)$ and this is known as ordinary Monte Carlo. If the integral of $f(x)$ is improper, then we need to use another distribution with the same support as $f(x)$.
Intuition for error rate
We will just work this out for a proper integral $f(x)$ defined in the unit cube and bounded by $|f(x)| \le 1$. Draw a random uniform vector $x$ in the unit cube. Then
$$
\begin{align}
E[f(x_i)] &= \int{f(x) p(x) dx} = I \
\text{Var}[f(x_i)] &= \int{(f(x_i) - I )^2 p(x) \, dx} \
&= \int{f(x)^2 \, p(x) \, dx} - 2I \int{f(x) \, p(x) \, dx} + I^2 \int{p(x) \, dx} \
&= \int{f(x)^2 \, p(x) \, dx} - I^2 \
& \le \int{f(x)^2 \, p(x) \, dx} \
& \le \int{p(x) \, dx} = 1
\end{align}
$$
Now consider summing over many such IID draws $S_n = f(x_1) + f(x_2) + \cdots + f(x_n)$. We have
$$
\begin{align}
E[S_n] &= nI \
\text{Var}[S_n] & \le n
\end{align}
$$
and as expected, we see that $I \approx S_n/n$. From Chebyshev's inequality,
$$
\begin{align}
P \left( \left| \frac{S_n}{n} - I \right| \ge \epsilon \right) &=
P \left( \left| S_n - nI \right| \ge n \epsilon \right) & \le \frac{\text{Var}[S_n]}{n^2 \epsilon^2} & \le
\frac{1}{n \epsilon^2} = \delta
\end{align}
$$
Suppose we want 1% accuracy and 99% confidence - i.e. set $\epsilon = \delta = 0.01$. The above inequality tells us that we can achieve this with just $n = 1/(\delta \epsilon^2) = 1,000,000$ samples, regardless of the data dimensionality.
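A quick empirical check of this bound (illustrative only, not part of the original text): with $n = 10^6$ uniform draws, a Monte Carlo estimate of $\int_0^1 x^2 dx = 1/3$ comfortably lands within $\epsilon$ of the truth.
import numpy as np
eps = delta = 0.01
n = int(1 / (delta * eps**2))          # 1,000,000 samples
x = np.random.uniform(0, 1, n)
estimate = (x**2).mean()
print(n, estimate, abs(estimate - 1.0/3) < eps)   # True with overwhelming probability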
Example
We want to estimate the following integral $\int_0^1 e^x dx$. The minimum value of the function is 1 at $x=0$ and the maximum is $e$ at $x=1$.
End of explanation
from sympy import symbols, integrate, exp
x = symbols('x')
expr = integrate(exp(x), (x,0,1))
expr.evalf()
Explanation: Analytic solution
End of explanation
from scipy import integrate
y, err = integrate.quad(exp, 0, 1)
y
Explanation: Using quadrature
End of explanation
for n in 10**np.array([1,2,3,4,5,6,7,8]):
x = np.random.uniform(0, 1, n)
sol = np.mean(np.exp(x))
print('%10d %.6f' % (n, sol))
Explanation: Monte Carlo integration
End of explanation
def f(x):
return x * np.cos(71*x) + np.sin(13*x)
x = np.linspace(0, 1, 100)
plt.plot(x, f(x))
pass
Explanation: Monitoring variance in Monte Carlo integration
We are often interested in knowing how many iterations it takes for Monte Carlo integration to "converge". To do this, we would like some estimate of the variance, and it is useful to inspect such plots. One simple way to get confidence intervals for the plot of Monte Carlo estimate against number of iterations is simply to do many such simulations.
For the example, we will try to estimate the function (again)
$$
f(x) = x \cos 71 x + \sin 13x, \ \ 0 \le x \le 1
$$
End of explanation
n = 100
x = f(np.random.random(n))
y = 1.0/n * np.sum(x)
y
Explanation: Single MC integration estimate
End of explanation
n = 100
reps = 1000
x = f(np.random.random((n, reps)))
y = 1/np.arange(1, n+1)[:, None] * np.cumsum(x, axis=0)
upper, lower = np.percentile(y, [2.5, 97.5], axis=1)
plt.plot(np.arange(1, n+1), y, c='grey', alpha=0.02)
plt.plot(np.arange(1, n+1), y[:, 0], c='red', linewidth=1);
plt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')
pass
Explanation: Using multiple independent sequences to monitor convergence
We vary the sample size from 1 to 100 and calculate the value of $y = \sum{x}/n$ for 1000 replicates. We then plot the 2.5th and 97.5th percentile of the 1000 values of $y$ to see how the variation in $y$ changes with sample size. The blue lines indicate the 2.5th and 97.5th percentiles, and the red line a sample path.
End of explanation
xb = np.random.choice(x[:,0], (n, reps), replace=True)
yb = 1/np.arange(1, n+1)[:, None] * np.cumsum(xb, axis=0)
upper, lower = np.percentile(yb, [2.5, 97.5], axis=1)
plt.plot(np.arange(1, n+1)[:, None], yb, c='grey', alpha=0.02)
plt.plot(np.arange(1, n+1), yb[:, 0], c='red', linewidth=1)
plt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')
pass
Explanation: Using bootstrap to monitor convergence
If it is too expensive to do 1000 replicates, we can use a bootstrap instead.
End of explanation
import scipy.stats as stats
h_true = 1 - stats.cauchy().cdf(3)
h_true
n = 100
x = stats.cauchy().rvs(n)
h_mc = 1.0/n * np.sum(x > 3)
h_mc, np.abs(h_mc - h_true)/h_true
Explanation: Variance Reduction
With independent samples, the variance of the Monte Carlo estimate is
$$
\begin{align}
\text{Var}[\bar{g_n}] &= \text{Var} \left[ \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)} \right] \
&= \frac{1}{N^2} \sum_{i=1}^{N} \text{Var} \left[ \frac{f(x_i)}{p(x_i)} \right] \
&= \frac{1}{N^2} \sum_{i=1}^{N} \text{Var}[Y_i] \
&= \frac{1}{N} \text{Var}[Y_i]
\end{align}
$$
where $Y_i = f(x_i)/p(x_i)$. The objective of Monte Carlo swindles is to make $\text{Var}[\bar{g_n}]$ as small as possible for the same number of samples.
Change of variables
The Cauchy distribution is given by
$$
f(x) = \frac{1}{\pi (1 + x^2)}, \ \ -\infty \lt x \lt \infty
$$
Suppose we want to integrate the tail probability $P(X > 3)$ using Monte Carlo. One way to do this is to draw many samples from a Cauchy distribution, and count how many of them are greater than 3, but this is extremely inefficient.
Only 10% of samples will be used
End of explanation
y = stats.uniform().rvs(n)
h_cv = 1.0/n * np.sum(3.0/(np.pi * (9 + y**2)))
h_cv, np.abs(h_cv - h_true)/h_true
Explanation: A change of variables lets us use 100% of draws
We are trying to estimate the quantity
$$
\int_3^\infty \frac{1}{\pi (1 + x^2)} dx
$$
Using the substitution $y = 3/x$ (and a little algebra), we get
$$
\int_0^1 \frac{3}{\pi(9 + y^2)} dy
$$
Hence, a much more efficient MC estimator is
$$
\frac{1}{n} \sum_{i=1}^n \frac{3}{\pi(9 + y_i^2)}
$$
where $y_i \sim \mathcal{U}(0, 1)$.
End of explanation
def f(x):
return x * np.cos(71*x) + np.sin(13*x)
from sympy import sin, cos, symbols, integrate
x = symbols('x')
sol = integrate(x * cos(71*x) + sin(13*x), (x, 0,1)).evalf(16)
sol
Explanation: Monte Carlo swindles
Apart from change of variables, there are several general techniques for variance reduction, sometimes known as Monte Carlo swindles since these methods improve the accuracy and convergence rate of Monte Carlo integration without increasing the number of Monte Carlo samples. Some Monte Carlo swindles are:
importance sampling
stratified sampling
control variates
antithetic variates
conditioning swindles including Rao-Blackwellization and independent variance decomposition
Most of these techniques are not particularly computational in nature, so we will not cover them in the course. I expect you will learn them elsewhere. We will illustrate importance sampling and antithetic variables here as examples.
Antithetic variables
The idea behind antithetic variables is to choose two sets of random numbers that are negatively correlated, then take their average, so that the total variance of the estimator is smaller than it would be with two sets of IID random variables.
End of explanation
n = 10000
u = np.random.random(n)
x = f(u)
y = 1.0/n * np.sum(x)
y, abs(y-sol)/sol
Explanation: Vanilla Monte Carlo
End of explanation
u = np.r_[u[:n//2], 1-u[:n//2]]
x = f(u)
y = 1.0/n * np.sum(x)
y, abs(y-sol)/sol
Explanation: Antithetic variables use first half of u supplemented with 1-u
This works because the random draws are now negatively correlated, and hence the sum of the variances will be less than in the IID case, while the expectation is unchanged.
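To check this empirically (a sketch of mine using the f defined above, not a cell from the original notebook), we can compare the spread of the two estimators over many replicates, giving both the same number of function evaluations per replicate.
reps, m = 1000, 1000
u = np.random.random((reps, m))
vanilla = f(u).mean(axis=1)                              # m independent draws
u1 = u[:, :m // 2]
antithetic = (0.5 * (f(u1) + f(1 - u1))).mean(axis=1)    # m/2 antithetic pairs
print(vanilla.var(), antithetic.var())                   # compare the empirical variances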
End of explanation
x = np.linspace(4, 10, 100)
plt.plot(x, stats.expon(5).pdf(x))
plt.plot(x, stats.norm().pdf(x))
pass
Explanation: Importance sampling
Ordinary Monte Carlo sampling evaluates
$$
E[g(X)] = \int_X g(x)\, p(x) \, dx
$$
Using another distribution $h(x)$ - the so-called "importance function", we can rewrite the above expression as an expectation with respect to $h$
$$
E_p[g(x)] \ = \ \int_X g(x) \frac{p(x)}{h(x)} h(x) dx \ = \ E_h\left[ \frac{g(X) p(X)}{h(X)} \right]
$$
giving us the new estimator
$$
\bar{g_n} = \frac{1}{n} \sum_{i=1}^n \frac{p(x_i)}{h(x_i)} g(x_i)
$$
where $x_i \sim h$ is a draw from the density $h$. This is helpful if the distribution $h$ has a similar shape to the function $f(x)$ that we are integrating over, since we will draw more samples from places where the integrand makes a larger or more "important" contribution. This is very dependent on a good choice for the importance function $h$. Two simple choices for $h$ are scaling
$$
h(x) = \frac{1}{a} p(x/a)
$$
and translation
$$
h(x) = p(x - a)
$$
In these cases, the parameter $a$ is typically chosen using some adaptive algorithm, giving rise to adaptive importance sampling. Alternatively, a different distribution can be chosen as shown in the example below.
Example
Suppose we want to estimate the tail probability of $\mathcal{N}(0, 1)$ for $P(X > 5)$. Regular MC integration using samples from $\mathcal{N}(0, 1)$ is hopeless since nearly all samples will be rejected. However, we can use the exponential density truncated at 5 as the importance function and use importance sampling.
End of explanation
%precision 10
h_true =1 - stats.norm().cdf(5)
h_true
Explanation: Expected answer
We expect about 3 draws out of 10,000,000 from $\mathcal{N}(0, 1)$ to have a value greater than 5. Hence simply sampling from $\mathcal{N}(0, 1)$ is hopelessly inefficient for Monte Carlo integration.
End of explanation
n = 10000
y = stats.norm().rvs(n)
h_mc = 1.0/n * np.sum(y > 5)
# estimate and relative error
h_mc, np.abs(h_mc - h_true)/h_true
Explanation: Using direct Monte Carlo integration
End of explanation
n = 10000
y = stats.expon(loc=5).rvs(n)
h_is = 1.0/n * np.sum(stats.norm().pdf(y)/stats.expon(loc=5).pdf(y))
# estimate and relative error
h_is, np.abs(h_is- h_true)/h_true
Explanation: Using importance sampling
End of explanation
import ghalton
gen = ghalton.Halton(2)
plt.figure(figsize=(10,5))
plt.subplot(121)
xs = np.random.random((100,2))
plt.scatter(xs[:, 0], xs[:,1])
plt.axis([-0.05, 1.05, -0.05, 1.05])
plt.title('Pseudo-random', fontsize=20)
plt.subplot(122)
ys = np.array(gen.get(100))
plt.scatter(ys[:, 0], ys[:,1])
plt.axis([-0.05, 1.05, -0.05, 1.05])
plt.title('Quasi-random', fontsize=20);
Explanation: Quasi-random numbers
Recall that the error of Monte Carlo integration shrinks as $\mathcal{O}(n^{-1/2})$. One issue with simple Monte Carlo is that randomly chosen points tend to be clumped. Clumping reduces accuracy since nearby points provide little additional information about the function being estimated. One way to address this is to split the space into multiple integration regions, then sum them up. This is known as stratified sampling. Another alternative is to use quasi-random numbers which fill space more efficiently than random sequences.
It turns out that if we use quasi-random or low discrepancy sequences, we can get convergence approaching $\mathcal{O}(1/n)$. There are several such generators, but their use in statistical settings is limited to cases where we are integrating with respect to uniform distributions. The regularity can also give rise to errors when estimating integrals of periodic functions. However, these quasi-Monte Carlo methods are used in computational finance models.
Run
! pip install ghalton
if ghalton is not installed.
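If installing ghalton is not an option, scipy (1.7 or later) ships a Halton generator in scipy.stats.qmc that can be substituted in the cells below; this alternative is my suggestion and not part of the original notebook.
from scipy.stats import qmc
halton = qmc.Halton(d=2, scramble=False)
ys = halton.random(100)   # array of shape (100, 2) with values in [0, 1)
print(ys[:3])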
End of explanation
h_true = 1 - stats.cauchy().cdf(3)
n = 10
x = stats.uniform().rvs((n, 5))
y = 3.0/(np.pi * (9 + x**2))
h_mc = np.sum(y, 0)/n
list(zip(h_mc, 100*np.abs(h_mc - h_true)/h_true))
gen1 = ghalton.Halton(1)
x = np.reshape(gen1.get(n*5), (n, 5))
y = 3.0/(np.pi * (9 + x**2))
h_qmc = np.sum(y, 0)/n
list(zip(h_qmc, 100*np.abs(h_qmc - h_true)/h_true))
Explanation: Quasi-Monte Carlo integration can reduce variance
End of explanation |
211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Treating Trees
Although any expression in Joy can be considered to describe a tree with the quotes as compound nodes and the non-quote values as leaf nodes, in this page I want to talk about ordered binary trees and how to make and use them.
The basic structure, in a crude type notation, is
Step1: Adding Nodes to the BTree
Let's consider adding nodes to a BTree structure.
BTree value key BTree-add == BTree
Adding to an empty node.
If the current node is [] then you just return [key value [] []]
Step2: (As an implementation detail, the [[] []] literal used in the definition of BTree-new will be reused to supply the constant tail for all new nodes produced by it. This is one of those cases where you get amortized storage "for free" by using persistent datastructures. Because the tail, which is ((), ((), ())) in Python, is immutable and embedded in the definition body for BTree-new, all new nodes can reuse it as their own tail without fear that some other code somewhere will change it.)
If the current node isn't empty.
We now have to derive R0 and R1, consider
Step3: If the key we're adding is greater than the node's key.
Here the parentheses are meant to signify that the right-hand side (RHS) is not literal, the code in the parentheses is meant to have been evaluated
Step4: If the key we're adding is less than the node's key.
This is very very similar to the above
Step5: Else the keys must be equal.
This means we must find
Step6: Now we can define BTree-add
BTree-add == [popop not] [[pop] dipd BTree-new] [] [[P >] [T] [E] ifte] genrec
Putting it all together
Step7: We can use this to make a set-like datastructure by just setting values to e.g. 0 and ignoring them. It's set-like in that duplicate items added to it will only occur once within it, and we can query it in $O(\log_2 N)$ time.
Step8: And with that we can write a little program to remove duplicate items from a list.
Step9: cmp combinator
Instead of all this mucking about with nested ifte let's just go whole hog and define cmp which takes two values and three quoted programs on the stack and runs one of the three depending on the results of comparing the two values
Step10: Factoring and naming
It may seem silly, but a big part of programming in Forth (and therefore in Joy) is the idea of small, highly-factored definitions. If you choose names carefully the resulting definitions can take on a semantic role.
get-node-key == popop popop first
remove-key-and-value-from-node == rest rest
pack-key-and-value == cons cons
prep-new-key-and-value == pop swap roll<
pack-and-apply == [pack-key-and-value] swoncat cons pack-key-and-value infra
BTree-new == swap [[] []] pack-key-and-value
P == over [get-node-key] nullary
T> == [dipdd] pack-and-apply
T< == [dipd] pack-and-apply
E == prep-new-key-and-value remove-key-and-value-from-node pack-key-and-value
A Version of BTree-iter that does In-Order Traversal
If you look back to the non-empty case of the BTree-iter function we can design a variant that first processes the left child, then the current node, then the right child. This will allow us to traverse the tree in sort order.
BTree-iter-order == [not] [pop] [R0 [BTree-iter] R1] ifte
To define R0 and R1 it helps to look at them as they will appear when they run
Step11: Getting values by key
Let's derive a function that accepts a tree and a key and returns the value associated with that key.
tree key BTree-get
------------------------
value
The base case []
As before, the stopping predicate just has to detect the empty list
Step12: TODO
Step13: [] 0 [C] [N] treestep
---------------------------
0
[n [tree*]] 0 [sum +] [] treestep
--------------------------------------------------
n [tree*] [0 [sum +] [] treestep] map sum +
Step14: A slight modification.
Let's simplify the tree datastructure definition slightly by just letting the children be the rest of the tree
Step15: I think these trees seem a little easier to read.
Redefining our BTree in terms of this form.
BTree = [] | [[key value] left right]
What kind of functions can we write for this with our treestep? The pattern for processing a non-empty node is
Step16: Doesn't work because map extracts the first item of whatever its mapped function produces. We have to return a list, rather than depositing our results directly on the stack.
[key value] N [left right] [K] map C
[key value] first [left right] [K] map flatten cons
key [left right] [K] map flatten cons
key [[lk] [rk] ] flatten cons
key [ lk rk ] cons
[key lk rk ]
So
Step17: There we go.
In-order traversal with treestep.
From here | Python Code:
from notebook_preamble import J, V, define
define('BTree-iter == [not] [pop] roll< [dupdip rest rest] cons [step] genrec')
J('[] [23] BTree-iter') # It doesn't matter what F is as it won't be used.
J('["tommy" 23 [] []] [first] BTree-iter')
J('["tommy" 23 ["richard" 48 [] []] ["jenny" 18 [] []]] [first] BTree-iter')
J('["tommy" 23 ["richard" 48 [] []] ["jenny" 18 [] []]] [second] BTree-iter')
Explanation: Treating Trees
Although any expression in Joy can be considered to describe a tree with the quotes as compound nodes and the non-quote values as leaf nodes, in this page I want to talk about ordered binary trees and how to make and use them.
The basic structure, in a crude type notation, is:
BTree :: [] | [key value BTree BTree]
That says that a BTree is either the empty quote [] or a quote with four items: a key, a value, and two BTrees representing the left and right branches of the tree.
A Function to Traverse this Structure
Let's take a crack at writing a function that can recursively iterate or traverse these trees.
Base case []
The stopping predicate just has to detect the empty list:
BTree-iter == [not] [E] [R0] [R1] genrec
And since there's nothing at this node, we just pop it:
BTree-iter == [not] [pop] [R0] [R1] genrec
Node case [key value left right]
Now we need to figure out R0 and R1:
BTree-iter == [not] [pop] [R0] [R1] genrec
== [not] [pop] [R0 [BTree-iter] R1] ifte
Let's look at it in situ:
[key value left right] R0 [BTree-iter] R1
Processing the current node.
R0 is almost certainly going to use dup to make a copy of the node and then dip on some function to process the copy with it:
[key value left right] [F] dupdip [BTree-iter] R1
[key value left right] F [key value left right] [BTree-iter] R1
For example, if we're getting all the keys F would be first:
R0 == [first] dupdip
[key value left right] [first] dupdip [BTree-iter] R1
[key value left right] first [key value left right] [BTree-iter] R1
key [key value left right] [BTree-iter] R1
Recur
Now R1 needs to apply [BTree-iter] to left and right. If we drop the key and value from the node using rest twice we are left with an interesting situation:
key [key value left right] [BTree-iter] R1
key [key value left right] [BTree-iter] [rest rest] dip
key [key value left right] rest rest [BTree-iter]
key [left right] [BTree-iter]
Hmm, will step do?
key [left right] [BTree-iter] step
key left BTree-iter [right] [BTree-iter] step
key left-keys [right] [BTree-iter] step
key left-keys right BTree-iter
key left-keys right-keys
Wow. So:
R1 == [rest rest] dip step
Putting it together
We have:
BTree-iter == [not] [pop] [[F] dupdip] [[rest rest] dip step] genrec
When I was reading this over I realized rest rest could go in R0:
BTree-iter == [not] [pop] [[F] dupdip rest rest] [step] genrec
(And [step] genrec is such a cool and suggestive combinator!)
Parameterizing the F per-node processing function.
[F] BTree-iter == [not] [pop] [[F] dupdip rest rest] [step] genrec
Working backward:
[not] [pop] [[F] dupdip rest rest] [step] genrec
[not] [pop] [F] [dupdip rest rest] cons [step] genrec
[F] [not] [pop] roll< [dupdip rest rest] cons [step] genrec
Ergo:
BTree-iter == [not] [pop] roll< [dupdip rest rest] cons [step] genrec
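For comparison (my sketch in plain Python, not part of the Joy derivation), here is the same key traversal written directly over the nested-list representation [key value left right].
def btree_iter(node, f):
    # Visit the node's key, then the left subtree, then the right subtree,
    # mirroring [F] dupdip rest rest followed by step on [left right].
    if not node:
        return
    key, value, left, right = node
    f(key)
    btree_iter(left, f)
    btree_iter(right, f)
tree = ["tommy", 23, ["richard", 48, [], []], ["jenny", 18, [], []]]
btree_iter(tree, print)   # tommy, richard, jenny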
End of explanation
define('BTree-new == swap [[] []] cons cons')
V('"v" "k" BTree-new')
Explanation: Adding Nodes to the BTree
Let's consider adding nodes to a BTree structure.
BTree value key BTree-add == BTree
Adding to an empty node.
If the current node is [] then you just return [key value [] []]:
BTree-add == [popop not] [[pop] dipd BTree-new] [R0] [R1] genrec
Where BTree-new is:
value key BTree-new == [key value [] []]
value key swap [[] []] cons cons
key value [[] []] cons cons
key [value [] []] cons
[key value [] []]
BTree-new == swap [[] []] cons cons
End of explanation
define('P == pop roll> pop first')
V('["k" "v" [] []] "vv" "kk" [0] P >')
Explanation: (As an implementation detail, the [[] []] literal used in the definition of BTree-new will be reused to supply the constant tail for all new nodes produced by it. This is one of those cases where you get amortized storage "for free" by using persistent datastructures. Because the tail, which is ((), ((), ())) in Python, is immutable and embedded in the definition body for BTree-new, all new nodes can reuse it as their own tail without fear that some other code somewhere will change it.)
If the current node isn't empty.
We now have to derive R0 and R1, consider:
[key_n value_n left right] value key R0 [BTree-add] R1
In this case, there are three possibilities: the key can be greater or less than or equal to the node's key. In two of those cases we will need to apply a copy of BTree-add, so R0 is pretty much out of the picture.
[R0] == []
A predicate to compare keys.
The first thing we need to do is compare the key we're adding to see if it is greater than the node key and branch accordingly, although in this case it's easier to write a destructive predicate and then use ifte to apply it nullary:
[key_n value_n left right] value key [BTree-add] R1
[key_n value_n left right] value key [BTree-add] [P >] [T] [E] ifte
[key_n value_n left right] value key [BTree-add] P >
[key_n value_n left right] value key [BTree-add] pop roll> pop first >
[key_n value_n left right] value key roll> pop first >
key [key_n value_n left right] value roll> pop first >
key key_n >
Boolean
P > == pop roll> pop first >
P < == pop roll> pop first <
P == pop roll> pop first
End of explanation
define('K == cons cons dipdd')
define('T == [K] cons cons cons infra')
V('"r" "l" "v" "k" "vv" "kk" [0] K')
V('["k" "v" "l" "r"] "vv" "kk" [0] T')
Explanation: If the key we're adding is greater than the node's key.
Here the parentheses are meant to signify that the right-hand side (RHS) is not literal, the code in the parentheses is meant to have been evaluated:
[key_n value_n left right] value key [BTree-add] T == [key_n value_n left (BTree-add key value right)]
Use infra on K.
So how do we do this? We know we're going to want to use infra on some function K that has the key and value to work with, as well as the quoted copy of BTree-add to apply somehow:
right left value_n key_n value key [BTree-add] K
...
right value key BTree-add left value_n key_n
Pretty easy:
right left value_n key_n value key [BTree-add] cons cons dipdd
right left value_n key_n [value key BTree-add] dipdd
right value key BTree-add left value_n key_n
So:
K == cons cons dipdd
And:
[key_n value_n left right] [value key [BTree-add] K] infra
Derive T.
So now we're at getting from this to this:
[key_n value_n left right] value key [BTree-add] T
...
[key_n value_n left right] [value key [BTree-add] K] infra
And so T is just:
value key [BTree-add] T == [value key [BTree-add] K] infra
T == [ K] cons cons cons infra
End of explanation
define('Te == [cons cons dipd] cons cons cons infra')
V('["k" "v" "l" "r"] "vv" "kk" [0] Te')
Explanation: If the key we're adding is less than the node's key.
This is very very similar to the above:
[key_n value_n left right] value key [BTree-add] E
[key_n value_n left right] value key [BTree-add] [P <] [Te] [Ee] ifte
In this case Te works that same as T but on the left child tree instead of the right, so the only difference is that it must use dipd instead of dipdd:
Te == [cons cons dipd] cons cons cons infra
This suggests an alternate factorization:
ccons == cons cons
T == [ccons dipdd] ccons cons infra
Te == [ccons dipd] ccons cons infra
But whatever.
End of explanation
define('Ee == pop swap roll< rest rest cons cons')
V('["k" "v" "l" "r"] "vv" "k" [0] Ee')
define('E == [P <] [Te] [Ee] ifte')
Explanation: Else the keys must be equal.
This means we must find:
[key_n value_n left right] value key [BTree-add] Ee
...
[key value left right]
This is another easy one:
Ee == pop swap roll< rest rest cons cons
[key_n value_n left right] value key [BTree-add] pop swap roll< rest rest cons cons
[key_n value_n left right] value key swap roll< rest rest cons cons
[key_n value_n left right] key value roll< rest rest cons cons
key value [key_n value_n left right] rest rest cons cons
key value [ left right] cons cons
[key value left right]
End of explanation
define('BTree-add == [popop not] [[pop] dipd BTree-new] [] [[P >] [T] [E] ifte] genrec')
J('[] 23 "b" BTree-add') # Initial
J('["b" 23 [] []] 88 "c" BTree-add') # Less than
J('["b" 23 [] []] 88 "a" BTree-add') # Greater than
J('["b" 23 [] []] 88 "b" BTree-add') # Equal to
J('[] 23 "a" BTree-add 88 "b" BTree-add 44 "c" BTree-add') # Series.
Explanation: Now we can define BTree-add
BTree-add == [popop not] [[pop] dipd BTree-new] [] [[P >] [T] [E] ifte] genrec
Putting it all together:
BTree-new == swap [[] []] cons cons
P == pop roll> pop first
T == [cons cons dipdd] cons cons cons infra
Te == [cons cons dipd] cons cons cons infra
Ee == pop swap roll< rest rest cons cons
E == [P <] [Te] [Ee] ifte
BTree-add == [popop not] [[pop] dipd BTree-new] [] [[P >] [T] [E] ifte] genrec
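The same persistent insertion written directly in Python over the nested-list representation (my comparison sketch, not part of the Joy code) may make the three branches easier to follow; untouched subtrees are shared with the input tree rather than copied.
def btree_add(node, value, key):
    if not node:                          # empty node: build a new one (BTree-new)
        return [key, value, [], []]
    node_key, node_value, left, right = node
    if key > node_key:                    # T: rebuild with a new right child
        return [node_key, node_value, left, btree_add(right, value, key)]
    if key < node_key:                    # Te: rebuild with a new left child
        return [node_key, node_value, btree_add(left, value, key), right]
    return [key, value, left, right]      # Ee: same key, replace the value
t = btree_add(btree_add(btree_add([], 23, 'a'), 88, 'b'), 44, 'c')
print(t)   # ['a', 23, [], ['b', 88, [], ['c', 44, [], []]]]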
End of explanation
J('[] [3 9 5 2 8 6 7 8 4] [0 swap BTree-add] step')
define('to_set == [] swap [0 swap BTree-add] step')
J('[3 9 5 2 8 6 7 8 4] to_set')
Explanation: We can use this to make a set-like datastructure by just setting values to e.g. 0 and ignoring them. It's set-like in that duplicate items added to it will only occur once within it, and we can query it in $O(\log_2 N)$ time.
End of explanation
define('unique == [to_set [first] BTree-iter] cons run')
J('[3 9 3 5 2 9 8 8 8 6 2 7 8 4 3] unique') # Filter duplicate items.
Explanation: And with that we can write a little program to remove duplicate items from a list.
End of explanation
from joy.library import FunctionWrapper
from joy.utils.stack import pushback
from notebook_preamble import D
@FunctionWrapper
def cmp_(stack, expression, dictionary):
L, (E, (G, (b, (a, stack)))) = stack
expression = pushback(G if a > b else L if a < b else E, expression)
return stack, expression, dictionary
D['cmp'] = cmp_
J("1 0 ['G'] ['E'] ['L'] cmp")
J("1 1 ['G'] ['E'] ['L'] cmp")
J("0 1 ['G'] ['E'] ['L'] cmp")
from joy.library import DefinitionWrapper
DefinitionWrapper.add_definitions('''
P == over [popop popop first] nullary
T> == [cons cons dipdd] cons cons cons infra
T< == [cons cons dipd] cons cons cons infra
E == pop swap roll< rest rest cons cons
BTree-add == [popop not] [[pop] dipd BTree-new] [] [P [T>] [E] [T<] cmp] genrec
''', D)
J('[] 23 "b" BTree-add') # Initial
J('["b" 23 [] []] 88 "c" BTree-add') # Less than
J('["b" 23 [] []] 88 "a" BTree-add') # Greater than
J('["b" 23 [] []] 88 "b" BTree-add') # Equal to
J('[] 23 "a" BTree-add 88 "b" BTree-add 44 "c" BTree-add') # Series.
Explanation: cmp combinator
Instead of all this mucking about with nested ifte let's just go whole hog and define cmp which takes two values and three quoted programs on the stack and runs one of the three depending on the results of comparing the two values:
a b [G] [E] [L] cmp
------------------------- a > b
G
a b [G] [E] [L] cmp
------------------------- a = b
E
a b [G] [E] [L] cmp
------------------------- a < b
L
We need a new non-destructive predicate P:
[key_n value_n left right] value key [BTree-add] P
[key_n value_n left right] value key [BTree-add] over [Q] nullary
[key_n value_n left right] value key [BTree-add] key [Q] nullary
[key_n value_n left right] value key [BTree-add] key Q
[key_n value_n left right] value key [BTree-add] key popop popop first
[key_n value_n left right] value key popop first
[key_n value_n left right] first
key_n
[key_n value_n left right] value key [BTree-add] key [Q] nullary
[key_n value_n left right] value key [BTree-add] key key_n
P == over [popop popop first] nullary
Here are the definitions again, pruned and renamed in some cases:
BTree-new == swap [[] []] cons cons
P == over [popop popop first] nullary
T> == [cons cons dipdd] cons cons cons infra
T< == [cons cons dipd] cons cons cons infra
E == pop swap roll< rest rest cons cons
Using cmp to simplify our code above at R1:
[key_n value_n left right] value key [BTree-add] R1
[key_n value_n left right] value key [BTree-add] P [T>] [E] [T<] cmp
The line above becomes one of the three lines below:
[key_n value_n left right] value key [BTree-add] T>
[key_n value_n left right] value key [BTree-add] E
[key_n value_n left right] value key [BTree-add] T<
The definition is a little longer but, I think, more elegant and easier to understand:
BTree-add == [popop not] [[pop] dipd BTree-new] [] [P [T>] [E] [T<] cmp] genrec
End of explanation
define('BTree-iter-order == [not] [pop] [dup third] [[cons dip] dupdip [[first] dupdip] dip [rest rest rest first] dip i] genrec')
J('[3 9 5 2 8 6 7 8 4] to_set BTree-iter-order')
Explanation: Factoring and naming
It may seem silly, but a big part of programming in Forth (and therefore in Joy) is the idea of small, highly-factored definitions. If you choose names carefully the resulting definitions can take on a semantic role.
get-node-key == popop popop first
remove-key-and-value-from-node == rest rest
pack-key-and-value == cons cons
prep-new-key-and-value == pop swap roll<
pack-and-apply == [pack-key-and-value] swoncat cons pack-key-and-value infra
BTree-new == swap [[] []] pack-key-and-value
P == over [get-node-key] nullary
T> == [dipdd] pack-and-apply
T< == [dipd] pack-and-apply
E == prep-new-key-and-value remove-key-and-value-from-node pack-key-and-value
A Version of BTree-iter that does In-Order Traversal
If you look back to the non-empty case of the BTree-iter function we can design a variant that first processes the left child, then the current node, then the right child. This will allow us to traverse the tree in sort order.
BTree-iter-order == [not] [pop] [R0 [BTree-iter] R1] ifte
To define R0 and R1 it helps to look at them as they will appear when they run:
[key value left right] R0 [BTree-iter-order] R1
Process the left child.
Staring at this for a bit suggests dup third to start:
[key value left right] R0 [BTree-iter-order] R1
[key value left right] dup third [BTree-iter-order] R1
[key value left right] left [BTree-iter-order] R1
Now maybe:
[key value left right] left [BTree-iter-order] [cons dip] dupdip
[key value left right] left [BTree-iter-order] cons dip [BTree-iter-order]
[key value left right] [left BTree-iter-order] dip [BTree-iter-order]
left BTree-iter-order [key value left right] [BTree-iter-order]
Process the current node.
So far, so good. Now we need to process the current node's values:
left BTree-iter-order [key value left right] [BTree-iter-order] [[F] dupdip] dip
left BTree-iter-order [key value left right] [F] dupdip [BTree-iter-order]
left BTree-iter-order [key value left right] F [key value left right] [BTree-iter-order]
If F needs items from the stack below the left stuff it should have cons'd them before beginning maybe? For functions like first it works fine as-is.
left BTree-iter-order [key value left right] first [key value left right] [BTree-iter-order]
left BTree-iter-order key [key value left right] [BTree-iter-order]
Process the right child.
First ditch the rest of the node and get the right child:
left BTree-iter-order key [key value left right] [BTree-iter-order] [rest rest rest first] dip
left BTree-iter-order key right [BTree-iter-order]
Then, of course, we just need i to run BTree-iter-order on the right side:
left BTree-iter-order key right [BTree-iter-order] i
left BTree-iter-order key right BTree-iter-order
Defining BTree-iter-order
The result is a little awkward:
R1 == [cons dip] dupdip [[F] dupdip] dip [rest rest rest first] dip i
Let's do a little semantic factoring:
fourth == rest rest rest first
proc_left == [cons dip] dupdip
proc_current == [[F] dupdip] dip
proc_right == [fourth] dip i
BTree-iter-order == [not] [pop] [dup third] [proc_left proc_current proc_right] genrec
Now we can sort sequences.
End of explanation
# I don't want to deal with name conflicts with the above so I'm inlining everything here.
# The original Joy system has "hide" which is a meta-command which allows you to use named
# definitions that are only in scope for a given definition. I don't want to implement
# that (yet) so...
define('''
BTree-get == [pop not] swap [] [
over [pop popop first] nullary
[[rest rest rest first] dipd i]
[popop second]
[[third] dipd i]
cmp
] genrec
''')
J('[] "gary" [popop "err"] BTree-get')
J('["gary" 23 [] []] "gary" [popop "err"] BTree-get')
J('''
[] [[0 'a'] [1 'b'] [2 'c']] [i BTree-add] step
'c' [popop 'not found'] BTree-get
''')
Explanation: Getting values by key
Let's derive a function that accepts a tree and a key and returns the value associated with that key.
tree key BTree-get
------------------------
value
The base case []
As before, the stopping predicate just has to detect the empty list:
BTree-get == [pop not] [E] [R0] [R1] genrec
But what do we do if the key isn't in the tree? In Python we might raise a KeyError but I'd like to avoid exceptions in Joy if possible, and here I think it's possible. (Division by zero is an example of where I think it's probably better to let Python crash Joy. Sometimes the machinery fails and you have to "stop the line", methinks.)
Let's pass the buck to the caller by making the base case a given, you have to decide for yourself what [E] should be.
tree key [E] BTree-get
---------------------------- key in tree
value
tree key [E] BTree-get
---------------------------- key not in tree
tree key E
Now we define:
BTree-get == [pop not] swap [R0] [R1] genrec
Note that this BTree-get creates a slightly different function than itself and that function does the actual recursion. This kind of higher-level programming is unusual in most languages but natural in Joy.
tree key [E] [pop not] swap [R0] [R1] genrec
tree key [pop not] [E] [R0] [R1] genrec
The anonymous specialized recursive function that will do the real work.
[pop not] [E] [R0] [R1] genrec
Node case [key value left right]
Now we need to figure out R0 and R1:
[key value left right] key R0 [BTree-get] R1
We want to compare the search key with the key in the node, and if they are the same return the value and if they differ then recurse on one of the child nodes. So it's very similar to the above funtion, with [R0] == [] and R1 == P [T>] [E] [T<] cmp:
[key value left right] key [BTree-get] P [T>] [E] [T<] cmp
So:
get-node-key == pop popop first
P == over [get-node-key] nullary
The only difference is that get-node-key does one less pop because there's no value to discard. Now we have to derive the branches:
[key_n value_n left right] key [BTree-get] T>
[key_n value_n left right] key [BTree-get] E
[key_n value_n left right] key [BTree-get] T<
The cases of T> and T< are similar to above but instead of using infra we have to discard the rest of the structure:
[key_n value_n left right] key [BTree-get] T> == right key BTree-get
[key_n value_n left right] key [BTree-get] T< == left key BTree-get
So:
T> == [fourth] dipd i
T< == [third] dipd i
E.g.:
[key_n value_n left right] key [BTree-get] [fourth] dipd i
[key_n value_n left right] fourth key [BTree-get] i
right key [BTree-get] i
right key BTree-get
And:
[key_n value_n left right] key [BTree-get] E == value_n
E == popop second
So:
fourth == rest rest rest first
get-node-key == pop popop first
P == over [get-node-key] nullary
T> == [fourth] dipd i
T< == [third] dipd i
E == popop second
BTree-get == [pop not] swap [] [P [T>] [E] [T<] cmp] genrec
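Again for comparison (my sketch, not part of the Joy text), the same lookup in plain Python, with the caller-supplied E represented as a function that is called when the key is absent.
def btree_get(node, key, absent):
    if not node:                          # base case []: defer to the caller
        return absent(key)
    node_key, value, left, right = node
    if key > node_key:
        return btree_get(right, key, absent)
    if key < node_key:
        return btree_get(left, key, absent)
    return value
print(btree_get(['gary', 23, [], []], 'gary', lambda k: 'err'))   # 23
print(btree_get([], 'gary', lambda k: 'err'))                     # err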
End of explanation
DefinitionWrapper.add_definitions('''
TS0 == [not] swap unit [pop] swoncat
TS1 == [dip] cons [i] swoncat
treestep == swap [map] swoncat [TS1 [TS0] dip] dip genrec
''', D)
Explanation: TODO: BTree-delete
Then, once we have add, get, and delete we can see about abstracting them.
tree key [E] BTree-delete
---------------------------- key in tree
tree
tree key [E] BTree-delete
---------------------------- key not in tree
tree key E
So:
BTree-delete == [pop not] [] [R0] [R1] genrec
And:
[n_key n_value left right] key R0 [BTree-get] R1
[n_key n_value left right] key [dup first] dip [BTree-get] R1
[n_key n_value left right] n_key key [BTree-get] R1
[n_key n_value left right] n_key key [BTree-get] roll> [T>] [E] [T<] cmp
[n_key n_value left right] [BTree-get] n_key key [T>] [E] [T<] cmp
BTree-delete == [pop not] swap [[dup first] dip] [roll> [T>] [E] [T<] cmp] genrec
[n_key n_value left right] [BTree-get] T>
[n_key n_value left right] [BTree-get] E
[n_key n_value left right] [BTree-get] T<
[n_key n_value left right] [BTree-get]
[n_key n_value left right] [BTree-get] E
[n_key n_value left right] [BTree-get] T<
Tree with node and list of trees.
Let's consider a tree structure, similar to one described "Why functional programming matters" by John Hughes, that consists of a node value and a sequence of zero or more child trees. (The asterisk is meant to indicate the Kleene star.)
tree = [] | [node [tree*]]
treestep
In the spirit of step we are going to define a combinator treestep which expects a tree and three additional items: a base-case value z, and two quoted programs [C] and [N].
tree z [C] [N] treestep
If the current tree node is empty then just leave z on the stack in lieu:
[] z [C] [N] treestep
---------------------------
z
Otherwise, evaluate N on the node value, map the whole function (abbreviated here as k) over the child trees recursively, and then combine the result with C.
[node [tree*]] z [C] [N] treestep
--------------------------------------- w/ K == z [C] [N] treestep
node N [tree*] [K] map C
Derive the recursive form.
Since this is a recursive function, we can begin to derive it by finding the ifte stage that genrec will produce. The predicate and base-case functions are trivial, so we just have to derive J.
K == [not] [pop z] [J] ifte
The behavior of J is to accept a (non-empty) tree node and arrive at the desired outcome.
[node [tree*]] J
------------------------------
node N [tree*] [K] map C
So J will have some form like:
J == .. [N] .. [K] .. [C] ..
Let's dive in. First, unquote the node and dip N.
[node [tree*]] i [N] dip
node [tree*] [N] dip
node N [tree*]
Next, map K over the child trees and combine with C.
node N [tree*] [K] map C
node N [tree*] [K] map C
node N [K.tree*] C
So:
J == i [N] dip [K] map C
Plug it in and convert to genrec:
K == [not] [pop z] [i [N] dip [K] map C] ifte
K == [not] [pop z] [i [N] dip] [map C] genrec
Extract the givens to parameterize the program.
[not] [pop z] [i [N] dip] [map C] genrec
[not] [pop z] [i [N] dip] [map C] genrec
[not] [z] [pop] swoncat [i [N] dip] [map C] genrec
[not] z unit [pop] swoncat [i [N] dip] [map C] genrec
z [not] swap unit [pop] swoncat [i [N] dip] [map C] genrec
\ .........TS0............./
\/
z TS0 [i [N] dip] [map C] genrec
z [i [N] dip] [TS0] dip [map C] genrec
z [[N] dip] [i] swoncat [TS0] dip [map C] genrec
z [N] [dip] cons [i] swoncat [TS0] dip [map C] genrec
\ ......TS1........./
\/
z [N] TS1 [TS0] dip [map C] genrec
z [N] [map C] [TS1 [TS0] dip] dip genrec
z [N] [C] [map] swoncat [TS1 [TS0] dip] dip genrec
z [C] [N] swap [map] swoncat [TS1 [TS0] dip] dip genrec
The givens are all to the left so we have our definition.
Define treestep
TS0 == [not] swap unit [pop] swoncat
TS1 == [dip] cons [i] swoncat
treestep == swap [map] swoncat [TS1 [TS0] dip] dip genrec
End of explanation
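Before the Joy demos that follow, a hedged Python rendering of the treestep combinator just derived may help; the function name and the [node, [child_trees]] unpacking are assumptions of this sketch, not part of the Joy code.
def treestep(tree, z, C, N):
    # [] z [C] [N] treestep  ->  z
    if not tree:
        return z
    node, children = tree                                  # tree = [node [tree*]]
    mapped = [treestep(t, z, C, N) for t in children]      # [tree*] [K] map
    return C(N(node), mapped)                              # node N [K.tree*] C
# mirrors `0 [sum +] [] treestep`: N is the identity, C adds the node value to the summed children
sum_tree = lambda t: treestep(t, 0, lambda n, ms: n + sum(ms), lambda n: n)
assert sum_tree([]) == 0
assert sum_tree([23, [[2, []], [3, []]]]) == 28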
J('[] 0 [sum +] [] treestep')
J('[23 []] 0 [sum +] [] treestep')
J('[23 [[2 []] [3 []]]] 0 [sum +] [] treestep')
Explanation: [] 0 [C] [N] treestep
---------------------------
0
[n [tree*]] 0 [sum +] [] treestep
--------------------------------------------------
n [tree*] [0 [sum +] [] treestep] map sum +
End of explanation
define('TS1 == [dip] cons [uncons] swoncat') # We only need to redefine one word.
J('[23 [2] [3]] 0 [sum +] [] treestep')
J('[23 [2 [8] [9]] [3] [4 []]] 0 [sum +] [] treestep')
Explanation: A slight modification.
Let's simplify the tree data structure definition slightly by just letting the children be the rest of the tree:
tree = [] | [node tree*]
The J function changes slightly.
[node tree*] J
------------------------------
node N [tree*] [K] map C
[node tree*] uncons [N] dip [K] map C
node [tree*] [N] dip [K] map C
node N [tree*] [K] map C
node N [tree*] [K] map C
node N [K.tree*] C
J == uncons [N] dip [K] map C
K == [not] [pop z] [uncons [N] dip] [map C] genrec
End of explanation
J('[[3 0] [[2 0] [] []] [[9 0] [[5 0] [[4 0] [] []] [[8 0] [[6 0] [] [[7 0] [] []]] []]] []]] 23 [i] [uncons pop] treestep')
Explanation: I think these trees seem a little easier to read.
Redefining our BTree in terms of this form.
BTree = [] | [[key value] left right]
What kind of functions can we write for this with our treestep? The pattern for processing a non-empty node is:
node N [tree*] [K] map C
Plugging in our BTree structure:
[key value] N [left right] [K] map C
[key value] uncons pop [left right] [K] map i
key [value] pop [left right] [K] map i
key [left right] [K] map i
key [lkey rkey ] i
key lkey rkey
End of explanation
J('[[3 0] [[2 0] [] []] [[9 0] [[5 0] [[4 0] [] []] [[8 0] [[6 0] [] [[7 0] [] []]] []]] []]] [] [flatten cons] [first] treestep')
Explanation: This doesn't work, because map extracts only the first item of whatever the mapped function produces. We have to return a list, rather than depositing our results directly on the stack.
[key value] N [left right] [K] map C
[key value] first [left right] [K] map flatten cons
key [left right] [K] map flatten cons
key [[lk] [rk] ] flatten cons
key [ lk rk ] cons
[key lk rk ]
So:
[] [flatten cons] [first] treestep
End of explanation
J('[[3 0] [[2 0] [] []] [[9 0] [[5 0] [[4 0] [] []] [[8 0] [[6 0] [] [[7 0] [] []]] []]] []]] [] [i roll< swons concat] [uncons pop] treestep')
Explanation: There we go.
In-order traversal with treestep.
From here:
key [[lk] [rk]] C
key [[lk] [rk]] i
key [lk] [rk] roll<
[lk] [rk] key swons concat
[lk] [key rk] concat
[lk key rk]
So:
[] [i roll< swons concat] [first] treestep
End of explanation |
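For comparison, a hedged Python counterpart of this in-order traversal over the [[key value] left right] layout (the function name is an assumption of this sketch):
def inorder_keys(tree):
    # [] gives [], otherwise keys of the left subtree, then the node key, then the right subtree
    if not tree:
        return []
    (key, _value), left, right = tree
    return inorder_keys(left) + [key] + inorder_keys(right)
t = [[3, 0], [[2, 0], [], []], [[9, 0], [[5, 0], [], []], []]]
assert inorder_keys(t) == [2, 3, 5, 9]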
212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Original Voce-Chaboche Model Fitting Using Only Tension Test Example 1
An example of fitting the original Voce-Chaboche model to only a tension test is presented in this notebook.
Documentation for all of the functions used in this example can be found by looking at the docstrings of the functions.
Step1: Run the constrained tensile-only optimization
This is a canonical example for fitting the Voce-Chaboche model to a tension test with cyclic behavior embedded through constraints on the parameters.
A simple model appears to work best in this case, so only one backstress is specified.
The overall steps to calibrate the model parameters are as follows
Step2: Plot results
After the analysis is finished we can plot the test data versus the fitted model.
Note that we add two dummy parameters to the list of final parameters because the plotting function was written for the updated Voce-Chaboche model that has two additional parameters.
Setting the first of these two additional parameters equal to zero neglects the effects of the updated model.
If we set output_dir='./output/', for example, instead of output_dir='' the uvc_data_plotter function will save pdf's of all the plots instead of displaying them below.
The function uvc_data_multi_plotter is also provided to give more fine-grained control over the plotting process, and can compare multiple analyses. | Python Code:
import RESSPyLab as rpl
import numpy as np
Explanation: Original Voce-Chaboche Model Fitting Using Only Tension Test Example 1
An example of fitting the original Voce-Chaboche model to only a tension test is presented in this notebook.
Documentation for all of the functions used in this example can be found by looking at the docstrings of the functions.
End of explanation
# Specify the true stress-strain of the tension test to be used in the calibration
data_files = ['example_3.csv']
# Set initial parameters for the Voce-Chaboche model with one backstresses
# [E, \sigma_{y0}, Q_\infty, b, C_1, \gamma_1]
x_0 = np.array([200000., 355., 1.0, 1.0, 1.0, 1.0])
# Set the bounds on the hardening metrics
rho_iso_inf = 0.35
rho_iso_sup = 0.50
rho_yield_inf = 1.5
rho_yield_sup = 2.5
rho_gamma_b_inf = 2.25
rho_gamma_b_sup = 3.25
rho_gamma_12_inf = 0.
rho_gamma_12_sup = 0.
# Log files for the parameters at each step, and values of the objective function at each step
x_log = './output/tensile_x_log.txt'
fun_log = './output/tensile_fun_log.txt'
# Run the calibration
# Set filter_data=True if you have NOT already filtered/reduced the data
# We recommend that you filter/reduce the data beforehand (i.e., filter_data=False is recommended)
sol = rpl.vc_tensile_opt_scipy(x_0, data_files, rho_iso_inf, rho_iso_sup, rho_yield_inf, rho_yield_sup,
rho_gamma_b_inf, rho_gamma_b_sup, rho_gamma_12_inf, rho_gamma_12_sup,
x_log_file=x_log, fun_log_file=fun_log, filter_data=True,
max_its=600, tol=1.e-8, make_x0_feasible=True)
# Two objects are returned, the solution point is the first one
x_sol = sol[0]
Explanation: Run the constrained tensile-only optimization
This is a canonical example for fitting the Voce-Chaboche model to a tension test with cyclic behavior embedded through constraints on the parameters.
A simple model appears to work best in this case, so only one backstress is specified.
The overall steps to calibrate the model parameters are as follows:
1. Load the tension test
2. Choose a starting point
3. Set the bounds on the hardening metric constraints
4. Set the location to save the analysis history
5. Run the analysis
End of explanation
data = rpl.load_data_set(data_files)
# Added parameters are necessary for plotting the Voce-Chaboche model
x_sol_2 = np.insert(x_sol, 4, [0., 1.])
rpl.uvc_data_plotter(x_sol_2, data, output_dir='', file_name='vc_tensile_example_plot', plot_label='Fitted')
Explanation: Plot results
After the analysis is finished we can plot the test data versus the fitted model.
Note that we add two dummy parameters to the list of final parameters because the plotting function was written for the updated Voce-Chaboche model that has two additional parameters.
Setting the first of these two additional parameters equal to zero neglects the effects of the updated model.
If we set output_dir='./output/', for example, instead of output_dir='' the uvc_data_plotter function will save pdf's of all the plots instead of displaying them below.
The function uvc_data_multi_plotter is also provided to give more fine-grained control over the plotting process, and can compare multiple analyses.
End of explanation |
213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Arbeitsgrundlagen
Im Versuch M1 geht es darum die Geschwindigkeit einer Pistolenkugel auf verschiedene Arten zu bestimmen. Dabei gibt es noch verschiedene Arten von Geschwindigkeiten.
Die gesamten Formeln und Illustrationen dazu wurden aus den Unterlagen des Dozenten entnommen.
Flugzeitmethode
Die mittlere Geschwindigkeit eines Objektes kann durch \ref{eq
Step1: <center>
<t>Strecke zwischen den Lichtschranken
Step2: Es resultiert in Grafik \ref{fig
Step3: Vergleich der Messmethoden
Um die Messmethoden vergleichen zu können wurden alle einzelnen errechneten Geschwindigkeiten mithilfe der Luftwiderstandskorrekturgleichung \ref{eq
Step4: Streuung der Mündungsgeschwindigkeit
In Grafik \ref{fig
Step5: Vergleich der Messmethoden
Um die Messmethoden vergleichen zu können wurden alle einzelnen errechneten Geschwindigkeiten mithilfe der Luftwiderstandskorrekturgleichung \ref{eq
Step6: Fehlerrechnung
Ballistische Methode
Die Mittelwerte der ballistischen Methode unterscheiden sich um {{'{0
Step7: Wie anhand der Fehlerbalken in Grafik \ref{fig
Step8: Natürlich kann sofort erkannt werden wie gross der systematische Fehler bei der ballistischen Methode ist. Deswegen wurden so unterschiedliche Ergebnisse für die Projektilgeschwindigkeit erhalten. Die Fehlerbereiche überlappen sich nun und es ist somit okay anzunehmen dass die beiden Rechnungen stimmen wobei die Flugzeitmethode um ein Vielfaches genauer ist.
Drehstossmethode
Die Mittelwerte der Drehstossethode unterscheiden sich um {{'{0
Step9: Wie anhand der Fehlerbalken in Grafik \ref{fig
Step10: Auch hier kann erneut erkannt werden wie gross der systematische Fehler bei der Drehstossmethode ist. Deswegen wurden auch hier so unterschiedliche Ergebnisse für die Projektilgeschwindigkeit erhalten. Die Fehlerbereiche überlappen sich nun und es ist somit okay anzunehmen dass die beiden Rechnungen stimmen wobei die Flugzeitmethode auch hier um ein Vielfaches genauer ist.
Resultate & Diskussion
Flugzeitmethode
Die Flugzeitmethode wurde mit genauen Messgeräten durchgeführt, welche im Verhältnis zu den beiden unkonventionelleren Methoden praktisch keinen Fehler bergen. Deswegen kann die Flugzeitmethode als eine gewisse Referenz betrachtet werden. Wenn Grafik \ref{fig
Step12: Ballistische Methode
Die Resultate aus dem Versuch mit der ballistischen Methode können der Tabelle \ref{tab
Step13: Die Resultate aus der ballistischen Methode unterscheiden sich kaum von denen der Flugzeitmethode. Um genau {{'{0
Step14: Die Resultate aus der Drehstossmethode unterscheiden sich ebenfalls kaum von denen der Flugzeitmethode. Um {{'{0
Step15: Flugzeitmethode
Step16: Drehstossmethode | Python Code:
# define base values and measurements
v1_s = 0.500
v1_sb1 = 1.800
v1_sb2 = 1.640
v1_m = np.mean([0.47, 0.46, 0.46, 0.46, 0.46, 0.47, 0.46, 0.46, 0.46, 0.46, 4.65 / 10]) * 1e-3
v1_T = np.mean([28.68 / 10, 28.91 / 10])
v1_cw = 0.75
v1_cw_u = 0.08
v1_A = 4*1e-6
v1_pl = 1.2041
def air_resistance(s, v):
k = v1_cw * v1_A * v1_pl / 2 / v1_m
v0 = v / (1 - k * s)
return v0
Explanation: Fundamentals
Experiment M1 is about determining the velocity of a pistol bullet in several different ways, and several different notions of velocity are involved.
All formulas and illustrations were taken from the lecturer's course material.
Flugzeitmethode (time-of-flight method)
The mean velocity of an object can be computed with \ref{eq:velocity}.
\begin{equation}
\overline{v} = \frac{s}{t}
\label{eq:velocity}
\end{equation}
If the launch time, the impact time and the flight distance are known, the time difference and hence the mean flight velocity of the bullet can be determined.
Ballistische Methode (ballistic method)
In this method a shot is fired at a suspended pendulum. The equations in the following section were provided by the lecturer.
The momentum p is given by the equation in \ref{eq:impulse}. This momentum can be measured when the pistol bullet hits a ballistic pendulum. Because of the large mass difference between bullet and pendulum the bullet remains stuck, so the collision is inelastic and a large part of the kinetic energy is converted into heat, which makes the energy conservation law difficult to apply here.
\begin{equation}
p = mv = (m + M)\cdot u
\label{eq:impulse}
\end{equation}
As the pendulum swings out, the remaining kinetic energy is converted into potential energy of elevation.
Using energy conservation, the velocity u is expressed through the lift height h via equation \ref{eq:hubheight}.
\begin{equation}
u = \sqrt{2gh}
\label{eq:hubheight}
\end{equation}
Equations \ref{eq:h} and \ref{eq:cosphi} can be read off from figure 1.
\begin{equation}
h = l \cdot (1 - cos\varphi)
\label{eq:h}
\end{equation}
\begin{equation}
\cos\varphi = \frac{a}{\sqrt{a^2 + x^2}}
\label{eq:cosphi}
\end{equation}
Since the centre of mass, and hence the length l of the pendulum, can only be determined imprecisely, a detour via the oscillation period T is taken, as shown in \ref{eq:Schwingungsdauer}.
\begin{equation}
T = 2\pi\cdot\sqrt{\frac{l}{g}}
\label{eq:Schwingungsdauer}
\end{equation}
This yields equation \ref{eq:u}.
\begin{equation}
u = \frac{g}{2\pi}\cdot T\cdot\sqrt{2\cdot\Bigg(1-\frac{1}{\sqrt{1 + (\frac{x}{a})^2}}\Bigg)}
\label{eq:u}
\end{equation}
Combining this with \ref{eq:impulse} and applying a Taylor expansion, equation \ref{eq:u} can be approximated by equation \ref{eq:u_taylor} for the bullet velocity v.
\begin{equation}
v = \frac{g}{2\pi}\cdot\frac{M+m}{m}\cdot T\cdot\frac{x}{a}\cdot\Bigg[1-\frac{3}{8}\cdot\Big(\frac{x}{a}\Big)^2\Bigg]
\label{eq:u_taylor}
\end{equation}
Since the Taylor series is only expanded up to degree 2, the next term would be of order $10^{-4}$, so the result is very accurate.
The quantities m, M, a, x and T therefore have to be determined during the experiment.
Drehstossmethode (rotational-impact method)
In this method, as shown in figure 2, a bullet is fired at a dumbbell that can rotate freely about a rigid axis. The equations in the following section were provided by the lecturer.
For a point mass m moving in a straight line with velocity v, the angular momentum with respect to the rotation axis a is given by equation \ref{eq:drehimpuls}.
\begin{equation}
\vec{L_a}=\vec{r}\times\vec{p}=m\cdot (\vec{r}\times\vec{v})
\label{eq:drehimpuls}
\end{equation}
Furthermore, as stated in equation \ref{eq:gesamtdrehimpuls}, the total angular momentum of the system is conserved.
\begin{equation}
L_a=L_{a,Kugel}+L_{a,Hantel}=const.
\label{eq:gesamtdrehimpuls}
\end{equation}
It follows that equation \ref{eq:ruhe} holds for the initially resting dumbbell; it is then equated with equation \ref{eq:bewegung} for the moving dumbbell after the impact.
\begin{equation}
L_a=m\cdot d\cdot v+0
\label{eq:ruhe}
\end{equation}
\begin{equation}
L_a=\omega\cdot(I_{a,H}+m\cdot d^2)
\label{eq:bewegung}
\end{equation}
From this, the velocity of the pistol bullet at impact follows as equation \ref{eq:vdrehstoss}, where $I_{a,H}$ is the moment of inertia of the dumbbell with respect to a and $\omega$ the angular velocity of the dumbbell about the axis a.
\begin{equation}
v=\omega\cdot\bigg(\frac{I_{a,H}}{m\cdot d}+d\bigg)
\label{eq:vdrehstoss}
\end{equation}
For this experiment the quantities $I_{a,H}$, m, d and $\omega$ therefore have to be determined.
$I_{a,H}$ can be obtained from equation \ref{eq:inertia}, where $I_{a,H_0}$ is the moment of inertia of the dumbbell without weights (supplied by the lecturer), $m$ is the mass of one weight and $d$ its distance from the centre of the dumbbell. The equation follows from the moment of inertia of a point mass, equation \ref{eq:J_punktmasse}: since there are two weights, this term is added twice to the moment of the bare dumbbell.
\begin{equation}
I_{a,P}=m*r^2
\label{eq:J_punktmasse}
\end{equation}
\begin{equation}
I_{a,H}=2md^2 + I_{a,H_0}
\label{eq:inertia}
\end{equation}
Influence of air drag on the bullet velocity
So far it has been shown how the mean velocity and the impact velocity can be determined. The muzzle velocity of the bullet is also of interest; the equations for computing it, given below, were provided by the lecturer.
It can be determined via the acting air drag. The drag force is given by the relation in \ref{eq:luftwiderstand}, where v is the instantaneous velocity of the bullet, A its cross-sectional area, $\rho_L$ the air density and c_w the drag coefficient of the projectile.
\begin{equation}
F_L=\frac{1}{2}c_\omega\cdot A\cdot\rho_L\cdot v^2
\label{eq:luftwiderstand}
\end{equation}
The projectile loses exactly as much kinetic energy as the drag force performs braking work, as expressed by equations \ref{eq:kinetische_energie} and \ref{eq:kinetische_energie_eingesetzt}.
\begin{equation}
dE_{kin}=-F_Lds
\label{eq:kinetische_energie}
\end{equation}
\begin{equation}
m\cdot v\cdot dv=-\frac{1}{2}c_\omega\cdot A\cdot\rho_L\cdot v^2\cdot ds
\label{eq:kinetische_energie_eingesetzt}
\end{equation}
This leads to the differential equation \ref{eq:kin_diff}. With the initial conditions s = 0 and v = $v_0$ one obtains equation \ref{eq:v0}, which for very small $k\cdot s$ can be approximated by \ref{eq:v0_approx}. In all of these equations $k = \frac{c_\omega\cdot A\cdot\rho_L}{2m}$.
\begin{equation}
\frac{dv}{v}=-k\cdot ds
\label{eq:kin_diff}
\end{equation}
\begin{equation}
v=v_0\cdot e^{-k\cdot s}
\label{eq:v0}
\end{equation}
\begin{equation}
v=v_0\cdot (1-k\cdot s)
\label{eq:v0_approx}
\end{equation}
Experimental procedure
For all three methods used in the experiment to determine the projectile velocity (Flugzeitmethode, ballistic method, Drehstossmethode) the setup was measured, which gave the following parameters:
End of explanation
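For reference, here is a hedged Python sketch of the formulas just derived (\ref{eq:velocity}, \ref{eq:u_taylor}, \ref{eq:vdrehstoss} and \ref{eq:v0_approx}); the function names are assumptions of this sketch and the numbers in the last line are placeholders, not the measured values.
import numpy as np
g = 9.80705
def v_flight(s, t):
    # eq. velocity: mean speed between the two light barriers
    return s / t
def v_ballistic(x, a, T, m, M):
    # eq. u_taylor, including the (M+m)/m momentum factor, giving the bullet speed
    r = x / a
    return g / (2 * np.pi) * (M + m) / m * T * r * (1 - 3.0 / 8.0 * r ** 2)
def v_rotation(T_half, I_aH, m, d):
    # eq. vdrehstoss with omega = pi / (T/2)
    omega = np.pi / T_half
    return omega * (I_aH / (m * d) + d)
def v_muzzle(v, s, m, c_w=0.75, A=4e-6, rho_L=1.2041):
    # eq. v0_approx solved for v_0, with k = c_w * A * rho_L / (2 m)
    k = c_w * A * rho_L / (2 * m)
    return v / (1 - k * s)
print(v_muzzle(v_flight(0.5, 0.0032), s=0.25, m=0.46e-3))   # placeholder numbers only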
# Evaluate Data
# Read Data
v1_df = pd.read_csv('data/ballistisch.csv')
v1_M = 0.09836
v1_l = 1.973
v1_g = 9.80705
# Plot only one graph of the linear fit
df = pd.read_csv('data/ballistsisch_0.csv')
slope, intercept, r, p, sem = stats.linregress(df['measurement'] + 0.25, df['x'])
n = np.linspace(0, 21, 100)
ax = df.plot(kind='scatter', x='measurement', y='x', label='gemessener Ausschlag')
plt.plot(n, [i * slope + intercept for i in n], label='linearer Fit', axes=ax)
plt.xlabel('Messung')
plt.ylabel('x [m]')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:luftwiderstand_fit', caption='Lineare Regression zur bestimmung des maximalen Ausschlages des Pendels, falls keinerlei Unidealitäten dieses beeinflussen.')
figure.show()
Explanation: <center>
<t>Distance between the light barriers:</t> $s$ = {{v1_s}} $m$
<t>Distance between muzzle and pendulum:</t> $s_{B1}$ = {{v1_sb1}} $m$
<t>Distance between muzzle and dumbbell:</t> $s_{B2}$ = {{v1_sb2}} $m$
<t>Mean mass of the projectile:</t> $m$ = {{'{0:.2f}'.format(v1_m)}} $kg$
<t>Mean oscillation period of the pendulum:</t> $T$ = {{'{0:.2f}'.format(v1_T)}} $s$
<t>Cross-sectional area of a projectile:</t> $A$ = {{'{0:.2f}'.format(v1_A)}} $m^2$
<t>Drag coefficient of a projectile:</t> $c_{\omega}$ = {{v1_cw}}
<t>Air density:</t> $\rho_{L}$ = {{v1_pl}}$\frac{kg}{m^3}$
</center>
Ballistische Methode
First, 10 measurements were made with the ballistic method, in which a shot is fired at a pendulum.
Both the time the projectile needs to pass the two light barriers and the deflection of the pendulum were measured.
Since the pendulum equation accounts neither for air drag nor for friction in the suspension, 5 readings of the deflection were taken per shot. A linear regression then yields the corrected first deflection: the y-intercept of the fitted line is exactly the first deflection corrected for air drag.
It is important to use n-0.75 instead of n for the amplitude measurements, because by the time the first amplitude is read, 0.25 oscillations have already passed, and so on.
To make counting easier, only every fifth deflection was read at the upper end of the ruler, so the amplitudes were recorded at the oscillation counts [0.25, 5.25, 10.25, 15.25, 20.25]. A linear fit through these values gives the desired result corrected for the non-idealities.
This procedure is shown in figure \ref{fig:luftwiderstand_fit}. It was carried out for all ten shots, but is plotted here only for the first shot.
End of explanation
# Calculate mean velocity in air
v1_df['v_flug'] = pd.Series(v1_s / v1_df['t'])
# Calculate all x from the measurements
v1_x = []
for n in range(10):
df = pd.read_csv('data/ballistsisch_{}.csv'.format(n))
slope, intercept, r, p, sem = stats.linregress(df['measurement'] + 0.25, df['x'])
v1_x.append(intercept)
v1_df['x'] = pd.Series(v1_x)
# Calculate all v_B at impact
v1_k = v1_g / (2 * math.pi) * (v1_M + v1_m) / v1_m / v1_l * v1_T
v1_b = 1 - 3 / 8 * (v1_df['x'] / v1_l)**2
v1_df['v_ballistic'] = pd.Series(v1_k * v1_df['x'] * v1_b)
v1_mean = v1_df.mean()
# Plot calculated velocities
ax = v1_df.plot(kind='scatter', x='measurement', y='v_flug', label='$v_F$: errechnete Geschwindigkeit mit Flugzeitmethode')
plt.scatter(v1_df['measurement'], v1_df['v_ballistic'], label='$v_B$: errechnete Geschwindigkeit mit ballistischer Methode', color='red', axes=ax)
plt.axhline(y=v1_mean['v_flug'], axes=ax, color='blue', label='Mittelwert von $v_F$')
plt.axhline(y=v1_mean['v_ballistic'], axes=ax, color='red', label='Mittelwert von $v_B$')
plt.xlabel('Messung')
plt.ylabel('$v[\\frac{m}{s}]$')
#plt.ylim([150, 160])
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:v_ballistisch', caption='Aufprallgeschwindigkeiten der Kugel durch Ballistische Methode errechnet, im Vergleich zu den jeweiligen Geschwindigkeiten, welche durch die Flugzeitmethode berechnet wurden.')
figure.show()
Explanation: Figure \ref{fig:luftwiderstand_fit} thus yields a maximum deflection of {{'{0:.2f}'.format(intercept)}}m, in which all losses are already accounted for.
Mean velocities
This procedure was carried out for all ten measurements. The impact velocity can now be computed from equation \ref{eq:u_taylor}.
The computed mean velocities are compared in figure \ref{fig:v_ballistisch}.
End of explanation
# Luftwiderstandskorrektur single and mean calculation
v1_flug_muendung = pd.Series(list(map(partial(air_resistance, v1_s / 2), v1_df['v_flug'])))
v1_ballistic_muendung = pd.Series(list(map(partial(air_resistance, v1_sb1), v1_df['v_ballistic'])))
v1_a = v1_flug_muendung / v1_ballistic_muendung
v1_a_mean = np.mean(v1_a)
v1_a_sem = stats.sem(v1_a)
v1_df['a'] = v1_a
v1_df['v_flug_muendung'] = v1_flug_muendung
v1_df['v_ballistic_muendung'] = v1_ballistic_muendung
v1_mean = v1_df.mean()
ax = v1_df.plot(kind='scatter', x='measurement', y='v_flug_muendung', label='$v_{0,F}$')
plt.scatter(v1_df['measurement'], v1_df['v_ballistic_muendung'], label='$v_{0,B}$', color='red', axes=ax)
plt.axhline(y=v1_mean['v_flug_muendung'], axes=ax, color='blue', label='Mittelwert von $v_F$')
plt.axhline(y=v1_mean['v_ballistic_muendung'], axes=ax, color='red', label='Mittelwert von $v_B$')
plt.ylabel('$v_0 [\\frac{m}{s}]$')
plt.xlabel('Messung')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:laufzeiten_luft_v_balllistisch_mean_muendung', caption='Die Geschwindigkeiten errechnet mit der Flugzeit- und der ballistischen Methode an Mündung im Vergleich.')
figure.show()
Explanation: Comparison of the measurement methods
To compare the measurement methods, every computed velocity was converted to a common reference point (the muzzle was chosen here) using the air-drag correction equation \ref{eq:v0_approx}.
For the Flugzeitmethode the distance $s$ = $\frac{s}{2}$ was used, and for the ballistic method $s$ = $s_{B1}$.
End of explanation
# Evaluate Data
# Calculate inertia
v2_d = np.array([92e-3, 91e-3])
v2_m = np.array([np.mean([15.86, 15.88])*1e-3, np.mean([42.03, 42.13])*1e-3])
# Inertia
v2_J = v2_m * v2_d**2
v2_J3 = 7.4e-5
v2_J3_u = 0.1e-5
# Read Data
v2_df = pd.read_csv('data/drehstoss.csv')
v2_df['omega'] = math.pi / v2_df['T_2']
v2_df['v_drehstoss'] = v2_df['omega'] * ((v2_J[v2_df['m']] * 2 + v2_J3) / (v2_d[v2_df['m']] * v1_m) + v2_d[v2_df['m']])
v2_df['v_flug'] = v1_s / v2_df['t']
v2_mean = v2_df.mean()
ax = v2_df.plot(kind='scatter', x='measurement', y='v_drehstoss', label='$v_{D,k}$')
plt.scatter(v2_df['measurement'], v2_df['v_flug'], label='$v_{F}$', color='red', axes=ax)
plt.axhline(y=v2_mean['v_drehstoss'], axes=ax, color='blue', label='Mittelwert von $v_F$')
plt.axhline(y=v2_mean['v_flug'], axes=ax, color='red', label='Mittelwert von $v_D$')
plt.ylabel('$v [\\frac{m}{s}]$')
plt.xlabel('Messung')
plt.ylim([150, 160])
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:v_luft_v_drehstoss', caption='Die Geschwindigkeiten errechnet mit der Flugzeit- und der Drehstossmethode mit je fünf Messungen beider Gewichten im Vergleich.')
figure.show()
Explanation: Spread of the muzzle velocity
Figure \ref{fig:laufzeiten_luft_v_balllistisch_mean_muendung} shows the muzzle velocities computed with the Flugzeitmethode and the air-drag correction.
Drehstossmethode
Here a bullet was fired at a horizontally rotatable dumbbell. The flight time between the light barriers was measured again; in addition, the time the dumbbell needs for half a revolution was recorded, from which the angular velocity $\omega$ follows via equation \ref{eq:omega}.
\begin{equation}
\omega=\frac{2\cdot\pi}{T}
\label{eq:omega}
\end{equation}
Using the derived equation \ref{eq:vdrehstoss}, the projectile velocity could then be determined; equation \ref{eq:inertia} was used to obtain the moments of inertia of the two dumbbell configurations with different weights.
The measurement results of both experiments are presented in the following part,
together with a comparison of this method against the Flugzeitmethode.
Mean velocities
To compare the different methods, the mean values of the Flugzeitmethode and the Drehstossmethode were computed.
The computed mean velocities and their uncertainties are compared in figure \ref{fig:laufzeiten_luft_v_drehstoss}.
End of explanation
# Luftwiderstandskorrektur single and mean calculation
v2_mean = v2_df.mean()
v2_sem = v2_df.sem()
v2_df['v_flug_muendung'] = pd.Series(list(map(partial(air_resistance, v1_s / 2), v2_df['v_flug'])))
v2_df['v_drehstoss_muendung'] = pd.Series(list(map(partial(air_resistance, v1_sb1), v2_df['v_drehstoss'])))
v2_df['a'] = v2_df['v_flug_muendung'] / v2_df['v_drehstoss_muendung']
v2_mean = v2_df.mean()
ax = v2_df.plot(kind='scatter', x='measurement', y='v_flug_muendung', label='$v_{0,F}$')
plt.scatter(v2_df['measurement'], v2_df['v_drehstoss_muendung'], label='$v_{0,D}$', color='red', axes=ax)
plt.axhline(y=v2_mean['v_flug_muendung'], axes=ax, color='blue', label='Mittelwert von $v_F$')
plt.axhline(y=v2_mean['v_drehstoss_muendung'], axes=ax, color='red', label='Mittelwert von $v_D$')
plt.ylabel('$v_0 [\\frac{m}{s}]$')
plt.xlabel('Messung')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:laufzeiten_luft_v_drehstoss_mean', caption='Die Geschwindigkeiten errechnet mit der Flugzeit- und der Drehstossmethode an Mündung im Vergleich.')
figure.show()
Explanation: Comparison of the measurement methods
To compare the measurement methods, every computed velocity was converted to a common reference point (again the muzzle) using the air-drag correction equation \ref{eq:v0_approx}.
For the Flugzeitmethode the distance $s$ = $\frac{s}{2}$ was used, and for the Drehstossmethode $s$ = $s_{B2}$.
End of explanation
# Plot different methods with sem
v1_mean = v1_df.mean()
v1_sem = v1_df.sem()
v1_flug_muendung_mean = air_resistance(v1_s / 2, v1_mean['v_flug'])
v1_flug_muendung_sem = air_resistance(v1_s / 2, v1_sem['v_flug'])
v1_ballistic_muendung_mean = air_resistance(v1_sb1, v1_mean['v_ballistic'])
v1_ballistic_muendung_sem = air_resistance(v1_sb1, v1_sem['v_ballistic'])
x = [1, 4, 7, 10]
y = [
v1_mean['v_flug'],
v1_mean['v_ballistic'],
v1_flug_muendung_mean,
v1_ballistic_muendung_mean
]
e = [
v1_sem['v_flug'],
v1_sem['v_ballistic'],
v1_flug_muendung_sem,
v1_ballistic_muendung_sem
]
labels = ['$v_F$', '$v_B$', '$v_{0,F}$', '$v_{0,B}$']
fig = plt.figure()
plt.errorbar(x, y, yerr=e, fmt='o')
plt.xlim([0, 11])
#plt.ylim([150, 160])
plt.xticks(x, labels, rotation='35')
plt.ylabel('$v_0 [\\frac{m}{s}]$')
plt.text(0.01, 0.7,
'''$v_F $: Flugzeitmethode
$v_B $: Ballistische Methode
$v_{0,F}$: Mündungsgeschwindigkeit Flugzeitmethode
$v_{0,B}$: Mündungsgeschwindigkeit ballistische Methode'''
, transform = ax.transAxes, backgroundcolor='white', va='bottom')
plt.close(fig)
figure = PrettyFigure(fig, label='fig:laufzeiten_luft_v_balllistisch', caption='Die mittleren Geschwindigkeiten der ballistischen und Flugzeitmethode im Vergleich mit ihren respektiven mittleren Mündungsgeschwindigkeiten.')
figure.show()
Explanation: Error analysis
Ballistische Methode
The mean values of the ballistic method and the Flugzeitmethode differ by {{'{0:.2f}'.format(abs(v1_mean['v_flug'] - v1_mean['v_ballistic']))}}$\frac{m}{s}$. Although this is only {{'{0:.2f}'.format((1 - v1_mean['v_flug'] / v1_mean['v_ballistic']) * 100)}}%, the difference has to be investigated.
As a first step the statistical error was computed; perhaps it already lies within an acceptable range.
End of explanation
# Calculate systematic error
v1_T_u = 3e-2
v1_x_u = 5e-3
v1_l_u = 5e-3
v1_mean = v1_df.mean()
v1_sem = v1_df.sem()
v1_k1 = v1_g / (2 * math.pi) * (v1_M + v1_m) / v1_m * v1_mean['x'] / v1_l
v1_b1 = 1 - 3 / 8 * (v1_mean['x'] / v1_l)**2
v1_k2 = v1_g / (2 * math.pi) * (v1_M + v1_m) / v1_m * v1_T / v1_l
v1_b2 = 1 - 9 / 8 * (v1_mean['x'] / v1_l)**2
v1_k3 = v1_g / (2 * math.pi) * (v1_M + v1_m) / v1_m * v1_T * v1_mean['x'] / v1_l**2
v1_b3 = -1 + 9 / 8 * (v1_mean['x'] / v1_l)**2
v1_systematic_error = math.sqrt((v1_k1 * v1_b1 * v1_T_u)**2
+ (v1_k2 * v1_b2 * v1_x_u)**2
+ (v1_k3 * v1_b3 * v1_l_u)**2)
# Plot different methods with sem
x = [1, 4]
y = [
v1_mean['v_flug'],
v1_mean['v_ballistic']
]
e = [
v1_sem['v_flug'],
v1_systematic_error
]
labels = ['$v_F$', '$v_B$']
fig = plt.figure()
plt.errorbar(x, y, yerr=e, fmt='o')
plt.xlim([0, 5])
#plt.ylim([150, 160])
plt.xticks(x, labels, rotation='35')
plt.ylabel('$v_0 [\\frac{m}{s}]$')
plt.text(-0.026, 0.76,
'''$v_F $: Flugzeitmethode
$v_B $: Ballistische Methode'''
, transform = ax.transAxes, backgroundcolor='white', va='bottom')
plt.close(fig)
figure = PrettyFigure(fig, label='fig:laufzeiten_luft_v_balllistisch_systematischer_fehler', caption='Die systematischen Fehler der mittleren Geschwindigkeiten der ballistischen und Flugzeitmethode im Vergleich.')
figure.show()
Explanation: As the error bars in figure \ref{fig:laufzeiten_luft_v_balllistisch} quickly show, the statistical error is relatively small. This also means, however, that the error ranges do not overlap, which is problematic: there is only one true projectile velocity, and although it can never be determined exactly, non-overlapping error ranges imply that either one measurement is wrong or the systematic error is much larger.
An error-propagation calculation was therefore done for the ballistic method. For the Flugzeitmethode it is not very informative, since its measurement errors are very small; in the ballistic method, however, a value was read off by eye while the pendulum was moving, which naturally leads to a large uncertainty. Since $M$, $m$ and $g$ could be determined with very precise instruments, they are neglected in the error propagation.
The resulting systematic error is therefore given by equation \ref{eq:ffg_pendel}.
\begin{equation}
S_{v,sys}=\sqrt{(\frac{\partial v}{\partial T}\cdot s_T)^2 + (\frac{\partial v}{\partial x}\cdot s_x)^2 + (\frac{\partial v}{\partial a}\cdot s_a)^2}
\label{eq:ffg_pendel}
\end{equation}
\small
\begin{equation}
= \sqrt{\bigg(\frac{g}{2\pi}\cdot\frac{M+m}{m}\cdot\frac{x}{a}\cdot\bigg[1-\frac{3}{8}\cdot\Big(\frac{x}{a}\Big)^2\bigg]s_T\bigg)^2
+ \bigg(\frac{g}{2\pi}\cdot\frac{M+m}{m}\cdot T\cdot\frac{1}{a}\cdot\bigg[1-\frac{9}{8}\cdot\Big(\frac{x}{a}\Big)^2\bigg]s_x\bigg)^2
+ \bigg(\frac{g}{2\pi}\cdot\frac{M+m}{m}\cdot T\cdot\frac{x}{a^2}\cdot\bigg[1-\frac{9}{8}\cdot\Big(\frac{x}{a}\Big)^2\bigg]s_a\bigg)^2}
\end{equation}
\normalsize
End of explanation
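As a hedged cross-check of \ref{eq:ffg_pendel}, the same Gaussian propagation can be evaluated with numerical partial derivatives instead of the hand-derived ones; the uncertainties below match the ones assumed in the systematic-error cell above (3e-2 s, 5e-3 m, 5e-3 m), while the mean values passed in are placeholders.
import numpy as np
def propagate(f, values, sigmas, h=1e-6):
    # sqrt(sum_i (df/dx_i * sigma_i)^2) using central finite differences
    values = np.asarray(values, dtype=float)
    total = 0.0
    for i, sigma in enumerate(sigmas):
        step = np.zeros_like(values)
        step[i] = h
        dfdx = (f(*(values + step)) - f(*(values - step))) / (2 * h)
        total += (dfdx * sigma) ** 2
    return np.sqrt(total)
def v_ball(T, x, a, m=0.46e-3, M=0.09836, g=9.80705):
    r = x / a
    return g / (2 * np.pi) * (M + m) / m * T * r * (1 - 3.0 / 8.0 * r ** 2)
print(propagate(v_ball, [2.88, 0.20, 1.973], [3e-2, 5e-3, 5e-3]))   # placeholder mean values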
# Plot different methods with sem
v2_mean = v2_df.mean()
v2_sem = v2_df.sem()
v2_flug_muendung_mean = air_resistance(v1_s / 2, v2_mean['v_flug'])
v2_flug_muendung_sem = air_resistance(v1_s / 2, v2_sem['v_flug'])
v2_drehstoss_muendung_mean = air_resistance(v1_sb2, v2_mean['v_drehstoss'])
v2_drehstoss_muendung_sem = air_resistance(v1_sb2, v2_sem['v_drehstoss'])
x = [1, 4, 7, 10]
y = [
v2_mean['v_flug'],
v2_mean['v_drehstoss'],
v2_flug_muendung_mean,
v2_drehstoss_muendung_mean
]
e = [
v2_sem['v_flug'],
v2_sem['v_drehstoss'],
v2_flug_muendung_sem,
v2_drehstoss_muendung_sem
]
labels = ['$v_F$', '$v_D$', '$v_{0,F}$', '$v_{0,D}$']
fig = plt.figure()
plt.errorbar(x, y, yerr=e, fmt='o')
plt.xlim([0, 11])
plt.ylim([150, 160])
plt.xticks(x, labels, rotation='35')
plt.ylabel('$v_0 [\\frac{m}{s}]$')
plt.text(-0.026, 0.7,
'''$v_F $: Flugzeitmethode
$v_D $: Drehstossmethode
$v_{0,F}$: Mündungsgeschwindigkeit Flugzeitmethode
$v_{0,D}$: Mündungsgeschwindigkeit Drehstossmethode'''
, transform = ax.transAxes, backgroundcolor='white', va='bottom')
plt.close(fig)
figure = PrettyFigure(fig, label='fig:laufzeiten_luft_v_drehstoss', caption='Die mittleren Geschwindigkeiten der Drehstoss- und Flugzeitmethode im Vergleich mit ihren respektiven mittleren Mündungsgeschwindigkeiten.')
figure.show()
Explanation: It is immediately apparent how large the systematic error of the ballistic method is, which explains why such different results were obtained for the projectile velocity. The error ranges now overlap, so it is reasonable to assume that both calculations are correct, with the Flugzeitmethode being many times more precise.
Drehstossmethode
The mean values of the Drehstossmethode and the Flugzeitmethode differ by {{'{0:.2f}'.format(abs(v2_mean['v_flug'] - v2_mean['v_drehstoss']))}}$\frac{m}{s}$. Although this is only {{'{0:.2f}'.format((1 - v2_mean['v_flug'] / v2_mean['v_drehstoss']) * 100)}}%, the difference has to be investigated.
Again, the statistical error was computed first; perhaps it already lies within an acceptable range.
End of explanation
# Calculate systematic error
v2_d_u = 0.5e-3
v2_T_u = 5e-3
v2_mean = v2_df.mean()
v2_sem = v2_df.sem()
v2_k1 = -math.pi / v2_mean['T_2']**2 * ((v2_J[0] * 2 + v2_J3) / v1_m / v2_d[0] + v2_d[0])
v2_k2 = math.pi / v2_mean['T_2'] * (1 - (v2_J[0] * 2 + v2_J3) / v1_m / v2_d[0]**2)
v2_systematic_error = math.sqrt((v2_k1 * v2_d_u)**2 + (v2_k1 * v2_T_u)**2)
# Plot different methods with sem
x = [1, 4]
y = [
v2_mean['v_flug'],
v2_mean['v_drehstoss']
]
e = [
v2_sem['v_flug'],
v2_systematic_error
]
labels = ['$v_F$', '$v_D$']
fig = plt.figure()
plt.errorbar(x, y, yerr=e, fmt='o')
plt.xlim([0, 5])
#plt.ylim([150, 160])
plt.xticks(x, labels, rotation='35')
plt.ylabel('$v_0 [\\frac{m}{s}]$')
plt.text(0.01, 0.76,
'''$v_F $: Flugzeitmethode
$v_D $: Drehstossmethode'''
, transform = ax.transAxes, backgroundcolor='white', va='bottom')
plt.close(fig)
figure = PrettyFigure(fig, label='fig:laufzeiten_luft_v_drehstoss_systematischer_fehler', caption='Die systematischen Fehler der mittleren Geschwindigkeiten der Drehstoss- und Flugzeitmethode im Vergleich.')
figure.show()
Explanation: As the error bars in figure \ref{fig:laufzeiten_luft_v_drehstoss} quickly show, the statistical error is again relatively small. This also means that the error ranges do not overlap, which is problematic: there is only one true projectile velocity, and since the ranges do not overlap, either one measurement is wrong or the systematic error is once again much larger.
An error-propagation calculation was therefore also done for the Drehstossmethode, taking the variables $d$ and $\frac{T}{2}$ into account; the other values are measurements made with precise instruments.
The resulting systematic error is therefore given by equation \ref{eq:ffg_hantel}.
\begin{equation}
S_{v,sys}=\sqrt{\Big(\frac{\partial v}{\partial \frac{T}{2}}\cdot s_{T/2}\Big)^2 + \Big(\frac{\partial v}{\partial d}\cdot s_d\Big)^2}
= \sqrt{
\bigg(-\frac{\pi}{(\frac{T}{2})^2}\cdot\bigg(\frac{I_{a,H}}{m\cdot d}+d\bigg)\cdot s_{T/2}\bigg)^2
+ \bigg(\frac{\pi}{\frac{T}{2}}\cdot\bigg(1-\frac{I_{a,H}}{m\cdot d^2}\bigg)\cdot s_d\bigg)^2
}
\label{eq:ffg_hantel}
\end{equation}
End of explanation
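A hedged symbolic cross-check of the partial derivatives used in \ref{eq:ffg_hantel}, done with sympy; this is not part of the report's own code.
import sympy as sp
T2, d, m, I = sp.symbols('T2 d m I', positive=True)
v = sp.pi / T2 * (I / (m * d) + d)          # eq. vdrehstoss with omega = pi / (T/2)
dv_dT2 = sp.diff(v, T2)                     # mathematically equals -pi/T2**2 * (I/(m*d) + d)
dv_dd = sp.diff(v, d)                       # mathematically equals  pi/T2 * (1 - I/(m*d**2))
print(sp.simplify(dv_dT2))
print(sp.simplify(dv_dd))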
# Plot different methods with sem
v1_mean = v1_df.mean()['v_flug_muendung']
v1_sem = v1_df.sem()['v_flug_muendung']
v1_std = v1_df.std()['v_flug_muendung']
ax = v1_df.plot(kind='scatter', x='measurement', y='v_flug_muendung', label='errechnete Geschwindigkeit an der Mündung')
plt.axhline(y=v1_mean, axes=ax, color='red', label='Mittelwert')
plt.axhline(y=v1_mean+v1_sem, axes=ax, color='green', label='Mittelwert ± Fehler')
plt.axhline(y=v1_mean-v1_sem, axes=ax, color='green')
plt.axhline(y=v1_mean+v1_std, axes=ax, color='purple', label='Mittelwert ± Standardabweichung')
plt.axhline(y=v1_mean-v1_std, axes=ax, color='purple')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.xlabel('Messung')
plt.ylabel('$v_{0,F} [\\frac{m}{s}]$')
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:laufzeiten_luft_muendung', caption='Die Mündungsgeschwindigkeiten errechnet mit der Flugzeitmethode. Dazu ihr Mittelwert und der Bereich der Standardabweichung.')
figure.show()
Explanation: Here too it is apparent how large the systematic error of the Drehstossmethode is, which again explains the different results for the projectile velocity. The error ranges now overlap, so it is reasonable to assume that both calculations are correct, with the Flugzeitmethode once more being many times more precise.
Results & discussion
Flugzeitmethode
The Flugzeitmethode was carried out with precise instruments that, compared to the two less conventional methods, introduce practically no error, so it can be regarded as a reference of sorts. Looking at figure \ref{fig:laufzeiten_luft_muendung}, one can easily see that 70% of the results lie within one standard deviation, slightly more than the expected 68%. Of course, far too few measurements were taken for the 68% to ever be reached exactly, so 70% is already a very good result, and it can be assumed that the measurements are not overly affected by errors.
End of explanation
# Show results
import math
from IPython.display import (
display, display_html, display_png, display_svg
)
class PrettyTable(list):
Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders HTML and LaTeX Table in
IPython Notebook. For LaTeX export two styles can be chosen.
def __init__(self, initlist=[], label=None, caption='Description missing', extra_header=None, entries_per_column=100, significant_digits=4, print_latex_longtable=True):
self.print_latex_longtable = print_latex_longtable
self.entries_per_column = entries_per_column
self.significant_digits = significant_digits
self.caption = caption
self.label = label
if extra_header is not None:
extra_header = [e.replace('%', '\\%') for e in extra_header]
if len(initlist[0]) != len(extra_header):
raise ValueError("Header list must have same length as data has columns.")
initlist = [extra_header]+list(initlist)
super(PrettyTable, self).__init__(initlist)
def latex_table_tabular(self):
latex = ["\\begin{tabular}"]
latex.append("{"+"|".join((["l"]*len(self[0])))+"}\n")
for row in self:
latex.append(" & ".join(map(format, row)))
latex.append("\\\\ \n")
latex.append("\\end{tabular}")
return ''.join(latex)
def latex_longtable(self):
latex = ["\\begin{longtable}[H]{@{}"]
l = len(self) - 1
li = len(self[0])
latex.append("l" * (li * math.ceil(l / self.entries_per_column)))
latex.append("@{}}\n")
latex.append("\\toprule\\addlinespace\n")
line = (" & ".join(map(format, self[0])))
latex.append((line + " & ") * (math.ceil(l / self.entries_per_column) - 1))
latex.append(line)
latex.append("\\\\\\addlinespace \n")
latex.append("\\midrule\\endhead\n")
rows = []
rows_done = 0
for row in self[1:]:
if rows_done < self.entries_per_column:
if isinstance(row, str):
rows.append(" & ".join(row))
elif isinstance(row, float):
rows.append(" & ".join(map(('{0:.' + str(self.significant_digits) + 'f}').format, row)))
else:
rows.append(" & ".join(map(str, row)))
rows.append("\\\\\\addlinespace \n")
else:
rows[(rows_done % self.entries_per_column) * 2] += " & " + " & ".join(map(('{0:.' + str(self.significant_digits) + 'f}').format, row))
rows_done += 1
latex.extend(rows)
latex.append('\\\\\\bottomrule\\caption{%s}\\\\\\label{%s}' % (self.caption, self.label))
latex.append("\n \\end{longtable}")
return ''.join(latex).replace('%','\\%')
def _repr_html_(self):
html = ["<table style=\"margin:auto;\">"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
html.append('<p style="text-align:center">{0}</p>'.format(self.caption))
return ''.join(html)
def _repr_latex_(self):
if self.print_latex_longtable:
return self.latex_longtable()
else:
return self.latex_table_tabular()
def show(self):
display(self)
v1_mean = v1_df.mean()
v1_sem = v1_df.sem()
v1_std = v1_df.std()
values = [
'mit Flugzeitmethode $v_{F}$',
'an der Mündung mit Flugzeitmethode $v_{0,F}$',
'mit ballistischer Methode $v_{B}$',
'an der Mündung mit ballistischer Methode $v_{0,B}$',
]
means = [
'{0:.2f}'.format(v1_mean['v_flug']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v1_mean['v_flug_muendung']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v1_mean['v_ballistic']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v1_mean['v_ballistic_muendung']) + r'$\frac{m}{s}$'
]
sem = [
'{0:.2f}'.format(v1_sem['v_flug_muendung']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v1_sem['v_flug']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v1_sem['v_ballistic']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v1_sem['v_ballistic_muendung']) + r'$\frac{m}{s}$'
]
syst = [
r'$\footnotetext[1]{Fehler nicht relevanter Grösse und deshalb weggelassen.}$',
r'$\footnotemark[1]$',
'{0:.2f}'.format(v1_systematic_error) + r'$\frac{m}{s}$',
r'$\footnotetext[2]{Fehler wurde der Einfachheit halber auf Absprache mit dem Dozenten weggelassen.}$'
]
rel = [
'{0:.2f}'.format(v1_sem['v_flug'] / v1_mean['v_flug'] * 100) + '%',
'{0:.2f}'.format(v1_sem['v_flug_muendung'] / v1_mean['v_flug_muendung'] * 100) + '%',
'{0:.2f}'.format(v1_systematic_error / v1_mean['v_ballistic'] * 100) + '%',
r'$\footnotemark[2]$'
]
v1_results_tbl = PrettyTable(list(zip(values, means, sem, syst, rel)), label='tab:resultat_ballistisch', caption='Resultate aus dem Versuch der Ballistischen Methode.', extra_header=[
'Projektilgeschwindigkeit',
'Wert',
'stat. Fehler',
'syst. Fehler',
'Relativer Fehler'
], entries_per_column=4)
v1_results_tbl.show()
# Plot of a
ax = v1_df.plot(kind='scatter', x='measurement', y='a', label='errechnetes Verhältnis $a_i = V_{F,i}/V_{B,i}$')
plt.axhline(y=v1_a_mean, axes=ax, label='Mittelwert des Verhältnisses a', color='green')
plt.axhline(y=1, axes=ax, label='Erwarteter Wert 1', color='purple')
plt.xlabel('Messung')
plt.ylabel('a')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:laufzeiten_luft_v_balllistisch_mean_ratio', caption='Die mittleren Geschwindigkeiten errechnet mit der Flugzeit- und der ballistischen Methode an Mündung im Verhältnis dargestellt und ihr erwarteter Wert von 1 zum Vergleich.')
figure.show()
Explanation: Ballistische Methode
The results of the experiment with the ballistic method are listed in table \ref{tab:resultat_ballistisch}.
End of explanation
# Show results
v2_mean = v2_df.mean()
v2_sem = v2_df.sem()
v2_std = v2_df.std()
values = [
'mit Flugzeitmethode $v_{F}$',
'an der Mündung mit Flugzeitmethode $v_{0,F}$',
'mit Drehstossmethode $v_{B}$',
'an der Mündung mit Drehstossmethode $v_{0,B}$',
]
means = [
'{0:.2f}'.format(v2_mean['v_flug']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v2_mean['v_flug_muendung']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v2_mean['v_drehstoss']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v2_mean['v_drehstoss_muendung']) + r'$\frac{m}{s}$'
]
sem = [
'{0:.2f}'.format(v2_sem['v_flug_muendung']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v2_sem['v_flug']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v2_sem['v_drehstoss']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(v2_sem['v_drehstoss_muendung']) + r'$\frac{m}{s}$'
]
syst = [
r'$\footnotemark[1]$',
r'$\footnotemark[1]$',
'{0:.2f}'.format(v2_systematic_error) + r'$\frac{m}{s}$',
r'$\footnotemark[2]$'
]
rel = [
'{0:.2f}'.format(v2_sem['v_flug'] / v2_mean['v_flug'] * 100) + '%',
'{0:.2f}'.format(v2_sem['v_flug_muendung'] / v2_mean['v_flug_muendung'] * 100) + '%',
'{0:.2f}'.format(v2_systematic_error / v2_mean['v_drehstoss'] * 100) + '%',
r'$\footnotemark[2]$'
]
v2_results_tbl = PrettyTable(list(zip(values, means, sem, syst, rel)), label='tab:resultat_drehstoss', caption='Resultate aus dem Versuch der Drehstossmethode.', extra_header=[
'Projektilgeschwindigkeit',
'Wert',
'stat. Fehler',
'syst. Fehler',
'Relativer Fehler'
], entries_per_column=4)
v2_results_tbl.show()
# Plot ratio a
ax = v2_df.plot(kind='scatter', x='measurement', y='a', label='errechnetes Verhältnis $a_i = V_{F,i}/V_{D,i}$')
plt.axhline(y=v2_mean['a'], axes=ax, label='Mittelwert des Verhältnisses a', color='green')
plt.axhline(y=1, axes=ax, label='Erwarteter Wert 1', color='purple')
plt.xlabel('Messung')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
plt.close(ax.figure)
figure = PrettyFigure(ax.figure, label='fig:laufzeiten_luft_v_drehstoss_mean_ratio', caption='Die mittleren Geschwindigkeiten errechnet mit der Flugzeit- und der Drehstossmethode an Mündung im Verhältnis dargestellt und ihr erwarteter Wert von 1 zum Vergleich.')
figure.show()
Explanation: The results of the ballistic method differ only slightly from those of the Flugzeitmethode, namely by exactly {{'{0:.2f}'.format((1 - v1_mean['v_flug'] / v1_mean['v_ballistic']) * 100)}}%. The ratio of the mean values can be seen nicely in figure \ref{fig:laufzeiten_luft_v_balllistisch_mean_ratio}. This is already quite accurate, yet the projectile velocity is defined by exactly one value, so which value is correct?
For exactly this reason an error analysis was done; its results are shown in table \ref{tab:resultat_ballistisch}. There one sees clearly that the statistical error is very small while the systematic error is all the larger. The Flugzeitmethode has practically no systematic error, since the measurement distance and the flight time can be determined almost error-free.
With the ballistic method, by contrast, many values were read off by eye and can therefore carry considerable error, since the eye is not precise enough.
The error is so large that the result of the Flugzeitmethode lies within the error range of the ballistic method. It can therefore be assumed that the Flugzeitmethode determines the result fairly accurately and with a much smaller error range. One could form a weighted mean of both methods, but since the error of the ballistic method is so much larger than that of the Flugzeitmethode, it would essentially coincide with the Flugzeitmethode result.
All in all, agreeing to within about two percent, the calculation is quite accurate for this method as well!
Drehstossmethode
The results of the experiment with the Drehstossmethode are listed in table \ref{tab:resultat_drehstoss}.
End of explanation
for n in range(10):
df = pd.read_csv('data/ballistsisch_{}.csv'.format(n))
data = PrettyTable(list(zip(df['measurement'], df['x'])), caption='Schuss {0}'.format(n + 1), entries_per_column=5, extra_header=['Messung [1]', 'Ausschlag [m]'])
data.show()
Explanation: The results of the Drehstossmethode likewise differ only slightly from those of the Flugzeitmethode, by {{'{0:.2f}'.format((1 - v2_mean['v_flug'] / v2_mean['v_drehstoss']) * 100)}}% to be exact. The ratio of the mean values can be seen nicely in figure \ref{fig:laufzeiten_luft_v_drehstoss_mean_ratio}. This is already quite accurate, yet the projectile velocity is defined by exactly one value, so which value is correct?
For exactly this reason an error analysis was done here as well; its results are shown in table \ref{tab:resultat_drehstoss}. Again the statistical error is very small while the systematic error is all the larger. The Flugzeitmethode has practically no systematic error, since the measurement distance and the flight time can be determined almost error-free.
With the Drehstossmethode, by contrast, several values were read off by eye and can therefore carry considerable error, since the eye is not precise enough.
The error is so large that the result of the Flugzeitmethode lies within the error range of the Drehstossmethode. It can therefore be assumed that the Flugzeitmethode determines the result fairly accurately and with a much smaller error range. One could form a weighted mean of both methods, but since the error of the Drehstossmethode is so much larger than that of the Flugzeitmethode, it would essentially coincide with the Flugzeitmethode result.
All in all, agreeing to within about one percent, the calculation is quite accurate for this method as well!
The Drehstossmethode is thus more accurate than the ballistic method. This can mainly be attributed to the increased measurement uncertainty when three instead of two parameters have to be read off.
Appendix
Ballistische Methode
End of explanation
data = PrettyTable(list(zip(v1_df['measurement'], v1_df['t'])), caption='Laufzeiten der einzelnen Schüsse.', entries_per_column=10, extra_header=['Messung [1]', 'Laufzeit [s]'])
data.show()
Explanation: Flugzeitmethode
End of explanation
v2_df['real_m'] = v2_m[v2_df['m']]
data = PrettyTable(list(zip(v2_df['measurement'], v2_df['t'], v2_df['T_2'], v2_df['real_m'])), caption='Messwerte der Drehstossmethode.', entries_per_column=10, extra_header=['Messung [1]', 'Laufzeit [s]', 'Halbe Periode [s]', 'Masse eines Hantelstücks [kg]'])
data.show()
Explanation: Drehstossmethode
End of explanation |
214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representing an Artificial Neural Network as a Cartesian Genetic Program
(a.k.a dCGPANN)
Neural networks (deep, shallow, convolutional or not) are, after all, computer programs and as such can be encoded in a chromosome and represented as a Genetic Program. A dCGPANN, coded in the class expression_ann, is exactly this
Step1: Instantiating and inspecting a dCGPANN
Step2: Mutating a dCGPANN
Step3: Training a dCGPANN | Python Code:
# Initial import
import dcgpy
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
%matplotlib inline
Explanation: Representing an Artificial Neural Network as a Cartesian Genetic Program
(a.k.a dCGPANN)
Neural networks (deep, shallow, convolutional or not) are, after all, computer programs and as such can be encoded in a chromosome and represented as a Genetic Program. A dCGPANN, coded in the class expression_ann, is exactly this: a feed-forward neural network represented in a chromosome using a Cartesian Genetic Programming encoding. Derivatives with respect to weights and biases can be easily extracted, the underlying algorithm being backward automated differentiation. Unlike other dCGP expressions, higher-order derivatives are not available (no gduals).
End of explanation
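Purely as an illustration of what such a chromosome ultimately encodes (plain numpy, independent of the dcgpy API; all names and numbers here are assumptions of this sketch), every active node applies one kernel to a weighted sum of its inputs plus a bias:
import numpy as np
kernels = {
    "sig":  lambda z: 1.0 / (1.0 + np.exp(-z)),
    "ReLu": lambda z: np.maximum(z, 0.0),
    "tanh": np.tanh,
}
def node_output(kernel_name, weights, bias, inputs):
    # one node of the encoded network: kernel(w . x + b)
    return kernels[kernel_name](np.dot(weights, inputs) + bias)
x = np.array([0.1, -0.3])
h = node_output("ReLu", np.array([0.2, -0.5]), 0.0, x)            # a hidden node
y = node_output("tanh", np.array([1.3]), 0.01, np.array([h]))     # an output node
print(y)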
# A limited amount of kernels are available for dCGPANN. Notably the most common in deep learning literature are.
nonlinearities = dcgpy.kernel_set_double(["sig", "ReLu", "tanh"])
# After defining the possible nonlinearities, we instantiate the dCGPANN
dcgpann = dcgpy.expression_ann(inputs=2, outputs=2, rows=20, cols=5,
levels_back=2, arity=[2,20,10,10,5], kernels=nonlinearities())
# By default all weights (and biases) are set to 1 (and 0). We initialize them normally distributed
dcgpann.randomise_weights(mean = 0., std = 0.1)
dcgpann.randomise_biases(mean = 0., std = 0.001)
# We then visualize the network thus encoded as a cartesian program
ax = dcgpann.visualize(show_nonlinearities=True, active_connection_alpha=0.0)
# The weights and biases can be extracted as a whole
w = dcgpann.get_weights()
b = dcgpann.get_biases()
# Or only for a specific node/input
w5_1 = dcgpann.get_weight(node_id = 5, input_id = 1)
# The resulting expression can, as usual be computed both on numerical values ...
x = [0.1,-0.3]
print("Value in", x, " is", dcgpann(x))
# ... and symbolic names (this can get real messy so we show only 100 characters of the first output)
x = ["x", "y"]
print("Value in", x, " is", dcgpann(x)[0][:150])
Explanation: Instantiating and inspecting a dCGPANN
End of explanation
# We define a single input single output dCGPANN.
dcgpann = dcgpy.expression_ann(inputs=1, outputs=1, rows=10, cols=5,
levels_back=2, arity=[1,10,10,10,10], kernels=nonlinearities())
# By default all weights (and biases) are set to 1 (and 0). We initialize them randomly
dcgpann.randomise_weights(mean = 0., std = 0.1)
w = dcgpann.get_weights()
b = dcgpann.get_biases()
# As we all CGP expressions, we can now mutate producing a slightly different architecture
# Note that mutation only affect the chromosome (i.e. the ANN encoding) not any of the weights nor biases
# We visualize the starting dCGPANN
f, ax = plt.subplots(1,4, figsize=(15,3))
ax = plt.subplot(1,4,1)
ax = dcgpann.visualize(show_nonlinearities=True, legend = False, axes = ax, active_connection_alpha=0.1)
_ = ax.set_title("Original")
# mutate three function genes (these will be easily visualized)
dcgpann.mutate_active_fgene(3)
ax = plt.subplot(1,4,2)
ax = dcgpann.visualize(show_nonlinearities=True, legend = False, axes = ax, active_connection_alpha=0.1)
_ = ax.set_title("Kernels mutation")
# mutate active connections (its difficult to "see" the change, except when some node become inactive or active)
dcgpann.mutate_active_cgene(30)
ax = plt.subplot(1,4,3)
ax = dcgpann.visualize(show_nonlinearities=True, legend = False, axes = ax, active_connection_alpha=0.1)
_ = ax.set_title("Connections mutation")
# mutate the output gene (it's difficult to "see" the change, except when the output attaches to a different node)
dcgpann.mutate_ogene(1)
ax = plt.subplot(1,4,4)
ax = dcgpann.visualize(show_nonlinearities=True, legend = False, axes = ax, active_connection_alpha=0.1)
_ = ax.set_title("Output connection mutation")
Explanation: Mutating a dCGPANN
End of explanation
# We want to train the dCGPANN in a regression task. Let's create the points
sample_size = 100
points = np.linspace(-1.,1.,sample_size)
np.random.shuffle(points)
labels = ((points-0.5)**2 + np.cos(points * 2 * np.pi)) / 3.5
points = points.reshape((sample_size,1))
labels = labels.reshape((sample_size,1))
plt.plot(points,labels, '.')
_ = plt.title("function to be learned")
# Since the output is in [-1, 1] we force the output nonlinearity to be tanh
dcgpann.set_output_f("tanh")
print("Starting error:", dcgpann.loss(points,labels, "MSE"))
print("Net complexity (number of active weights):", dcgpann.n_active_weights())
print("Net complexity (number of unique active weights):", dcgpann.n_active_weights(unique=True))
print("Net complexity (number of active nodes):", len(dcgpann.get_active_nodes()))
# This will store the learning history
n_epochs = 50000
res = [0] * n_epochs
dcgpann.set_weights(w)
dcgpann.set_biases(b)
# Let's go
for i in tqdm(range(n_epochs)):
res[i] = dcgpann.sgd(points = points, labels = labels, lr = 0.1, batch_size = 32, loss = "MSE", parallel = 4, shuffle = True)
print("End MSE: ", dcgpann.loss(points,labels, "MSE"))
f, ax = plt.subplots(1,2, figsize=(15,3))
# We plot the learned function against the target
plt.subplot(1,2,1)
_ = plt.plot(points,labels, '.')
_ = plt.plot(points, [dcgpann(p) for p in points],'.')
plt.subplot(1,2,2)
# We plot the mse during learning
_ = plt.semilogy(res)
Explanation: Training a dCGPANN
End of explanation |
215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_stats_cluster_source_1samp
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Step5: Compute statistic
Step6: Visualize the clusters | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: .. _tut_stats_cluster_source_1samp:
Permutation t-test on source data with spatio-temporal clustering
Tests if the evoked response is significantly different between
conditions across subjects (simulated here using one subject's data).
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
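For intuition, the permutation step amounts to flipping the sign of each subject's paired contrast and recomputing the statistic; a minimal toy sketch on simulated values (unrelated to the MEG data below, and without any clustering):
import numpy as np
rng = np.random.RandomState(0)
contrast = rng.randn(7) + 0.8  # one paired difference per "subject"
def t_stat(x):
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
t_obs = t_stat(contrast)
t_perm = [t_stat(contrast * (np.array(signs) * 2 - 1))
          for signs in np.ndindex(*([2] * len(contrast)))]  # all 2**7 sign flips
print(np.mean(np.abs(t_perm) >= abs(t_obs)))  # two-sided permutation p-value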
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50)
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50)
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
Explanation: Transform to source space
End of explanation
# Normally you would read in estimates across several subjects and morph
# them to the same cortical space (e.g. fsaverage). For example purposes,
# we will simulate this by just having each "subject" have the same
# response (just noisy in source space) here. Note that for 7 subjects
# with a two-sided statistical test, the minimum significance under a
# permutation test is only p = 1/(2 ** 6) = 0.015, which is large.
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
# It's a good idea to spatially smooth the data, and for visualization
# purposes, let's morph these to fsaverage, which is a grade 5 source space
# with vertices 0:10242 for each hemisphere. Usually you'd have to morph
# each subject's data separately (and you might want to use morph_data
# instead), but here since all estimates are on 'sample' we can use one
# morph matrix for all the heavy lifting.
fsave_vertices = [np.arange(10242), np.arange(10242)]
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
# Finally, we want to compare the overall activity levels in each condition,
# the diff is taken along the last axis (condition). The negative sign makes
# it so condition1 > condition2 shows up as "red blobs" (instead of blue).
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
Explanation: Transform to common cortical space
End of explanation
# To use an algorithm optimized for spatio-temporal clustering, we
# just pass the spatial connectivity matrix (instead of spatio-temporal)
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=2,
threshold=t_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute statistic
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(hemi='both', subjects_dir=subjects_dir,
time_label='Duration significant (ms)')
brain.set_data_time_index(0)
brain.show_view('lateral')
brain.save_image('clusters.png')
Explanation: Visualize the clusters
End of explanation |
216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Visualizations
Visualizations play an important role in the field of machine learning and data science. Often we need to distill key information found in large quantities of data down into meaningful and digestible forms. Good visualizations can tell a story about your data in a way that prose cannot.
In this lab we will explore some common visualization techniques. We will utilize toolkits such as Matplotlib's Pyplot and Seaborn to create informative images that provide information and insights about our data.
Pie Charts
A pie chart is used to show how much of each type of data in a dataset contributes to the whole. It is a circular chart where each class of data represents a portion of the whole.
Let's create a pie chart using a sample dataset.
The flavors variable contains a tuple of ice cream flavors that we have available.
The votes variable contains a tuple of vote counts. They represent the number of votes each flavor got when we asked a group of people what their favorite flavor of ice cream is.
We can create the chart using Matplotlib's Pyplot library. We pass plt.pie() the number of votes for each flavor and then the labels that we want to use.
Step2: Given this chart we can easily see that chocolate is the most popular flavor, with vanilla not falling far behind. Admittedly we could also tell this by looking at the raw data. However, the data in the pie chart format also allows us to easily see other information, such as the fact that chocolate and vanilla combined represent more than half of the votes.
What we don't see are actual percentages.
If we want to see what percentage each flavor contributes, we can use the autopct argument. For the argument value we provide a format string that can be used to set the precision of the number that is shown.
Try changing the value to %1.0f%% and %1.2f%%. What happens?
Step3: Now we can see the percent that each ice cream flavor contributes to the whole.
One thing that's still a little confusing about this chart is the choice of color. We have an idea of what color each of these ice cream flavors has in real life, but what is shown on the chart doesn't match up with those real-world colors.
We can fix this!
Matplotlib's pie chart allows you to change the colors shown on the chart by passing in an iterable of color values. You can use one of a small number of pre-programmed values such as 'b' for blue and 'g' for green.
In our case we pass in HTML color codes. These are six-character values where the first two characters represent the amount of red in the color, the next two the amount of green, and the final two the amount of blue. You can find many tables and pickers for these by searching for 'html color codes'.
Below we picked custom colors for each of the flavors.
Step4: Excellent! Now the colors have a closer relationship to the data that they represent. We won't always have this tight of a relationship, but you might find yourself in situations where you need to use a fixed color palette for some other reason, such as corporate branding. colors is great for that.
Now let's imagine we're preparing this chart for a presentation, and we want to call out one of the flavors in particular. Maybe mango is new to market, and we want to call out how much popularity it has already captured.
To do this we can use the explode argument. This allows us to set an offset for each slice of the pie from the center. In the example below we pushed mango out by 0.1 while keeping all of the rest of the pieces tied to the center.
Step5: We now have mango pulled out a bit from the pie, so that we can highlight its impact.
Notice that we could set offsets for every piece of the chart, and those offsets can be arbitrary numbers. Play around a bit with different and multiple offsets. Do negative numbers work?
Our pie chart looks pretty nice now, but it is very flat. We can give it a bit of a three-dimensional look by adding a shadow with the shadow argument.
Step6: To wrap it up, we can add a title using plt.title(). Notice that this is not an argument to plt.pie(), but is instead a separate method call on plt.
Step7: We now have a nice pie chart that shows all of the favorite ice cream flavors in our poll!
Remember pie charts are good for showing how distinct classes of data (in this case, ice cream flavors) contribute to the whole.
They also work best when there are only a few classes represented. Imagine if we had 100 flavors of ice cream. The less popular flavors would all be impossible to view meaningfully.
Bar Charts
Bar charts are another powerful tool for comparing categorical data. Similar to pie charts, they can be used to compare categories of data against each other.
However, pie charts are also good for seeing how one category of data compares against the whole. Bar charts aren't very good for this.
Also, bar charts can meaningfully display more categories of data than pie charts.
Let's start by taking a look at a bar chart showing the populations of each country in South America.
To do this we will use Matplotlib again. This time we will use the bar() method.
bar() has two required arguments. The first argument contains the x-coordinates of the data. Since we want to plot country names on the x-axis, there aren't any natural x-coordinates.
In cases like this we can use NumPy's arange() function to create a list of evenly spaced numbers. We ask for numbers between 0 and the length of the data, which should give us a list of whole numbers starting at 0 and ending at len(data)-1, which is 13 in the example case.
The next argument is the numeric data to plot. In this example we plot the population data.
Step8: You can see in the chart above that the x-labels aren't meaningful. We can fix this by passing a tick_label argument to bar(). Since we have relatively wide labels, it is also useful to rotate the labels by 90-degrees so that they are more readable. We do this using the plt.xticks(rotation=90) method call.
Step9: We can add labels to bar charts to help make the charts more readable. In the example below we add a y-label using the ylabel() method and a chart title using the title() method.
Step10: The chart is looking pretty good.
But what if you were asked the question
Step11: Now we can easily see that Colombia is the second largest country.
If we wanted to call that out, we could pass a list of bar colors to the bar() method.
Step12: We can also make the chart larger using the figure() method. We pass the figsize= argument which represents the width and height of the figure in inches.
Step13: Line Graphs
Line graphs are another useful visualization. While pie charts and bar charts are useful in showing how classes of data relate to each other, line graphs are more useful for showing how data progresses over some period. For example, line graphs can be useful in charting temperature over time, stock prices over time, weight by day, or any other continuous metric.
We'll create a very simple line graph below. The data we have is the temperature in celsius and the hour of the day for a single day and location.
You can see that to create the line graph we use the plt.plot() method.
Step14: We can see that the temperature starts at around 2 degrees celsius at midnight, has a little drop to freezing around 05
Step15: We can also add markers at each of the data points. In the example below we add a dot marker at each data point using the marker='o' argument.
Step16: We can even have multiple lines on the same chart. Say, for instance, that we wanted to illustrate actual and predicted temperature values. We can just call plot() twice, once with each set of values.
Notice that in the second call, we use another argument to plot(), linestyle='--'. This causes the predicted line to look like a dashed-line while the actual values stay solid.
You can find all of the many line formatting options at the Matplotlib pyplot.plot() documentation.
Step17: Scatter Plots
Scatter plots work great for data with two numeric components. They provide a great way to get a quick look at your data to see if you notice any patterns or outliers.
In the example below, we have data related to gross domestic product (GDP) and population for countries with a population of more than one hundred million. GDP is the total value of goods and services created/provided by a country over the course of a year.
We then use plt.scatter() to create a scatter plot of population and GDP.
Step18: The scatter plot is interesting because we can gather some insights about our data. We can see that there are two population outliers and one (arguably two) GDP outliers.
This information can help us decide if we need to correct for or exclude the outliers in our analysis.
We can also add more than one set of data to a scatter plot. In the example below, we plot the diameters and weights of a batch of lemons and limes to see if we can determine a pattern.
Step19: Looking at our sample, there isn't a very clear pattern. However, one of the citruses does seem to be a little heavier per centimeter of diameter.
But which one?
It is really difficult to tell. Let's clean this chart up a bit.
First we'll add a title using plt.title(), an x-label using plt.xlabel(), and a y-label using plt.ylabel().
Step20: Now we can add some color and a legend to make our scatter plot a little more intuitive.
We add color by passing the color= argument to plt.scatter(). In this case we just set the lemon points to be yellow using color='y' and the lime points to be green using color='g'.
To add the legend we call plt.legend() and pass it a list containing a label for each scatter of data.
Step21: Now we can see more clearly that our limes tend to be a little heavier per centimeter of diameter than our lemons.
Heatmaps
Heatmaps are a type of visualization that uses color coding to represent the relative value/density of data across a surface. Often this is a tabular chart, but it doesn't have to be limited to that.
For tabular data, there are labels on the x and y axes. The values at the intersection of those labels map to a color.
These colors can then be used to visually inspect the data to find clusters of similar values and detect trends in the data.
Let's start with a sample dataset that will literally map heat. We will be working with data about the average high temperatures each month for the 12 largest cities in the world.
To create this heatmap we will use a new library, Seaborn. Seaborn is a visualization library that is built on top of Matplotlib. It provides a higher-level interface and can create more attractive charts with less effort. Any of the visualizations that we have seen in this lab so far could have also been created in Seaborn.
You'll see both Matplotlib and seaborn in use in real data analytics projects, so we want to introduce you to both of them in this lab.
Anyway, let's build a heatmap!
In the code below, we first import seaborn. We then create lists containing the names of the 12 largest cities in the world and the 12 months in the year.
Next we assign a list-of-lists to the temperatures variable. Each row in the list represents a city. Each column is a month. The values are the average high temperatures for the city for the month.
Finally we call sns.heatmap() to create the heatmap. We pass in the temperature data, the city names as y-labels, and the month abbreviations as x-labels.
Step22: We can see the data in the resultant chart. But how do we interpret it?
It is actually fairly difficult to make any sense of the data. The left and right of the graph might contain somewhat darker colors, which map to cooler temperatures, but even that is difficult to determine.
If you think about it, this makes sense. The cities are sorted by size, largest to smallest. Let's change the sorting to be latitude.
Step23: This makes much more sense. We can see that the cities at higher latitudes are colder from September through March and that the temperature tends to rise as the latitude gets smaller.
Also notice that Sao Paulo still stays relatively warm mid-year, even though mid-year is winter in the southern hemisphere.
Admittedly, the color scheme is difficult to read. It is possible to change the color scheme using the cmap= argument. cmap= accepts lists of colors and preset color schemes. You can find the schemes in the Matplotlib colormap documentation.
Step24: There are many more options available. Check out the heatmap documentation for more.
Exercises
Step25: Explanation
Which chart did you choose and why?
Your solution goes here.
Which year seemed to be better for Bitcoin holders?
Your solution goes here.
Exercise 2
Step26: Explanation
Which chart did you choose and why?
Your solution goes here.
What are the percentage odds that you'll choose a Snickers bar when randomly pulling a candy out of the bag?
Your solution goes here.
Exercise 3
Step27: Explanation
Which chart did you choose and why?
Your solution goes here.
Which three desserts should be removed from the menu?
Your solution goes here.
Exercise 4
Step28: Explanation
Which chart did you choose and why?
Your solution goes here.
When does the worker typically take lunch?
Your solution goes here.
Did the worker do work on the weekend?
Your solution goes here.
On which weekday did the worker start working on their computer at the latest hour?
Your solution goes here.
Exercise 5 | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/02_data/03_visualizations/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
import matplotlib.pyplot as plt
flavors = ('Chocolate', 'Vanilla', 'Pistachio', 'Mango', 'Strawberry')
votes = (12, 11, 4, 8, 7)
plt.pie(
votes,
labels=flavors,
)
plt.show()
Explanation: Visualizations
Visualizations play an important role in the field of machine learning and data science. Often we need to distill key information found in large quantities of data down into meaningful and digestible forms. Good visualizations can tell a story about your data in a way that prose cannot.
In this lab we will explore some common visualization techniques. We will utilize toolkits such as Matplotlib's Pyplot and Seaborn to create informative images that provide information and insights about our data.
Pie Charts
A pie chart is used to show how much of each type of data in a dataset contributes to the whole. It is a circular chart where each class of data represents a portion of the whole.
Let's create a pie chart using a sample dataset.
The flavors variable contains a tuple of ice cream flavors that we have available.
The votes variable contains a tuple of vote counts. They represent the number of votes each flavor got when we asked a group of people what their favorite flavor of ice cream is.
We can create the chart using Matplotlib's Pyplot library. We pass plt.pie() the number of votes for each flavor and then the labels that we want to use.
End of explanation
import matplotlib.pyplot as plt
flavors = ('Chocolate', 'Vanilla', 'Pistachio', 'Mango', 'Strawberry')
votes = (12, 11, 4, 8, 7)
plt.pie(
votes,
labels=flavors,
autopct='%1.1f%%',
)
plt.show()
Explanation: Given this chart we can easily see that chocolate is the most popular flavor, with vanilla not falling far behind. Admittedly we could also tell this by looking at the raw data. However, the data in the pie chart format also allows us to easily see other information, such as the fact that chocolate and vanilla combined represent more than half of the votes.
What we don't see are actual percentages.
If we want to see what percentage each flavor contributes, we can use the autopct argument. For the argument value we provide a format string that can be used to set the precision of the number that is shown.
Try changing the value to %1.0f%% and %1.2f%%. What happens?
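For a feel of what the format string does, here it is applied directly to an arbitrary number:
print('%1.1f%%' % 28.571)   # -> 28.6%
print('%1.2f%%' % 28.571)   # -> 28.57%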
End of explanation
import matplotlib.pyplot as plt
flavors = ('Chocolate', 'Vanilla', 'Pistachio', 'Mango', 'Strawberry')
votes = (12, 11, 4, 8, 7)
colors = ('#8B4513', '#FFF8DC', '#93C572', '#E67F0D', '#D53032')
plt.pie(
votes,
labels=flavors,
autopct='%1.1f%%',
colors=colors,
)
plt.show()
Explanation: Now we can see the percent that each ice cream flavor contributes to the whole.
One thing that's still a little confusing about this chart is the choice of color. We have an idea of what color each of these ice cream flavors has in real life, but what is shown on the chart doesn't match up with those real-world colors.
We can fix this!
Matplotlib's pie chart allows you to change the colors shown on the chart by passing in an iterable of color values. You can use one of a small number of pre-programmed values such as 'b' for blue and 'g' for green.
In our case we pass in HTML color codes. These are six-character values where the first two characters represent the amount of red in the color, the next two the amount of green, and the final two the amount of blue. You can find many tables and pickers for these by searching for 'html color codes'.
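For example, the mango color used below, '#E67F0D', breaks down into its red, green, and blue components like this:
print(int('E6', 16), int('7F', 16), int('0D', 16))   # -> 230 127 13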
Below we picked custom colors for each of the flavors.
End of explanation
import matplotlib.pyplot as plt
flavors = ('Chocolate', 'Vanilla', 'Pistachio', 'Mango', 'Strawberry')
votes = (12, 11, 4, 8, 7)
colors = ('#8B4513', '#FFF8DC', '#93C572', '#E67F0D', '#D53032')
explode = (0, 0, 0, 0.1, 0)
plt.pie(
votes,
labels=flavors,
autopct='%1.1f%%',
colors=colors,
explode=explode,
)
plt.show()
Explanation: Excellent! Now the colors have a closer relationship to the data that they represent. We won't always have this tight of a relationship, but you might find yourself in situations where you need to use a fixed color palette for some other reason, such as corporate branding. colors is great for that.
Now let's imagine we're preparing this chart for a presentation, and we want to call out one of the flavors in particular. Maybe mango is new to market, and we want to call out how much popularity it has already captured.
To do this we can use the explode argument. This allows us to set an offset for each slice of the pie from the center. In the example below we pushed mango out by 0.1 while keeping all of the rest of the pieces tied to the center.
End of explanation
import matplotlib.pyplot as plt
flavors = ('Chocolate', 'Vanilla', 'Pistachio', 'Mango', 'Strawberry')
votes = (12, 11, 4, 8, 7)
colors = ('#8B4513', '#FFF8DC', '#93C572', '#E67F0D', '#D53032')
explode = (0, 0, 0, 0.1, 0)
plt.pie(
votes,
labels=flavors,
autopct='%1.1f%%',
colors=colors,
explode=explode,
shadow=True
)
plt.show()
Explanation: We now have mango pulled out a bit from the pie, so that we can highlight its impact.
Notice that we could set offsets for every piece of the chart, and those offsets can be arbitrary numbers. Play around a bit with different and multiple offsets. Do negative numbers work?
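As a quick experiment (the offsets here are arbitrary), you can pass several non-zero values at once, and a negative value shifts a slice toward the center instead of away from it; this reuses the votes, flavors, and colors tuples from the cell above:
explode_test = (0.05, -0.05, 0.2, 0.1, 0)
plt.pie(votes, labels=flavors, colors=colors, explode=explode_test)
plt.show()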
Our pie chart looks pretty nice now, but it is very flat. We can give it a bit of a three-dimensional look by adding a shadow with the shadow argument.
End of explanation
import matplotlib.pyplot as plt
flavors = ('Chocolate', 'Vanilla', 'Pistachio', 'Mango', 'Strawberry')
votes = (12, 11, 4, 8, 7)
colors = ('#8B4513', '#FFF8DC', '#93C572', '#E67F0D', '#D53032')
explode = (0, 0, 0, 0.1, 0)
plt.title('Favorite Ice Cream Flavors')
plt.pie(
votes,
labels=flavors,
autopct='%1.1f%%',
colors=colors,
explode=explode,
shadow=True
)
plt.show()
Explanation: To wrap it up, we can add a title using plt.title(). Notice that this is not an argument to plt.pie(), but is instead a separate method call on plt.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
countries = ('Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador',
'Falkland Islands', 'French Guiana', 'Guyana', 'Paraguay', 'Peru',
'Suriname', 'Uruguay', 'Venezuela')
populations = (45076704, 11626410, 212162757, 19109629, 50819826, 17579085,
3481, 287750, 785409, 7107305, 32880332, 585169, 3470475,
28258770)
x_coords = np.arange(len(countries))
plt.bar(x_coords, populations)
plt.show()
Explanation: We now have a nice pie chart that shows all of the favorite ice cream flavors in our poll!
Remember pie charts are good for showing how distinct classes of data (in this case, ice cream flavors) contribute to the whole.
They also work best when there are only a few classes represented. Imagine if we had 100 flavors of ice cream. The less popular flavors would all be impossible to view meaningfully.
Bar Charts
Bar charts are another powerful tool for comparing categorical data. Similar to pie charts, they can be used to compare categories of data against each other.
However, pie charts are also good for seeing how one category of data compares against the whole. Bar charts aren't very good for this.
Also, bar charts can meaningfully display more categories of data than pie charts.
Let's start by taking a look at a bar chart showing the populations of each country in South America.
To do this we will use Matplotlib again. This time we will use the bar() method.
bar() has two required arguments. The first argument contains the x-coordinates of the data. Since we want to plot country names on the x-axis, there aren't any natural x-coordinates.
In cases like this we can use NumPy's arange() function to create a list of evenly spaced numbers. We ask for numbers between 0 and the length of the data, which should give us a list of whole numbers starting at 0 and ending at len(data)-1, which is 13 in the example case.
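For instance, a quick check of what arange() returns:
import numpy as np
print(np.arange(5))    # -> [0 1 2 3 4]
print(np.arange(14))   # 0 through 13, one x-coordinate per country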
The next argument is the numeric data to plot. In this example we plot the population data.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
countries = ('Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador',
'Falkland Islands', 'French Guiana', 'Guyana', 'Paraguay', 'Peru',
'Suriname', 'Uruguay', 'Venezuela')
populations = (45076704, 11626410, 212162757, 19109629, 50819826, 17579085,
3481, 287750, 785409, 7107305, 32880332, 585169, 3470475,
28258770)
x_coords = np.arange(len(countries))
plt.bar(x_coords, populations, tick_label=countries)
plt.xticks(rotation=90) #rotates text for x-axis labels
plt.show()
Explanation: You can see in the chart above that the x-labels aren't meaningful. We can fix this by passing a tick_label argument to bar(). Since we have relatively wide labels, it is also useful to rotate the labels by 90-degrees so that they are more readable. We do this using the plt.xticks(rotation=90) method call.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
countries = ('Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador',
'Falkland Islands', 'French Guiana', 'Guyana', 'Paraguay', 'Peru',
'Suriname', 'Uruguay', 'Venezuela')
populations = (45076704, 11626410, 212162757, 19109629, 50819826, 17579085,
3481, 287750, 785409, 7107305, 32880332, 585169, 3470475,
28258770)
x_coords = np.arange(len(countries))
plt.bar(x_coords, populations, tick_label=countries)
plt.xticks(rotation=90)
plt.ylabel('Population (Millions)')
plt.title('South American Populations')
plt.show()
Explanation: We can add labels to bar charts to help make the charts more readable. In the example below we add a y-label using the ylabel() method and a chart title using the title() method.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
countries = ('Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador',
'Falkland Islands', 'French Guiana', 'Guyana', 'Paraguay', 'Peru',
'Suriname', 'Uruguay', 'Venezuela')
populations = (45076704, 11626410, 212162757, 19109629, 50819826, 17579085,
3481, 287750, 785409, 7107305, 32880332, 585169, 3470475,
28258770)
df = pd.DataFrame({
'Country': countries,
'Population': populations,
})
df.sort_values(by='Population', inplace=True)
x_coords = np.arange(len(df))
plt.bar(x_coords, df['Population'], tick_label=df['Country'])
plt.xticks(rotation=90)
plt.ylabel('Population (Millions)')
plt.title('South American Populations')
plt.show()
Explanation: The chart is looking pretty good.
But what if you were asked the question: What is the second most populous country in South America?
You would likely have to stare a bit at Argentina and Colombia.
This is because the data is sorted alphabetically, which isn't the most helpful sorting for answering questions about the data. Unfortunately Matplotlib doesn't have built in sorting. Instead, you can import Pandas and use it to sort the data.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
countries = ('Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador',
'Falkland Islands', 'French Guiana', 'Guyana', 'Paraguay', 'Peru',
'Suriname', 'Uruguay', 'Venezuela')
populations = (45076704, 11626410, 212162757, 19109629, 50819826, 17579085,
3481, 287750, 785409, 7107305, 32880332, 585169, 3470475,
28258770)
df = pd.DataFrame({
'Country': countries,
'Population': populations,
})
df.sort_values(by='Population', inplace=True)
x_coords = np.arange(len(df))
colors = ['#0000FF' for _ in range(len(df))]
colors[-2] = '#FF0000'
plt.bar(x_coords, df['Population'], tick_label=df['Country'], color=colors)
plt.xticks(rotation=90)
plt.ylabel('Population (Millions)')
plt.title('South American Populations')
plt.show()
Explanation: Now we can easily see that Colombia is the second largest country.
If we wanted to call that out, we could pass a list of bar colors to the bar() method.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
countries = ('Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador',
'Falkland Islands', 'French Guiana', 'Guyana', 'Paraguay', 'Peru',
'Suriname', 'Uruguay', 'Venezuela')
populations = (45076704, 11626410, 212162757, 19109629, 50819826, 17579085,
3481, 287750, 785409, 7107305, 32880332, 585169, 3470475,
28258770)
df = pd.DataFrame({
'Country': countries,
'Population': populations,
})
df.sort_values(by='Population', inplace=True)
x_coords = np.arange(len(df))
colors = ['#0000FF' for _ in range(len(df))]
colors[-2] = '#FF0000'
plt.figure(figsize=(20,10))
plt.bar(x_coords, df['Population'], tick_label=df['Country'], color=colors)
plt.xticks(rotation=90)
plt.ylabel('Population (Millions)')
plt.title('South American Populations')
plt.show()
Explanation: We can also make the chart larger using the figure() method. We pass the figsize= argument which represents the width and height of the figure in inches.
End of explanation
import matplotlib.pyplot as plt
temperature_c = [2, 1, 0, 0, 1, 5, 8, 9, 8, 5, 3, 2, 2]
hour = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
plt.plot(
hour,
temperature_c
)
plt.show()
Explanation: Line Graphs
Line graphs are another useful visualization. While pie charts and bar charts are useful in showing how classes of data relate to each other, line graphs are more useful for showing how data progresses over some period. For example, line graphs can be useful in charting temperature over time, stock prices over time, weight by day, or any other continuous metric.
We'll create a very simple line graph below. The data we have is the temperature in celsius and the hour of the day for a single day and location.
You can see that to create the line graph we use the plt.plot() method.
End of explanation
import matplotlib.pyplot as plt
temperature_c = [2, 1, 0, 0, 1, 5, 8, 9, 8, 5, 3, 2, 2]
hour = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
plt.plot(
hour,
temperature_c,
)
plt.title('Temperatures in Kirkland, WA, USA on 2 Feb 2020')
plt.ylabel('Temperature Celsius')
plt.xlabel('Hour')
plt.show()
Explanation: We can see that the temperature starts at around 2 degrees celsius at midnight, has a little drop to freezing around 05:00, gets up to around 9 degrees celsius at 15:00, and then drops back down to about 2 degrees at midnight.
We can, of course, add the standard chart elements of title(), ylabel(), and xlabel().
End of explanation
import matplotlib.pyplot as plt
temperature_c = [2, 1, 0, 0, 1, 5, 8, 9, 8, 5, 3, 2, 2]
hour = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
plt.plot(
hour,
temperature_c,
marker='o',
)
plt.title('Temperatures in Kirkland, WA, USA on 2 Feb 2020')
plt.ylabel('Temperature Celsius')
plt.xlabel('Hour')
plt.show()
Explanation: We can also add markers at each of the data points. In the example below we add a dot marker at each data point using the marker='o' argument.
End of explanation
import matplotlib.pyplot as plt
temperature_c_actual = [2, 1, 0, 0, 1, 5, 8, 9, 8, 5, 3, 2, 2]
temperature_c_predicted = [2, 2, 1, 0, 1, 3, 7, 8, 8, 6, 4, 3, 3]
hour = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
plt.plot(hour, temperature_c_actual)
plt.plot(hour, temperature_c_predicted, linestyle='--')
plt.title('Temperatures in Kirkland, WA, USA on 2 Feb 2020')
plt.ylabel('Temperature Celsius')
plt.xlabel('Hour')
plt.show()
Explanation: We can even have multiple lines on the same chart. Say, for instance, that we wanted to illustrate actual and predicted temperature values. We can just call plot() twice, once with each set of values.
Notice that in the second call, we use another argument to plot(), linestyle='--'. This causes the predicted line to look like a dashed-line while the actual values stay solid.
You can find all of the many line formatting options at the Matplotlib pyplot.plot() documentation.
End of explanation
import matplotlib.pyplot as plt
country = ['Bangladesh', 'Brazil', 'China', 'India', 'Indonesia', 'Japan',
'Mexico', 'Nigeria', 'Pakistan', 'Russia', 'United States']
gdp = [2421, 13418, 9475, 4353, 7378, 35477, 14276, 5087, 4133, 20255, 49267]
population = [148692131, 194946470, 1341335152, 1224614327, 239870937,
126535920, 113423047, 158423182, 173593383, 142958164, 310383948]
plt.scatter(population, gdp)
plt.show()
Explanation: Scatter Plots
Scatter plots work great for data with two numeric components. They provide a great way to get a quick look at your data to see if you notice any patterns or outliers.
In the example below, we have data related to gross domestic product (GDP) and population for countries with a population of more than one hundred million. GDP is the total value of goods and services created/provided by a country over the course of a year.
We then use plt.scatter() to create a scatter plot of population and GDP.
End of explanation
import matplotlib.pyplot as plt
lemon_diameter = [6.44, 6.87, 7.7, 8.85, 8.15, 9.96, 7.21, 10.04, 10.2, 11.06]
lemon_weight = [112.05, 114.58, 116.71, 117.4, 128.93,
132.93, 138.92, 145.98, 148.44, 152.81]
lime_diameter = [6.15, 7.0, 7.0, 7.69, 7.95, 7.51, 10.46, 8.72, 9.53, 10.09]
lime_weight = [112.76, 125.16, 131.36, 132.41, 138.08,
142.55, 156.86, 158.67, 163.28, 166.74]
plt.scatter(lemon_diameter, lemon_weight)
plt.scatter(lime_diameter, lime_weight)
plt.show()
Explanation: The scatter plot is interesting because we can gather some insights about our data. We can see that there are two population outliers and one (arguably two) GDP outliers.
This information can help us decide if we need to correct for or exclude the outliers in our analysis.
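One simple way (of many, and the 1.5 threshold here is arbitrary) to flag those outliers numerically is a z-score check on the same lists used above:
import numpy as np
pop = np.array(population, dtype=float)
z_scores = (pop - pop.mean()) / pop.std()
print([c for c, z in zip(country, z_scores) if abs(z) > 1.5])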
We can also add more than one set of data to a scatter plot. In the example below, we plot the diameters and weights of a batch of lemons and limes to see if we can determine a pattern.
End of explanation
import matplotlib.pyplot as plt
lemon_diameter = [6.44, 6.87, 7.7, 8.85, 8.15, 9.96, 7.21, 10.04, 10.2, 11.06]
lemon_weight = [112.05, 114.58, 116.71, 117.4, 128.93,
132.93, 138.92, 145.98, 148.44, 152.81]
lime_diameter = [6.15, 7.0, 7.0, 7.69, 7.95, 7.51, 10.46, 8.72, 9.53, 10.09]
lime_weight = [112.76, 125.16, 131.36, 132.41, 138.08,
142.55, 156.86, 158.67, 163.28, 166.74]
plt.title('Lemons vs. Limes')
plt.xlabel('Diameter (cm)')
plt.ylabel('Weight (g)')
plt.scatter(lemon_diameter, lemon_weight)
plt.scatter(lime_diameter, lime_weight)
plt.show()
Explanation: Looking at our sample, there isn't a very clear pattern. However, one of the citruses does seem to be a little heavier per centimeter of diameter.
But which one?
It is really difficult to tell. Let's clean this chart up a bit.
First we'll add a title using plt.title(), an x-label using plt.xlabel(), and a y-label using plt.ylabel().
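If you also want a quick numeric cross-check to go with the visual one, you can compare the mean weight-per-centimeter of diameter for each fruit (a rough ratio, not a fitted model), reusing the lists from the cell above:
import numpy as np
print(np.mean(np.array(lemon_weight) / np.array(lemon_diameter)))
print(np.mean(np.array(lime_weight) / np.array(lime_diameter)))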
End of explanation
import matplotlib.pyplot as plt
lemon_diameter = [6.44, 6.87, 7.7, 8.85, 8.15, 9.96, 7.21, 10.04, 10.2, 11.06]
lemon_weight = [112.05, 114.58, 116.71, 117.4, 128.93,
132.93, 138.92, 145.98, 148.44, 152.81]
lime_diameter = [6.15, 7.0, 7.0, 7.69, 7.95, 7.51, 10.46, 8.72, 9.53, 10.09]
lime_weight = [112.76, 125.16, 131.36, 132.41, 138.08,
142.55, 156.86, 158.67, 163.28, 166.74]
plt.title('Lemons vs. Limes')
plt.xlabel('Diameter (cm)')
plt.ylabel('Weight (g)')
plt.scatter(lemon_diameter, lemon_weight, color='y')
plt.scatter(lime_diameter, lime_weight, color='g')
plt.legend(['lemons', 'limes'])
plt.show()
Explanation: Now we can add some color and a legend to make our scatter plot a little more intuitive.
We add color by passing the color= argument to plt.scatter(). In this case we just set the lemon points to be yellow using color='y' and the lime points to be green using color='g'.
To add the legend we call plt.legend() and pass it a list containing a label for each scatter of data.
End of explanation
import seaborn as sns
cities = ['Tokyo', 'Delhi', 'Shanghai', 'Sao Paulo', 'Mumbai', 'Mexico City',
'Beijing', 'Osaka', 'Cairo', 'New York', 'Dhaka', 'Karachi']
months = ['J', 'F', 'M', 'A', 'M', 'J', 'J', 'A', 'S', 'O', 'N', 'D']
temperatures = [
[10, 10, 14, 19, 23, 26, 30, 31, 27, 22, 17, 12], # Tokyo
[20, 24, 30, 37, 40, 39, 35, 34, 34, 33, 28, 22], # Delhi
[ 8, 10, 14, 20, 24, 28, 32, 32, 27, 23, 17, 11], # Shanghai
[29, 29, 28, 27, 23, 23, 23, 25, 25, 26, 27, 28], # Sao Paulo
[31, 32, 33, 33, 34, 32, 30, 30, 31, 34, 34, 32], # Mumbai
[22, 24, 26, 27, 27, 26, 24, 25, 24, 24, 23, 23], # Mexico City
[ 2, 5, 12, 21, 27, 30, 31, 30, 26, 19, 10, 4], # Beijing
[ 9, 10, 14, 20, 25, 28, 32, 33, 29, 23, 18, 12], # Osaka
[19, 21, 24, 29, 33, 35, 35, 35, 34, 30, 25, 21], # Cairo
[ 4, 6, 11, 18, 22, 27, 29, 29, 25, 18, 13, 7], # New York
[25, 29, 32, 33, 33, 32, 32, 32, 32, 31, 29, 26], # Dhaka
[26, 28, 32, 35, 36, 35, 33, 32, 33, 35, 32, 28], # Karachi
]
sns.heatmap(temperatures, yticklabels=cities, xticklabels=months)
Explanation: Now we can see more clearly that our limes tend to be a little heavier per centimeter of diameter than our lemons.
Heatmaps
Heatmaps are a type of visualization that uses color coding to represent the relative value/density of data across a surface. Often this is a tabular chart, but it doesn't have to be limited to that.
For tabular data, there are labels on the x and y axes. The values at the intersection of those labels map to a color.
These colors can then be used to visually inspect the data to find clusters of similar values and detect trends in the data.
Let's start with a sample dataset that will literally map heat. We will be working with data about the average high temperatures each month for the 12 largest cities in the world.
To create this heatmap we will use a new library, Seaborn. Seaborn is a visualization library that is built on top of Matplotlib. It provides a higher-level interface and can create more attractive charts with less effort. Any of the visualizations that we have seen in this lab so far could have also been created in Seaborn.
You'll see both Matplotlib and seaborn in use in real data analytics projects, so we want to introduce you to both of them in this lab.
Anyway, let's build a heatmap!
In the code below, we first import seaborn. We then create lists containing the names of the 12 largest cities in the world and the 12 months in the year.
Next we assign a list-of-lists to the temperatures variable. Each row in the list represents a city. Each column is a month. The values are the average high temperatures for the city for the month.
Finally we call sns.heatmap() to create the heatmap. We pass in the temperature data, the city names as y-labels, and the month abbreviations as x-labels.
End of explanation
import seaborn as sns
cities = ['New York', 'Beijing', 'Tokyo', 'Osaka', 'Shanghai', 'Cairo', 'Delhi',
'Karachi', 'Dhaka', 'Mexico City', 'Mumbai', 'Sao Paulo']
temperatures = [
[ 4, 6, 11, 18, 22, 27, 29, 29, 25, 18, 13, 7], # New York
[ 2, 5, 12, 21, 27, 30, 31, 30, 26, 19, 10, 4], # Beijing
[10, 10, 14, 19, 23, 26, 30, 31, 27, 22, 17, 12], # Tokyo
[ 9, 10, 14, 20, 25, 28, 32, 33, 29, 23, 18, 12], # Osaka
[ 8, 10, 14, 20, 24, 28, 32, 32, 27, 23, 17, 11], # Shanghai
[19, 21, 24, 29, 33, 35, 35, 35, 34, 30, 25, 21], # Cairo
[20, 24, 30, 37, 40, 39, 35, 34, 34, 33, 28, 22], # Delhi
[26, 28, 32, 35, 36, 35, 33, 32, 33, 35, 32, 28], # Karachi
[25, 29, 32, 33, 33, 32, 32, 32, 32, 31, 29, 26], # Dhaka
[22, 24, 26, 27, 27, 26, 24, 25, 24, 24, 23, 23], # Mexico City
[31, 32, 33, 33, 34, 32, 30, 30, 31, 34, 34, 32], # Mumbai
[29, 29, 28, 27, 23, 23, 23, 25, 25, 26, 27, 28], # Sao Paulo
]
sns.heatmap(temperatures, yticklabels=cities, xticklabels=months)
Explanation: We can see the data in the resultant chart. But how do we interpret it?
It is actually fairly difficult to make any sense of the data. The left and right of the graph might contain somewhat darker colors, which map to cooler temperatures, but even that is difficult to determine.
If you think about it, this makes sense. The cities are sorted by size, largest to smallest. Let's change the sorting to be latitude.
End of explanation
import seaborn as sns
cities = ['New York', 'Beijing', 'Tokyo', 'Osaka', 'Shanghai', 'Cairo', 'Delhi',
'Karachi', 'Dhaka', 'Mexico City', 'Mumbai', 'Sao Paulo']
temperatures = [
[ 4, 6, 11, 18, 22, 27, 29, 29, 25, 18, 13, 7], # New York
[ 2, 5, 12, 21, 27, 30, 31, 30, 26, 19, 10, 4], # Beijing
[10, 10, 14, 19, 23, 26, 30, 31, 27, 22, 17, 12], # Tokyo
[ 9, 10, 14, 20, 25, 28, 32, 33, 29, 23, 18, 12], # Osaka
[ 8, 10, 14, 20, 24, 28, 32, 32, 27, 23, 17, 11], # Shanghai
[19, 21, 24, 29, 33, 35, 35, 35, 34, 30, 25, 21], # Cairo
[20, 24, 30, 37, 40, 39, 35, 34, 34, 33, 28, 22], # Delhi
[26, 28, 32, 35, 36, 35, 33, 32, 33, 35, 32, 28], # Karachi
[25, 29, 32, 33, 33, 32, 32, 32, 32, 31, 29, 26], # Dhaka
[22, 24, 26, 27, 27, 26, 24, 25, 24, 24, 23, 23], # Mexico City
[31, 32, 33, 33, 34, 32, 30, 30, 31, 34, 34, 32], # Mumbai
[29, 29, 28, 27, 23, 23, 23, 25, 25, 26, 27, 28], # Sao Paulo
]
sns.heatmap(
temperatures,
yticklabels=cities,
xticklabels=months,
cmap='coolwarm',
)
Explanation: This makes much more sense. We can see that the cities at higher latitudes are colder from September through March and that the temperature tends to rise as the latitude gets smaller.
Also notice that Sao Paulo still stays relatively warm mid-year, even though mid-year is winter in the southern hemisphere.
Admittedly, the color scheme is difficult to read. It is possible to change the color scheme using the cmap= argument. cmap= accepts lists of colors and preset color schemes. You can find the schemes in the Matplotlib colormap documentation.
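Since cmap= also accepts an explicit list of colors (the three hex values here are arbitrary), the same call could instead look like this:
sns.heatmap(temperatures, yticklabels=cities, xticklabels=months,
            cmap=['#2c7bb6', '#ffffbf', '#d7191c'])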
End of explanation
prices = [14292.2, 12858.9, 11467.5, 9241.1, 8559.6, 11073.5, 9704.3, 11402.3,
8762.0, 7874.9, 8547.4, 6938.2, 6905.7, 8004.4, 8923.1, 9352.4,
9853.5, 8459.5, 8245.1, 7361.3, 7646.6, 7515.8, 6505.8, 6167.3,
6398.9, 6765.5, 6254.8, 7408.7, 8234.1, 7014.3, 6231.6, 6379.1,
6734.8, 7189.6, 6184.3, 6519.0, 6729.6, 6603.9, 6596.3, 6321.7,
6572.2, 6494.2, 6386.2, 6427.1, 5621.8, 3920.4, 4196.2, 3430.4,
3228.7, 3964.4, 3706.8, 3785.4, 3597.2, 3677.8, 3570.9, 3502.5,
3661.4, 3616.8, 4120.4, 3823.1, 3944.3, 4006.4, 4002.5, 4111.8,
5046.2, 5051.8, 5290.2, 5265.9, 5830.9, 7190.3, 7262.6, 8027.4,
8545.7, 7901.4, 8812.5, 10721.7, 11906.5, 11268.0, 11364.9, 10826.7,
9492.1, 10815.7, 11314.5, 10218.1, 10131.0, 9594.4, 10461.1, 10337.3,
9993.0, 8208.5, 8127.3, 8304.4, 7957.3, 9230.6, 9300.6, 8804.5,
8497.3, 7324.1, 7546.6, 7510.9, 7080.8, 7156.2, 7321.5, 7376.8]
# Your Solution Goes Here
Explanation: There are many more options available. Check out the heatmap documentation for more.
Exercises: Which Visualization?
There are five exercises in this Colab. Each exercise contains a dataset and a query about that dataset. Using Matplotlib or Seaborn, create a visualization that allows any viewer to easily answer the question.
For each exercise, choose one of the following visualizations:
* Pie chart
* Bar chart
* Line graph
* Scatter plot
* Heatmap
You may only use each visualization one time. When you are done, you should have one of each of the five types of visualizations.
In some cases, there may be more than one "right answer", as there are often multiple good ways to visualize data. Use your judgement to choose which one you think is best for each question.
Add titles, labels, color coding, and other visual aids when you can to help the user interpret the charts.
Exercise 1: Bitcoin Prices
We have a list of Bitcoin prices recorded at the end of each week (Sunday) in 2018 and 2019. Create a visualization that allows you to answer the question: Which year, 2018 or 2019, tended to provide better returns for Bitcoin holders?
Student Solution
End of explanation
candy_names = ['Kit Kat', 'Snickers', 'Milky Way', 'Toblerone', 'Twix']
candy_counts = [52, 39, 78, 13, 78]
# Your Solution Goes Here
Explanation: Explanation
Which chart did you choose and why?
Your solution goes here.
Which year seemed to be better for Bitcoin holders?
Your solution goes here.
Exercise 2: Candy
We have a bag of candy. It contains five different kinds of candy, each named below. Create a chart that shows the percent chance that we would pull a Snickers candy out of the bag if we did a blind selection. Call out the chance of choosing a Snickers candy.
Student Solution
End of explanation
dessert_sales = {
'Lava Cake': 14,
'Mousse': 5,
'Chocolate Cake': 12,
'Ice Cream': 19,
'Truffles': 6,
'Brownie': 8,
'Chocolate Chip Cookie': 12,
'Chocolate Pudding': 9,
'Souffle': 10,
'Chocolate Cheesecake': 17,
'Chocolate Chips': 2,
'Fudge': 9,
'Mochi': 13,
}
# Your Solution Goes Here
Explanation: Explanation
Which chart did you choose and why?
Your solution goes here.
What are the percentage odds that you'll choose a Snickers bar when randomly pulling a candy out of the bag?
Your solution goes here.
Exercise 3: Dessert Popularity
A restaurant we're consulting for has a dessert menu that's too big. They want to cut a few items from the menu. In order to keep most of their customers happy, they want to remove only the three least popular desserts from the menu.
We have a list of the desserts that the restaurant serves, as well as a count of the number of times that dessert has been sold in the last week.
Create a visualization that shows the relative popularities of the desserts. Call out the three desserts that should be removed.
Student Solution
End of explanation
cpu_usage = [
[2, 2, 4, 2, 4, 1, 1, 4, 4, 12, 22, 23,
45, 9, 33, 56, 23, 40, 21, 6, 6, 2, 2, 3], # Monday
[1, 2, 3, 2, 3, 2, 3, 2, 7, 22, 45, 44,
33, 9, 23, 19, 33, 56, 12, 2, 3, 1, 2, 2], # Tuesday
[2, 3, 1, 2, 4, 4, 2, 2, 1, 2, 5, 31,
54, 7, 6, 34, 68, 34, 49, 6, 6, 2, 2, 3], # Wednesday
[1, 2, 3, 2, 4, 1, 2, 4, 1, 17, 24, 18,
41, 3, 44, 42, 12, 36, 41, 2, 2, 4, 2, 4], # Thursday
[4, 1, 2, 2, 3, 2, 5, 1, 2, 12, 33, 27,
43, 8, 38, 53, 29, 45, 39, 3, 1, 1, 3, 4], # Friday
[2, 3, 1, 2, 2, 5, 2, 8, 4, 2, 3,
1, 5, 1, 2, 3, 2, 6, 1, 2, 2, 1, 4, 3], # Saturday
[1, 2, 3, 1, 1, 3, 4, 2, 3, 1, 2,
2, 5, 3, 2, 1, 4, 2, 45, 26, 33, 2, 2, 1], # Sunday
]
# Your Solution Goes Here
Explanation: Explanation
Which chart did you choose and why?
Your solution goes here.
Which three desserts should be removed from the menu?
Your solution goes here.
Exercise 4: CPU Usage
We have the hourly average CPU usage for a worker's computer over the course of a week. Each row of data represents a day of the week starting with Monday. Each column of data is an hour in the day starting with 0 being midnight.
Create a chart that shows the CPU usage over the week. You should be able to answer the following questions using the chart:
When does the worker typically take lunch?
Did the worker do work on the weekend?
On which weekday did the worker start working on their computer at the latest hour?
Student Solution
End of explanation
x = [4.61, 5.08, 5.18, 7.82, 10.46, 7.66, 7.6, 9.32, 14.04, 9.95, 4.95, 7.23,
5.21, 8.64, 10.08, 8.32, 12.83, 7.51, 7.82, 6.29, 0.04, 6.62, 13.16, 6.34,
0.09, 10.04, 13.06, 9.54, 11.32, 7.12, -0.67, 10.5, 8.37, 7.24, 9.18,
10.12, 12.29, 8.53, 11.11, 9.65, 9.42, 8.61, -0.67, 5.94, 6.49, 7.57, 3.11,
8.7, 5.28, 8.28, 9.55, 8.33, 13.7, 6.65, 2.4, 3.54, 9.19, 7.51, -0.68,
8.47, 14.82, 5.31, 14.01, 8.75, -0.57, 5.35, 10.51, 3.11, -0.26, 5.74,
8.33, 6.5, 13.85, 9.78, 4.91, 4.19, 14.8, 10.04, 13.47, 3.28]
y = [-2.36, -3.41, 13.01, -2.91, -2.28, 12.83, 13.13, 11.94, 0.93, -2.76, 13.31,
-3.57, -2.33, 12.43, -1.83, 12.32, -0.42, -3.08, -2.98, 12.46, 8.34, -3.19,
-0.47, 12.78, 2.12, -2.72, 10.64, 11.98, 12.21, 12.52, 5.53, 11.72, 12.91,
12.56, -2.49, 12.08, -1.09, -2.89, -1.78, -2.47, 12.77, 12.41, 5.33, -3.23,
13.45, -3.41, 12.46, 12.1, -2.56, 12.51, -2.37, 12.76, 9.69, 12.59, -1.12,
-2.8, 12.94, -3.55, 7.33, 12.59, 2.92, 12.7, 0.5, 12.57, 6.39, 12.84,
-1.95, 11.76, 6.82, 12.44, 13.28, -3.46, 0.7, -2.55, -2.37, 12.48, 7.26,
-2.45, 0.31, -2.51]
# Your Solution Goes Here
Explanation: Explanation
Which chart did you choose and why?
Your solution goes here.
When does the worker typically take lunch?
Your solution goes here.
Did the worker do work on the weekend?
Your solution goes here.
On which weekday did the worker start working on their computer at the latest hour?
Your solution goes here.
Exercise 5: Mushrooms
A researcher is studying mushrooms. They have found a ring of mushrooms and have labelled their coordinates on a plane. Typically mushrooms radiate out from a central starting mushroom. Given the coordinates below, the researcher wants to answer the question: Approximately where on the plane was the original mushroom?
Create a chart that will allow the researcher to approximate the center of the growth.
Student Solution
End of explanation |
217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sky coordinate DSLR aperture photometry yielding untransformed magnitudes
Uses Python 3, astropy, matplotlib, PythonPhot, PhotUtils
Assumes a plate-solved image for RA/Dec determination
Definitions
Imports
Step1: Functions
RA and Dec for a list of stars
Step2: Extract all sources from image
Step3: Convert RA and Dec to local coordinates
Step4: Extract reference (check, comparison) magnitude map from ordered star information list
Step5: Photometry of a list of FITS files, creating a table of times and instrumental magnitudes
Step6: Single image+coordinate photometry, returning a time and instrumental magnitude
Invoked by multi_file_photometry()
Step7: Display an image with target and reference stars annotated, to sanity check local coordinates
Step8: Annotate plot axis with coordinate positions and designations
Invoked by show_image()
Step9: Compute standardised magnitudes given a data frame of all instrumental magnitudes, a list of all star names, a list of row names of interest (e.g. stk-median-g*) in the instrumental magnitude data frame, and a dictionary of comparison star magnitudes
Step10: Compute standardised magnitude given target's instrumental magnitude, a numpy array of comparison star instrumental magnitudes and catalog comparison star magnitudes
Step12: Write AAVSO Extended Upload Format file suitable for upload to WebObs
the results parameter is a list of tuples containing jd, mag, mag_err, band, check_instr_mag for each photometry result
Step13: Inputs
Change these to suit your environment
File settings
Step14: Names or AUIDs for target and comparison stars
Step15: Magnitude limit for comparison star lookups
Step16: Aperture radii and gain
Step17: Outputs
Obtain RA and Dec for selected AUIDs
Step18: Extract all sources from plate-solved image
Step19: Convert RA and Dec to local coordinates
Step20: Find B, G, R files in the FITS file directory
Step21: Aperture location sanity check by visual inspection
Arbitrarily choose the first G FITS file
Step22: Aperture photometry, yielding instrumental magnitudes
Step23: Differential Photometry and Standarised Magnitude | Python Code:
import os
from random import random
# TODO: shouldn't need ordered dictionary now either
from collections import OrderedDict
import numpy as np
import pandas as pd
from astropy.io import fits
from astropy.visualization import astropy_mpl_style
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from matplotlib.patches import Circle
from matplotlib.offsetbox import TextArea, DrawingArea, OffsetImage, AnnotationBbox
plt.style.use(astropy_mpl_style)
%matplotlib inline
from PythonPhot import aper
import requests, math, glob
from photutils import DAOStarFinder
from astropy.stats import mad_std
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
import astropy.units as u
from photutils import aperture_photometry, CircularAperture
import warnings
warnings.filterwarnings('ignore')
Explanation: Sky coordinate DSLR aperture photometry yielding untransformed magnitudes
Uses Python 3, astropy, matplotlib, PythonPhot, PhotUtils
Assumes a plate-solved image for RA/Dec determination
Definitions
Imports
End of explanation
def get_ra_and_dec(stars, maglimit):
result = []
for star in stars:
vsp_template = 'https://www.aavso.org/apps/vsp/api/chart/?format=json&fov=10&star={}&maglimit={}'
query = vsp_template.format(star, maglimit)
record = requests.get(query).json()
# assume that first element corresponds to the star
if len(record["photometry"]) != 0 and record["photometry"][0]["auid"] == record["auid"]:
bands = record["photometry"][0]["bands"]
else:
bands = []
result.append({"star":record["star"], "ra":record["ra"], "dec":record["dec"], "bands":bands})
return result
Explanation: Functions
RA and Dec for a list of stars
End of explanation
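As a quick sanity check of the function above, a hypothetical call (the star list and magnitude limit mirror the inputs used later in this notebook, and network access to the AAVSO VSP API is assumed):
# Hypothetical usage sketch for get_ra_and_dec(); requires network access to the VSP API
sample_info = get_ra_and_dec(['eta Car', '000-BBR-533'], maglimit=7)
for entry in sample_info:
    print(entry['star'], entry['ra'], entry['dec'], len(entry['bands']))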
def extract_all_sources(fits_path, fwhm, source_snr=20):
hdulist = fits.open(fits_path)
data = hdulist[0].data.astype(float)
header = hdulist[0].header
wcs = WCS(header)
bkg_sigma = mad_std(data)
daofind = DAOStarFinder(fwhm=fwhm, threshold=source_snr*bkg_sigma)
sources = daofind(data)
return sources, wcs
Explanation: Extract all sources from image
End of explanation
def get_local_coords(ra_decs, wcs, radius=4):
local_position_map = OrderedDict()
for ra_dec in ra_decs:
star_coord = SkyCoord("{} {}".format(ra_dec['ra'], ra_dec['dec']), unit=(u.hourangle, u.deg))
xy = SkyCoord.to_pixel(star_coord, wcs=wcs, origin=1)
x = xy[0].item(0)
y = xy[1].item(0)
for source in sources:
if(source['xcentroid']-radius <= x <= source['xcentroid']+radius) and \
source['ycentroid']-radius <= y <= source['ycentroid']+radius:
local_position_map[ra_dec["star"]] = (x, y)
return local_position_map
Explanation: Convert RA and Dec to local coordinates
End of explanation
def get_ref_mags_for_band(star_info_list, desired_band):
mags = {}
# target star's band list will be empty
for info in star_info_list:
for band in info["bands"]:
#print("{}: {}".format(info["star"], band))
if band["band"] == desired_band:
mags[info["star"]] = band["mag"]
break
return mags
Explanation: Extract reference (check, comparison) magnitude map from ordered star information list
End of explanation
def multi_file_photometry(fits_root, fits_files, data_index, coords, dataframe,
aperture_radius, inner_sky_radius, outer_sky_radius,
gain=1, zeropoint=0, suffix='.fit'):
for fits_file in fits_files:
fits_file_path = os.path.join(fits_root, fits_file)
hdus = fits.open(fits_file_path)
instr_mags = []
for x, y in coords:
time, mag = aperture_photometry(hdus[data_index], x, y,
aperture_radius, inner_sky_radius, outer_sky_radius,
gain, zeropoint)
instr_mags.append(mag)
dataframe[fits_file[0:fits_file.rindex(suffix)]] = [time] + instr_mags
Explanation: Photometry of a list of FITS files, creating a table of times and instrumental magnitudes
End of explanation
def aperture_photometry(hdu, x, y,
aperture_radius, inner_sky_radius, outer_sky_radius,
gain, zeropoint):
image_data = hdu.data
time = hdu.header[time_name]
mag, magerr, flux, fluxerr, sky, skyerr, badflag, outstr = \
aper.aper(image_data, x, y, phpadu=gain,
apr=aperture_radius, zeropoint=zeropoint,
skyrad=[inner_sky_radius, outer_sky_radius],
exact=True)
return time, mag[0]
Explanation: Single image+coordinate photometry, returning a time and instrumental magnitude
Invoked by multi_file_photometry()
End of explanation
def show_image(image_data, coord_map, aperture_size, annotate=True, vmin=10, vmax=200, figx=20, figy=10):
fig = plt.figure(figsize=(figx, figy))
plt.imshow(image_data, cmap='gray', vmin=vmin, vmax=vmax)
plt.gca().invert_yaxis()
plt.colorbar()
if annotate:
for designation in coord_map:
xy = coord_map[designation]
annotate_image(fig.axes[0], designation, xy, aperture_size)
plt.show()
Explanation: Display an image with target and reference stars annotated, to sanity check local coordinates
End of explanation
def annotate_image(axis, designation, xy, aperture_size):
axis.plot(xy[0], xy[1], 'o', markersize=aperture_size,
markeredgecolor='r', markerfacecolor='none',
markeredgewidth=2)
offsetbox = TextArea(designation, minimumdescent=False)
ab = AnnotationBbox(offsetbox, xy,
xybox=(-20, 40+random()*10-10),
xycoords='data',
boxcoords="offset points",
arrowprops=dict(arrowstyle="->"))
axis.add_artist(ab)
Explanation: Annotate plot axis with coordinate positions and designations
Invoked by show_image()
End of explanation
def standardised_magnitudes(instr_mag_df_trans, star_names, row_names, catalog_mags):
# exclude target star and check star to get list of possible comparison star names
comp_names = star_names[2:]
# obtain available comparison star names and magnitudes, ignoring any star not in catalog
avail_comp_names = [name for name in comp_names if name in catalog_mags.keys()]
avail_comp_mags = [catalog_mags[name] for name in comp_names if name in catalog_mags.keys()]
target_name = star_names[0]
#print(avail_comp_names, avail_comp_mags, target_name)
std_mags = np.array([])
for row_name in row_names:
# get instrumental magnitudes for the current row of data and compute
# standardised magnitude of the target star
comp_instr_mags = [instr_mag_df_trans.loc[row_name][comp_name] for comp_name in avail_comp_names]
target_mag = standardised_magnitude(instr_mag_df_trans.loc[row_name][target_name],
np.array(comp_instr_mags),
np.array(avail_comp_mags))
# collect standardised magnitudes for each row
std_mags = np.append(std_mags, target_mag)
# TODO: also compute/return check star mags and look at std error; is that what spreadsheet uses?
return std_mags
Explanation: Compute standardised magnitudes given a data frame of all instrumental magnitudes, a list of all star names, a list of row names of interest (e.g. stk-median-g*) in the instrumental magnitude data frame, and a dictionary of comparison star magnitudes
End of explanation
def standardised_magnitude(target_instr_mag, comp_instr_mags, catalog_comp_mags):
deltas = target_instr_mag - comp_instr_mags
mags = deltas + catalog_comp_mags
return mags.mean()
Explanation: Compute standardised magnitude given target's instrumental magnitude, a numpy array of comparison star instrumental magnitudes and catalog comparison star magnitudes
End of explanation
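As a worked illustration of the ensemble calculation above (all numbers are invented, not taken from this data set): a target instrumental magnitude of -8.0 with comparison instrumental magnitudes of -7.5 and -9.0 and catalog magnitudes of 6.5 and 5.0 gives deltas of -0.5 and 1.0, per-comparison estimates of 6.0 and 6.0, and a mean of 6.0.
# Illustrative only -- made-up instrumental and catalog magnitudes
example_comp_instr = np.array([-7.5, -9.0])
example_catalog_mags = np.array([6.5, 5.0])
print(standardised_magnitude(-8.0, example_comp_instr, example_catalog_mags))  # -> 6.0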
def write_webobs_file(path, obscode, cal_software, target, check, airmass, results, chart_id, comment):
    header_template = """#TYPE=EXTENDED
#OBSCODE={0}
#SOFTWARE={1}, Python scripts, Jupyter notebook
#DELIM=,
#DATE=JD
#OBSTYPE=DSLR
#NAME,DATE,MAG,MERR,FILT,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AMASS,GROUP,CHART,NOTES
"""
if type(airmass) is float:
airmass = "{0:1.6f}".format(airmass)
result_template = "{0},{1:1.6f},{2:1.6f},{3:1.6f},{4},NO,STD,ENSEMBLE,NA,{5},{6:1.6f},{7},NA,{8},{9}\n"
with open(path, "w") as webobs:
webobs.write(header_template.format(obscode, cal_software))
for result in results:
jd, mag, mag_err, band, check_instr_mag = result
webobs.write(result_template.format(target, jd, mag, mag_err, band, check, check_instr_mag,
airmass, chart_id, comment))
Explanation: Write AAVSO Extended Upload Format file suitable for upload to WebObs
the results parameter is a list of tuples containing jd, mag, mag_err, band, check_instr_mag for each photometry result
End of explanation
# Output file directory
output_file_root = "/Users/david/aavso/dslr-photometry/working"
# WebObs file
webobs_file = "webobs.csv"
# Instrumental magnitude output file path
instr_mag_csv_file = "instr_mags.csv"
# FITS file directory
fits_root = "/Users/david/aavso/dslr-photometry/working"
# Plate-solved FITS file name
wcs_file = "stk-median-g1-wcs.fit"
# B, G, and R FITS file prefixes to identify files,
# e.g. stk-median-g matches stk-median-g1.fit, stk-median-g2.fit, ...
fits_prefixes = ["stk-median-b", "stk-median-g", "stk-median-r"]
# FITS file data HDU index
data_index = 0
# Time column name
time_name = "JD"
Explanation: Inputs
Change these to suit your environment
File settings
End of explanation
names = ["eta Car","000-BBR-533","000-BBR-603","000-BBS-066","000-BBR-573","000-BBR-998","000-BBR-795","000-BBR-563"]
Explanation: Names or AUIDs for target and comparison stars
End of explanation
maglimit = 7
Explanation: Magnitude limit for comparison star lookups
End of explanation
# FWHM (e.g. from PSF in IRIS)
fwhm = 6
# Aperture radii
measurement_aperture = 9
inner_sky_annulus = 12
outer_sky_annulus = 20
# ph/ADU
# Note: PythonPhot's aperture photometry function takes a phadu parameter.
# Assumption: this is photons/ADU or e-/ADU, i.e. gain.
gain=1.67
Explanation: Aperture radii and gain
End of explanation
target_comp_ra_dec = get_ra_and_dec(names, maglimit=maglimit)
# Question: why does 000-BBR-563 have no bands?
target_comp_ra_dec
Explanation: Outputs
Obtain RA and Dec for selected AUIDs
End of explanation
sources, wcs = extract_all_sources(wcs_file, fwhm=fwhm)
sources
Explanation: Extract all sources from plate-solved image
End of explanation
position_map = get_local_coords(target_comp_ra_dec, wcs)
position_map
Explanation: Convert RA and Dec to local coordinates
End of explanation
files = os.listdir(fits_root)
fits_files = []
for fits_prefix in fits_prefixes:
fits_files += sorted([file for file in files if fits_prefix in file and file.find("wcs") == -1])
Explanation: Find B, G, R files in the FITS file directory
End of explanation
fits_file = fits_files[5]
print(fits_file)
hdus = fits.open(os.path.join(fits_root, fits_file))
image_data = hdus[data_index].data
median = np.median(image_data)
show_image(image_data, position_map, measurement_aperture, annotate=True, vmin=10, vmax=median*4)
Explanation: Aperture location sanity check by visual inspection
Arbitrarily choose the first G FITS file
End of explanation
# Create empty table with time and object headers
pd.options.display.float_format = '{:,.6f}'.format
instr_mag_df = pd.DataFrame()
names = [name for name in position_map]
instr_mag_df['name'] = [time_name] + names
instr_mag_df.set_index('name', inplace=True)
# Carry out photometry on B, G, R FITS files, yielding instrumental magnitudes
positions = position_map.values()
multi_file_photometry(fits_root, fits_files, data_index, positions, instr_mag_df,
measurement_aperture, inner_sky_annulus, outer_sky_annulus, gain)
# Save photometry table as CSV
instr_mag_csv_path = os.path.join(output_file_root, instr_mag_csv_file)
instr_mag_df.T.to_csv(instr_mag_csv_path)
# Display photometry table
instr_mag_df.T
Explanation: Aperture photometry, yielding instrumental magnitudes
End of explanation
b_row_names = [row_name for row_name in instr_mag_df.T.index if "-b" in row_name]
g_row_names = [row_name for row_name in instr_mag_df.T.index if "-g" in row_name]
r_row_names = [row_name for row_name in instr_mag_df.T.index if "-r" in row_name]
catalog_v_mags = get_ref_mags_for_band(target_comp_ra_dec, "V")
tg = standardised_magnitudes(instr_mag_df.T, names, g_row_names, catalog_v_mags)
tg.mean(), np.median(tg), tg.std()
catalog_b_mags = get_ref_mags_for_band(target_comp_ra_dec, "B")
tb = standardised_magnitudes(instr_mag_df.T, names, b_row_names, catalog_b_mags)
tb.mean(), np.median(tb), tb.std()
obscode = "BDJB"
cal_software = "IRIS"
target = names[0]
check = names[1]
airmass = "NA" # TODO: compute (look at AAVSO spreadsheet)
chart_id = "X15962DX"
comment = "Canon 1100D; 100mm; ISO 100; f2.0; 5 sec x 20 images median stacked in groups of 5"
jd = instr_mag_df.T.iloc[0]["JD"]
check_instr_b = instr_mag_df.T.loc[b_row_names][check].mean()
check_instr_g = instr_mag_df.T.loc[g_row_names][check].mean()
results = [(jd, tb.mean(), tb.std(), "TB", check_instr_b),
(jd, tg.mean(), tg.std(), "TG", check_instr_g)]
webobs_path = os.path.join(output_file_root, webobs_file)
write_webobs_file(webobs_path, obscode, cal_software, target, check, airmass, results, chart_id, comment)
# Questions:
# - is mean or median best per T[BGR] row?
# - std() or some other std dev function (e.g. population vs sample)
# - how to compute R; use catalog B-V, V-R? may just want to report TG, TB
# - is there a role for linear regression here or only for transformation coefficients?
# - can/should we do airmass correction independent of transformation?
Explanation: Differential Photometry and Standardised Magnitude
End of explanation |
218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic objects
A striplog depends on a hierarchy of objects. This notebook shows the objects and their basic functionality.
Lexicon
Step1: <hr />
Lexicon
Step2: Most of the lexicon works 'behind the scenes' when processing descriptions into Rock components.
Step3: <hr />
Component
A set of attributes. All are optional.
Step4: We define a new rock with a Python dict object
Step5: The Rock has a colour
Step6: And it has a summary, which is generated from its attributes.
Step7: We can format the summary if we wish
Step8: We can compare rocks with the usual == operator
Step9: In order to create a Component object from text, we need a lexicon to compare the text against. The lexicon describes the language we want to extract, and what it means.
Step10: <hr />
Interval
Intervals are where it gets interesting. An interval can have
Step11: I might make an Interval explicitly from a Component...
Step12: ... or I might pass a description and a lexicon and Striplog will parse the description and attempt to extract structured Component objects from it.
Step13: Notice I only got one Component, even though the description contains a subordinate lithology. This is the default behaviour, we have to ask for more components
Step14: Intervals have a primary attribute, which holds the first component, no matter how many components there are.
Step15: Ask for the summary to see the thickness and a Rock summary of the primary component. Note that the format code only applies to the Rock part of the summary.
Step16: We can compare intervals, based on their thickness. Let's make one which is 5 m thicker than the previous one.
Step17: Technical aside
Step18: We can combine intervals with the + operator. (However, you cannot subtract intervals.)
Step19: If we add a number to an interval, it adds thickness to the base.
Step20: Adding a rock adds a (minor) component and adds to the description. | Python Code:
import striplog
striplog.__version__
Explanation: Basic objects
A striplog depends on a hierarchy of objects. This notebook shows the objects and their basic functionality.
Lexicon: A dictionary containing the words and word categories to use for rock descriptions.
Component: A set of attributes.
Interval: One element from a Striplog — consists of a top, base, a description, one or more Components, and a source.
Striplogs (a set of Intervals) are described in a separate notebook.
Decors and Legends are also described in another notebook.
End of explanation
from striplog import Lexicon
print(Lexicon.__doc__)
lexicon = Lexicon.default()
lexicon
lexicon.synonyms
Explanation: <hr />
Lexicon
End of explanation
lexicon.find_synonym('Halite')
s = "grysh gn ss w/ sp gy sh"
lexicon.expand_abbreviations(s)
Explanation: Most of the lexicon works 'behind the scenes' when processing descriptions into Rock components.
End of explanation
from striplog import Component
print(Component.__doc__)
Explanation: <hr />
Component
A set of attributes. All are optional.
End of explanation
r = {'colour': 'grey',
'grainsize': 'vf-f',
'lithology': 'sand'}
rock = Component(r)
rock
Explanation: We define a new rock with a Python dict object:
End of explanation
rock.colour
Explanation: The Rock has a colour:
End of explanation
rock.summary()
Explanation: And it has a summary, which is generated from its attributes.
End of explanation
rock.summary(fmt="My rock: {lithology} ({colour}, {GRAINSIZE})")
Explanation: We can format the summary if we wish:
End of explanation
rock2 = Component({'grainsize': 'VF-F',
'colour': 'Grey',
'lithology': 'Sand'})
rock == rock2
Explanation: We can compare rocks with the usual == operator:
End of explanation
rock3 = Component.from_text('Grey fine sandstone.', lexicon)
rock3
rock4 = Component.from_text('Grey, sandstone, vf-f ', lexicon)
rock4
Explanation: In order to create a Component object from text, we need a lexicon to compare the text against. The lexicon describes the language we want to extract, and what it means.
End of explanation
from striplog import Interval
print(Interval.__doc__)
Explanation: <hr />
Interval
Intervals are where it gets interesting. An interval can have:
a top
a base
a description (in natural language)
a list of Components
Intervals don't have a 'way up', it's implied by the order of top and base.
End of explanation
Interval(10, 20, components=[rock])
Explanation: I might make an Interval explicitly from a Component...
End of explanation
Interval(20, 40, "Grey sandstone with shale flakes.", lexicon=lexicon)
Explanation: ... or I might pass a description and a lexicon and Striplog will parse the description and attempt to extract structured Component objects from it.
End of explanation
interval = Interval(20, 40, "Grey sandstone with black shale flakes.", lexicon=lexicon, max_component=2)
interval
Explanation: Notice I only got one Component, even though the description contains a subordinate lithology. This is the default behaviour; we have to ask for more components:
End of explanation
interval.primary
Explanation: Intervals have a primary attribute, which holds the first component, no matter how many components there are.
End of explanation
interval.summary(fmt="{colour} {lithology} {amount}")
Explanation: Ask for the summary to see the thickness and a Rock summary of the primary component. Note that the format code only applies to the Rock part of the summary.
End of explanation
interval_2 = Interval(40, 65, "Red sandstone.", lexicon=lexicon)
Explanation: We can compare intervals, based on their thickness. Let's make one which is 5 m thicker than the previous one.
End of explanation
print(interval_2 == interval)
print(interval_2 > interval)
print(max(interval, interval_2).summary())
Explanation: Technical aside: The Interval class is a functools.total_ordering, so providing __eq__ and one other comparison (such as __lt__) in the class definition means that instances of the class have implicit order. So you can use sorted on a Striplog, for example.
It wasn't clear to me whether this should compare tops (say), so that '>' might mean 'deeper', or if it should be keyed on thickness. I chose the latter, and implemented other comparisons instead.
End of explanation
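For readers who have not met functools.total_ordering before, a minimal toy sketch (not striplog code) of how __eq__ plus a single ordering method yields the full set of comparisons:
from functools import total_ordering

@total_ordering
class Toy:
    def __init__(self, thickness):
        self.thickness = thickness
    def __eq__(self, other):
        return self.thickness == other.thickness
    def __lt__(self, other):
        return self.thickness < other.thickness

# >, >=, and <= are generated automatically, so max() and sorted() just work
print(Toy(20) > Toy(25), max(Toy(20), Toy(25)).thickness)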
interval_2 + interval
Explanation: We can combine intervals with the + operator. (However, you cannot subtract intervals.)
End of explanation
interval + 5
Explanation: If we add a number to an interval, it adds thickness to the base.
End of explanation
interval + rock3
Explanation: Adding a rock adds a (minor) component and adds to the description.
End of explanation |
219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis Ready Data Tutorial Part 2
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 4.2
Step6: Step 5
Step7: Step 5.2
Step8: Step 6 | Python Code:
from copy import copy
import datetime
import os
from pathlib import Path
from pprint import pprint
import shutil
import time
from zipfile import ZipFile
import numpy as np
from planet import api
from planet.api import downloader, filters
Explanation: Analysis Ready Data Tutorial Part 2: Use Case 1
Time-series analysis (e.g. change detection and trend detection) is a powerful application of satellite imagery. However, a great deal of processing is required to prepare imagery for analysis. Analysis Ready Data (ARD), preprocessed time-series stacks of overhead imagery, allow for time-series analysis without any additional processing of the imagery. See Analysis Data Defined for an excellent introduction and discussion on ARD.
In Part 1 of this tutorial, we introduced ARD and covered the how and whys of using the Data and Orders APIs to create and interpret ARD.
This second part of the tutorial focuses on the first of two use cases. The use case addressed in this tutorial is:
As a software engineer at an ag-tech company, I'd like to be able to order Planet imagery programmatically in a way that enables the data scientist at my organization to create time-series algorithms (e.g. monitoring ndvi curves over time) without further data cleaning and processing.
Please see the first part of the tutorial for an introduction to the Data and Orders APIs along with best practices. A lot of functionality developed in that tutorial will be copied here in a compact form.
Introduction
Two things are interesting about this use case. First, we are calculating NDVI, and second, we are compositing scenes together. What is NDVI and what is compositing and why do we want to do it?
Great questions!
First, NDVI stands for normalized difference vegetation index. It is used a LOT to find out if vegetation is growing. You can find out more about NDVI at USGS and Wikipedia. What we care about here is that NDVI uses the red and near-infrared bands of an image and returns one band with values that range from -1 to 1. So, we expect a single-band image for each order.
Compositing is a way to stitch multiple scenes together for maximum coverage. We want this because for a time series, we just want one image for each date and we want that one image to have the most coverage to minimize holes in our data. The composite tool takes in multiple scenes and returns one image. If we feed it scenes from a whole timestack, we still just get one image back! So, to avoid that disaster, we group our scenes by date and only composite the scenes that were collected on the same date.
Implementation
The use case we will cover is: As a software engineer at an ag-tech company, I'd like to be able to order Planet imagery programmatically in a way that enables the data scientist at my organization to create time-series algorithms (e.g. monitoring ndvi curves over time) without further data cleaning and processing.
For this use case, the area of interest and time range are not specified. The need for no further processing indicates we should specify a strict usable pixel data filter. For time-series analysis the daily coverage of PS satellites is ideal. For our time-series analysis, we would like a single image that covers the entire area of interest (AOI). However, it may take multiple scenes to cover the entire AOI. Therefore, we will use the Composite tool to make a composite for each day in the time series analysis. This is a little tricky because the Composite tool just composites all of the scenes associated with the ids ordered. So we need to parse the scene ids we got from the Data API to get scene ids for each day, then submit an order for each day.
To summarize, these are the steps:
1. Initialize API client
1. Search Data API
1. Group IDs by Date
1. Submit Orders
1. Download Orders
1. Unzip and Verify Orders
Note that, due to the processing-intensiveness of visualizing the NDVI images and UDM2s, we will be covering visualization in the next notebook, Analysis Ready Data Tutorial Part 2: Use Case 1 - Visualization
Import Dependencies
End of explanation
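As a tiny, self-contained illustration of the NDVI band math described above (the reflectance values are made up; in the orders below the same expression, (b4 - b3) / (b4 + b3), is applied server-side by the bandmath tool):
# made-up red (b3) and near-infrared (b4) reflectance values
red = np.array([0.10, 0.25, 0.40])
nir = np.array([0.50, 0.30, 0.38])
ndvi = (nir - red) / (nir + red)
print(ndvi)  # values fall between -1 and 1; higher generally means more vigorous vegetation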
# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')
client = api.ClientV1(api_key=API_KEY)
Explanation: Step 1: Initialize API client
End of explanation
# these functions were developed in the best practices tutorial (part 1)
# create an api request from the search specifications
def build_request(aoi_geom, start_date, stop_date):
'''build a data api search request for clear PSScene imagery'''
query = filters.and_filter(
filters.geom_filter(aoi_geom),
filters.range_filter('clear_percent', gte=90),
filters.date_range('acquired', gt=start_date),
filters.date_range('acquired', lt=stop_date)
)
return filters.build_search_request(query, ['PSScene'])
def search_data_api(request, client, limit=500):
result = client.quick_search(request)
# this returns a generator
return result.items_iter(limit=limit)
# define test data for the filter
test_start_date = datetime.datetime(year=2019,month=4,day=1)
test_stop_date = datetime.datetime(year=2019,month=5,day=1)
# iowa crops aoi
test_aoi_geom = {
"type": "Polygon",
"coordinates": [
[
[-93.299129, 42.699599],
[-93.299674, 42.812757],
[-93.288436, 42.861921],
[-93.265332, 42.924817],
[-92.993873, 42.925124],
[-92.993888, 42.773637],
[-92.998396, 42.754529],
[-93.019154, 42.699988],
[-93.299129, 42.699599]
]
]
}
request = build_request(test_aoi_geom, test_start_date, test_stop_date)
print(request)
items = list(search_data_api(request, client))
print(len(items))
Explanation: Step 2: Search Data API
The goal of this step is to get the scene ids that meet the search criteria for this use case.
End of explanation
# check out an item just for fun
pprint(items[0])
# item = items[0]
# acquired_date = item['properties']['acquired'].split('T')[0]
# acquired_date
def get_acquired_date(item):
return item['properties']['acquired'].split('T')[0]
acquired_dates = [get_acquired_date(item) for item in items]
unique_acquired_dates = set(acquired_dates)
unique_acquired_dates
def get_date_item_ids(date, all_items):
return [i['id'] for i in all_items if get_acquired_date(i) == date]
def get_ids_by_date(items):
acquired_dates = [get_acquired_date(item) for item in items]
unique_acquired_dates = set(acquired_dates)
ids_by_date = dict((d, get_date_item_ids(d, items))
for d in unique_acquired_dates)
return ids_by_date
ids_by_date = get_ids_by_date(items)
pprint(ids_by_date)
Explanation: Step 3: Group IDs by Date
End of explanation
def build_order(ids, name, aoi_geom):
# specify the PSScene 4-Band surface reflectance product
# make sure to get the *_udm2 bundle so you get the udm2 product
# note: capitalization really matters in item_type when using planet client orders api
item_type = 'PSScene'
bundle = 'analytic_sr_udm2'
orders_request = {
'name': name,
'products': [{
'item_ids': ids,
'item_type': item_type,
'product_bundle': bundle
}],
'tools': get_tools(aoi_geom),
'delivery': {
'single_archive': True,
'archive_filename':'{{name}}_{{order_id}}.zip',
'archive_type':'zip'
},
'notifications': {
'email': False
},
}
return orders_request
def get_tools(aoi_geom):
# clip to AOI
clip_tool = {'clip': {'aoi': aoi_geom}}
# convert to NDVI
bandmath_tool = {'bandmath': {
"pixel_type": "32R",
"b1": "(b4 - b3) / (b4+b3)"
}}
# composite into one image
composite_tool = {
"composite":{}
}
tools = [clip_tool, bandmath_tool, composite_tool]
return tools
# uncomment to see what an order request would look like
# pprint(build_order(['id'], 'demo', test_aoi_geom), indent=4)
def get_orders_requests(ids_by_date, aoi_geom):
order_requests = [build_order(ids, date, aoi_geom)
for date, ids in ids_by_date.items()]
return order_requests
order_requests = get_orders_requests(ids_by_date, test_aoi_geom)
print(len(order_requests))
pprint(order_requests[0])
Explanation: Step 4: Submit Orders
Now that we have the scene ids for each collect date, we can create the orders for each date. The output of each order is a single zip file that contains one composited scene and one composited UDM2.
For this step we will just use the python api. See part 1 for a demonstration of how to use the CLI.
Step 4.1: Build Order Requests
End of explanation
def create_orders(order_requests, client):
orders_info = [client.create_order(r).get()
for r in order_requests]
order_ids = [i['id'] for i in orders_info]
return order_ids
# testing: lets just create two orders
order_limit = 2
order_ids = create_orders(order_requests[:order_limit], client)
order_ids
Explanation: Step 4.2: Submit Orders
In this section, for the sake of demonstration, we limit our orders to 2. Feel free to increase this limit if you want!
End of explanation
def poll_for_success(order_ids, client, num_loops=50):
count = 0
polling = copy(order_ids)
completed = []
while(count < num_loops):
count += 1
states = []
for oid in copy(polling):
order_info = client.get_individual_order(oid).get()
state = order_info['state']
            states.append(state)
print('{}:{}'.format(oid, state))
success_states = ['success', 'partial']
if state == 'failed':
                raise Exception(order_info)
elif state in success_states:
polling.remove(oid)
                completed.append(oid)
if not len(polling):
print('done')
break
print('--')
time.sleep(30)
poll_for_success(order_ids, client)
Explanation: Step 5: Download Orders
Step 5.1: Wait Until Orders are Successful
Before we can download the orders, they have to be prepared on the server.
End of explanation
data_dir = os.path.join('data', 'use_case_1')
# make the download directory if it doesn't exist
Path(data_dir).mkdir(parents=True, exist_ok=True)
def poll_for_download(dest, endswith, num_loops=50):
count = 0
while(count < num_loops):
count += 1
matched_files = (f for f in os.listdir(dest)
if os.path.isfile(os.path.join(dest, f))
and f.endswith(endswith))
match = next(matched_files, None)
if match:
match = os.path.join(dest, match)
print('downloaded')
break
else:
print('waiting...')
time.sleep(10)
return match
def download_orders(order_ids, client, dest='.', limit=None):
files = []
for order_id in order_ids:
print('downloading {}'.format(order_id))
filename = download_order(order_id, dest, client, limit=limit)
if filename:
files.append(filename)
return files
def download_order(order_id, dest, client, limit=None):
'''Download an order by given order ID'''
# this returns download stats but they aren't accurate or informative
# so we will look for the downloaded file on our own.
dl = downloader.create(client, order=True)
urls = client.get_individual_order(order_id).items_iter(limit=limit)
dl.download(urls, [], dest)
endswith = '{}.zip'.format(order_id)
filename = poll_for_download(dest, endswith)
return filename
downloaded_files = download_orders(order_ids, client, data_dir)
downloaded_files
Explanation: Step 5.2: Run Download
For this step we will use the planet python orders API because we want to be able to download multiple orders at once, something the CLI does not yet support.
End of explanation
def unzip(filename, overwrite=False):
location = Path(filename)
zipdir = location.parent / location.stem
if os.path.isdir(zipdir):
if overwrite:
print('{} exists. overwriting.'.format(zipdir))
shutil.rmtree(zipdir)
else:
raise Exception('{} already exists'.format(zipdir))
with ZipFile(location) as myzip:
myzip.extractall(zipdir)
return zipdir
zipdirs = [unzip(f, overwrite=True) for f in downloaded_files]
pprint(zipdirs)
Explanation: Step 6: Unzip and Verify Orders
In this step we will simply unzip the orders and view one of the ordered composite images.
6.1: Unzip Order
In this section, we will unzip each order into a directory named after the downloaded zip file.
End of explanation |
220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Development of a Reader for Siemens Trend CSV files
This notebook was used for development of the final Siemens reader. The code was then exported out of here and put into the ddc_readers.py module.
Step2: Use this function to clean up point names and location names
Step3: Use the Python csv module to read the file
Step4: Create the Final DataFrame by concatenating a DataFrame for each Point
Step5: Plot a couple of points, getting rid of gaps by using dropna()
Step6: Export this Code to a File | Python Code:
import csv
import string
import datetime
import pandas as pd
import numpy as np
# import matplotlib pyplot commands
from matplotlib.pyplot import *
# Show Plots in the Notebook
%matplotlib inline
rcParams['figure.figsize']= (10, 8) # set Chart Size
rcParams['font.size'] = 14 # set Font size in Chart
# 'style' the plot using 'bmh' style
style.use('bmh')
Explanation: Development of a Reader for Siemens Trend CSV files
This notebook was used for development of the final Siemens reader. The code was then exported out of here and put into the ddc_readers.py module.
End of explanation
def clean_string(s):
    """
    Function that "cleans" a string by first stripping leading and trailing
    whitespace and then substituting an underscore for all other whitespace
    and punctuation. After that substitution is made, any consecutive occurrences
    of the underscore character are reduced to one occurrence.
    Finally, the string is converted to lower case.
    Returns the cleaned string.

    Input Parameters:
    -----------------
    s: The string to clean.
    """
to_sub = string.whitespace + string.punctuation
trans_table = str.maketrans(to_sub, len(to_sub) * '_')
fixed = str.translate(s.strip(), trans_table)
while True:
new_fixed = fixed.replace('_' * 2, '_')
if new_fixed == fixed:
break
fixed = new_fixed
return fixed.lower()
Explanation: Use this function to clean up point names and location names
End of explanation
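For example, a raw Siemens point name cleans up like this (output computed from the function above):
# mixed case, spaces and punctuation all collapse to single underscores, then lower case
print(clean_string('BH 100W: TEC Room Temp'))   # -> 'bh_100w_tec_room_temp'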
file_name = 'data/siemens_sample.csv'
reader = csv.reader(open(file_name))
include_location = False # if True include location in point ID
# For running in the notebook, this controls how many rows are shown
# for each execution of the cell below.
# Set to a very large number if you want to process the entire file
# in one execution of the cell below.
num_rows_to_show = 300000
# Going to put the data into a dictionary, keyed by the name of the
# point
data_dict = {}
# repeatedly execute this cell to step through chunks of the data
row_ct = 0
for row in reader:
f1 = row[0] # the first field
if '/' in f1: # Look for the / in the Date
# this is a row with a data point in it.
# create a date/time string and parse into a Python datetime
ts = '{} {}'.format(row[0], row[1])
ts = datetime.datetime.strptime(ts, '%m/%d/%Y %H:%M:%S')
# get the value, which is usually a number, but sometimes a string.
# first try to convert to a number, and if it errors, just return it as a string
try:
val = float(row[2])
except:
val = row[2]
tstamps, vals = data_dict.get(pt_id, ([], []))
tstamps.append(ts)
vals.append(val)
data_dict[pt_id] = (tstamps, vals)
elif f1.startswith('Point'):
# This row has a Point ID in it
pt_id = clean_string(row[1])
elif f1.startswith('Trend L'):
# This row has a Location code in it. If requested, add it
# to the point name.
if include_location:
pt_id = '{}_{}'.format(clean_string(row[1]), pt_id)
row_ct += 1
if row_ct == num_rows_to_show: break
Explanation: Use the Python csv module to read the file
End of explanation
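For context, the parsing logic above implies a trend CSV shaped roughly like the fragment below; the exact field contents are hypothetical, but each point is introduced by a 'Point' row, optionally followed by a 'Trend L...' location row, and then date/time/value rows.
# Hypothetical fragment of a Siemens trend CSV, inferred from the parser above (not real data):
# Point,"BH 100W TEC Room Temp"
# Trend Location,"Building 1"
# 01/15/2017,00:15:00,71.2
# 01/15/2017,00:30:00,71.4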
df_final = pd.DataFrame()
for pt_id in data_dict.keys():
    # for this point, retrieve the timestamps and values from the dictionary
tstamps, vals = data_dict[pt_id]
# make a DataFrame, indexed on the timestamps, with the point ID as the column
# name.
df = pd.DataFrame(vals, index=tstamps, columns=[pt_id])
# Sometimes there are duplicate timestamps due to Alarms, I think.
# Only take the value from the last timestamp of the duplicate timestamps.
df = df.groupby(level=0).last()
# Add this DataFrame to the final DataFrame. Indexes are matched up
# or added if they don't already exist in the final frame.
df_final = pd.concat([df_final, df], axis=1)
# Save the final DataFrame to a CSV file to be viewed, perhaps by Excel.
df_final.to_csv('df_final.csv')
Explanation: Create the Final DataFrame by concatenating a DataFrame for each Point
End of explanation
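A minimal illustration (toy data, not from the log) of how pd.concat with axis=1 aligns on the index, which is what lets every point keep its own set of timestamps:
a = pd.DataFrame({'pt_a': [1.0, 2.0]}, index=[1, 2])
b = pd.DataFrame({'pt_b': [5.0]}, index=[2])
print(pd.concat([a, b], axis=1))  # index 1 gets NaN for pt_b; index 2 has both columns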
df_final.bh_100w_tec_room_temp.dropna().plot()
df_final.bh_uhe_tec_room_temp.dropna().plot()
Explanation: Plot a couple of points, getting rid of gaps by using dropna()
End of explanation
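Why dropna() helps here: matplotlib breaks the line at every NaN, so a sparse column plots as disconnected segments unless the gaps are removed first. A small made-up series shows the difference:
s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])
s.plot(style='o-', label='with gaps')          # line breaks at the NaNs
s.dropna().plot(style='s--', label='dropna')   # continuous line through the remaining points
legend()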
# Convert the notebook to a script.
# I usually have this commented out
# !jupyter nbconvert --to script siemens_reader.ipynb
Explanation: Export this Code to a File
End of explanation |
221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simplex.py
This tutorial walks through all of the methods provided by the Simplex.py library. Of course, applying many of them in the right sequence can lead to the solution of a linear programming problem. However, obtaining a solution that way is much longer and more complex; it is far easier to use the SimplexSolver.py program.
To support the library, an auxiliary class called rational has been created. This class represents rational numbers. Each object of the class has a numerator and a denominator, so to define an integer you assign it a denominator of 1. A rational object is defined as follows:
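A minimal sketch of what this might look like, assuming the constructor simply takes the numerator and the denominator in that order (check Simplex.py for the exact signature):
# assumed form: rational(numerator, denominator)
one_half = rational(1, 2)   # represents 1/2
three = rational(3, 1)      # an integer is written with denominator 1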
Step1: convertLineToRationalArray
This method receives a string containing a set of numbers separated by spaces and returns the numbers in a numpy array of rational elements. If it does not receive a string, it returns None. Examples:
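A hedged usage sketch, assuming the function has been imported from Simplex.py:
# "2 4 6" -> numpy array holding three rational objects
row = convertLineToRationalArray("2 4 6")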
Step2: rationalToFloat
This method receives a rational object and returns its value as a float, by dividing the numerator by the denominator. If the parameter passed is not a rational, it returns None.
Step3: * listPointsRationalToFloat*
This method receives a list of points whose coordinates are rational and returns the same list of points with the coordinates converted to float. If the input is not a list of rational points, it returns None. Examples:
Step4: isAListOfRationalPoints
This method receives a list and returns True if every element is a point (tuple) with rational coordinates, or False if any element is not a point with rational coordinates. If the parameter is not a list, it returns None. Examples:
Step5: isAListOfPoints
This method receives a list and returns True if every element is a point (tuple), or False if any element is not a point. If the parameter is not a list, it returns None. Examples:
Step6: isARationalMatrix
This method receives a numpy matrix or a two-dimensional numpy array and checks whether all of its elements are rational, returning True if they are and False otherwise. If it does not receive a numpy matrix or array, it returns None. Examples:
Step7: isARationalArray
This method receives a numpy array and checks whether all of its elements are rational, returning True if they are and False otherwise. If it does not receive a numpy matrix or array, it returns None. Examples:
Step8: Operaciones con matrices
determinant
This method receives a numpy matrix with rational components and returns its determinant. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. A two-dimensional numpy array is also accepted. Examples:
Step9: coFactorMatrix
This method receives a numpy matrix with rational components and returns its cofactor matrix. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. Examples:
Step10: adjMatrix
This method receives a numpy matrix with rational components and returns its adjugate matrix. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. Examples:
Step11: invertMatrix
This method receives a numpy matrix with rational components and returns its inverse. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. Examples:
Step12: initializeMatrix
This method receives a pair of dimensions and returns a numpy matrix of rational elements with value 0. If the values given are not integers, it returns None. Examples:
Step13: createRationalIdentityMatrix
This method receives a number and returns a numpy identity matrix with rational elements. If the value given is not an integer, it returns None. Examples:
Step14: multNumMatrix
This method receives a number as a rational and a numpy matrix with rational components, and returns the matrix obtained by multiplying the number by the given matrix. If the number is not a rational, or the matrix is not a numpy matrix with rational elements, it returns None. Examples:
Step15: twoMatrixEqual
This method receives two numpy matrices with rational components and returns True if they are equal or False if they are not. If an input is not a numpy matrix with rational elements, it returns None. Examples:
Step16: printMatrix
This method receives a numpy matrix with rational components and converts it to string format. If the input is not a numpy matrix with rational elements, it returns None. A two-dimensional numpy array is also accepted. Examples:
Step17: multMatrix
This method receives two numpy matrices with rational components and returns the matrix that results from multiplying them. If the number of columns of the first matrix does not equal the number of rows of the second, the matrices cannot be multiplied and it returns None. If an input is not a numpy matrix with rational elements, it returns None. Examples:
Step18: Método Simplex
variablesNoiteration
This method is used to compute the variables that are not in the current iteration. It receives a numpy matrix containing the constraints of the problem and a numpy array containing the variables that are already in the iteration (these variables do not need to appear in order in the array). The method works with integer, float and rational matrices. If the parameters are not correct, it returns None. If everything is correct, it returns a numpy array with the variables that are not in the iteration. Examples:
Step19: calcMinNoNan
This method is used to find the minimum value of a set of values. It receives a numpy array with the values, keeps only those that are rational, and computes their minimum. If the parameters are not correct, it returns None. If everything is correct, it returns that minimum, or None when there are no rational values. Examples:
Step20: calculateIndex
This method receives a numpy array and a value, and returns the position in the array of the first occurrence of that value. If the value does not appear in the array, it returns None. The method works with sets of integers and with sets of rationals. If the parameters are not correct, it returns None. Examples:
Step21: calculateBaseIteration
This method computes the basis of the iteration and returns it as a numpy matrix. It receives the matrix containing all of the constraints of the problem (without signs or resources) and the columns that form part of the iteration (they do not need to appear in order in the array). The matrix may contain integer or rational values. If the parameters are not correct, it returns None. Examples:
Step22: showBase
This method receives a numpy matrix with rational elements, assumed to be the basis of an iteration, together with the name to assign to it, and prints it on screen with the name it is given within the iteration (B). If the parameters are not correct, it returns None. Examples:
Step23: calculateIterationSolution
This method computes the solution of an iteration for its variables and returns it in a numpy array. It receives the basis of the iteration as a numpy matrix and the resource vector as a numpy array. The elements of the matrix and the array must be rational. If the parameters are not correct, it returns None. Examples:
Step24: showSolution
This method receives the solution of an iteration and displays it with the name assigned to it ("x"). The solution must be passed as a numpy column array with rational elements. If the parameters are not correct, it returns None. Examples:
Step25: calculateCB
This method computes the value of the objective-function vector for an iteration. It receives the columns of the iteration in one numpy array and the complete objective-function vector of the problem in another numpy array. If everything is correct, it returns a numpy array with the function vector for the given columns. If the parameters are not correct, it returns None. Examples:
Step26: showCB
This method receives a numpy array of rational elements containing the value of the function vector and simply prints it on screen with the corresponding name assigned to it ("CB"). If the parameters are not correct, it returns None. Examples:
Step27: calculateFunctionValueOfIteration
This method receives the solution of the iteration and the function vector for it, and returns a numpy matrix containing the value of the objective function for that iteration. The solution must be passed as a numpy column array (as shown in the example), and the function vector must be a numpy row array. Both arrays must contain rational elements. If the parameters are not correct, it returns None. Examples:
Step28: showFunctionValue
This method receives a numpy matrix containing the value of the objective function for the iteration and prints it on screen with its name ("z"). The method also works if the matrix is passed with rational elements. If the parameters are not correct, it returns None. Examples:
Step29: calculateYValues
This method computes the y values for an iteration. It receives the basis of the iteration as a numpy matrix, the full matrix containing all of the constraints of the problem (without signs or resources) as a numpy matrix, and the variables that do not belong to the iteration as a numpy array. The elements of both matrices must be rational. If all of the parameters are correct, it returns a numpy array with the value of each y for the iteration. If the parameters are not correct, it returns None. Examples:
Step30: showYValues
This method receives a numpy array containing the variables that do not belong to the iteration and the y values in a numpy array with rational elements, and prints them on screen with their name ("y" + the variable number). If the parameters are not correct, it returns None. Examples:
Step31: calculateZC
This method computes the values of the entry rule and returns them in a numpy array. It receives the complete objective-function vector in a numpy array; the variables that are not in the iteration in a numpy array; the function vector for the iteration in a numpy array; and finally the y values for the iteration in a numpy array. All of the arrays must contain rational elements, except the one with the variables that are not in the iteration. If the parameters are not correct (see the examples), it returns None. Examples:
Step32: showZCValues
This method receives the entry-rule values (Z_C) in one numpy array and the variables that do not belong to the iteration in another numpy array. If all of the parameters are correct, it prints the entry-rule values on screen with their associated name ("Z_C" + the variable number). The method works with both rational and integer elements. If the parameters are not correct (see the examples), it returns None. Examples:
Step33: thereIsAnotherIteration
This method receives the entry-rule values in a numpy array. It returns True if there is another iteration, -1 if there are infinitely many solutions, or False if there are no more iterations. The method works with both rational and integer elements. If the parameters are not correct (see the examples), it returns None. Examples:
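A sketch of the standard stopping rule this method implements for a maximisation problem (plain floats instead of rational objects; the exact sign conventions and edge cases live in Simplex.py itself): the algorithm continues while some entry-rule value z_j - c_j is negative, and a zero value for a non-basic variable signals alternative optima.
# Illustrative only -- not the library implementation
def there_is_another_iteration_sketch(z_minus_c):
    if any(v < 0 for v in z_minus_c):
        return True   # an improving (entering) variable exists
    if any(v == 0 for v in z_minus_c):
        return -1     # optimal, but alternative optima (infinitely many solutions) exist
    return False      # the current basis is the unique optimum

print(there_is_another_iteration_sketch([-2, 1, 3]))  # True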
Step34: showNextIteration
This method prints an explanation of the result given by the previous method. If it receives True, it shows the explanation for when the problem has not finished and there are more iterations; if it receives False, it shows the explanation for when the problem has finished; and if it receives -1, it shows the explanation for when there are infinitely many solutions. If it receives anything else, it returns None. Examples:
Step35: calculateVarWhichEnter
This method receives a numpy array containing the variables that are not in the iteration and another numpy array containing the entry-rule values. If the input parameters are correct, it returns the variable that should enter in the next iteration (the one with the minimum value). The method works with both rational and integer elements. If the parameters are not correct, it returns None. Examples:
Step36: showVarWhichEnter
This method receives the entering variable and prints it on screen, indicating that it is the variable that enters. If the parameter is not a number, it returns None. Examples:
Step37: calculateExitValues
This method receives the entry-rule values in a numpy array, the y values in another numpy array, and the solution of the iteration in a numpy column array. All of the array elements must be rational. If all of the parameters are given correctly, it returns the values of the exit rule. If the parameters are not correct (see the examples), it returns None. Examples:
Step38: showExitValues
This method receives the exit-rule values in a numpy array with rational elements and prints them on screen, together with the name they are given ("O") and the criterion used to choose the exit value (min). If it does not receive a numpy array, it returns None. Examples:
Step39: calculateO
This method computes the value of O for a set of exit values received as a numpy array; O is the minimum of the values received. Determining which values have a negative or zero denominator is done in the calculateExitValues method, so here the array received contains rational values and NaN. If every value is NaN, it returns None. If it does not receive a numpy array, it returns None. Examples:
Step40: showOValue
This method receives the value of O and simply prints it on screen with its associated name ("O"). If the parameter is not a number, it returns None. Examples:
Step41: calculateVarWhichExit
This method receives in a numpy array the variables or columns that belong to the iteration (they must appear ordered according to what is being done in the problem), and in another numpy array the exit-rule values, which must be rational or NaN. If the parameters are correct, it returns the variable that will leave in this iteration, or None if all of the values are NaN. If the parameter is not a numpy array, it returns None. Examples:
Step42: showVarWhichExit
This method receives the leaving variable as a parameter and prints it on screen, together with an indication that it is the variable that will leave in this iteration. If the parameter is not a number, it returns None. Examples:
Step43: showIterCol
This method receives a numpy array with the columns or variables of the iteration and simply prints them on screen, together with an indication that they are the variables of the iteration. If the parameters are not correct (see the examples), it returns None. Examples:
Step44: solveIteration
This method receives the complete constraint matrix of the problem (without signs or resources) as a numpy matrix, followed by three numpy arrays containing the resource vector, the value of all of the variables in the objective function, and the columns or variables of the current iteration. The elements of the matrix, the resources and the function vector must be rational. If all of the parameters are correct, it prints the development of the iteration on screen and finally returns the solution of the iteration, the value of the function for the iteration, the variable that would enter, the variable that would leave, and a value that indicates whether there would be more iterations (True), no more iterations (False), or infinitely many solutions (-1). If the parameters are not correct (see the examples), it returns None. Examples:
Step45: identityColumnIsInMatrix
This method receives a numpy matrix with rational elements and a number corresponding to the index of a column of the identity matrix. If all of the parameters are correct, it returns the index of the column of the given matrix where that identity-matrix column is found. If the indicated identity-matrix column is not in the matrix, it returns None. If the parameters are not correct (see the examples), it returns None. Examples:
Step46: variablesFirstIteration
This method receives a numpy matrix, which is the complete matrix of the problem and must have rational elements. If all of the parameters are correct, it computes which variables make up the first iteration of the problem (that is, where the columns of the identity matrix sit within the given matrix) and returns them in a numpy array. If any of the identity-matrix columns does not appear, it returns None in its position. If the parameters are not correct (see the examples), it returns None. Examples:
Step47: calculateColumnsOfIteration
This method receives the variable that will enter in the next iteration, the variable that will leave in the next iteration, and, in a numpy array, the variables of the previous iteration. If the parameters are correct, it returns the variables of the current iteration in a numpy array. If the parameters are not correct (see the examples), it returns None. Examples:
Step48: completeSolution
This method receives the variables of the iteration in a numpy array, the total number of variables of the problem, and the solution of the iteration in a numpy array with all of its elements rational. If all of the parameters are given correctly, it returns the complete solution, that is, the value of every variable for that iteration. If the parameters are not correct (see the examples), it returns None. Examples:
Step49: addIdentityColumns
This method receives a numpy matrix with rational elements and returns, in a numpy matrix, the identity-matrix columns that it is missing. If it already has all of the columns of the identity matrix, it returns an empty array. If it receives something that is not a numpy matrix, it returns None. Examples:
Step50: isStringList
This method receives a list and checks whether all of its elements are strings, in which case it returns True. If any element of the list is not a string, it returns False. It is mainly used to check that the input parameters of other methods are correct. If it does not receive a list, it returns None. Examples:
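For instance, a minimal sketch of both outcomes:

print(Simplex.isStringList(["<=", ">=", "="]))  # True: every element is a string
print(Simplex.isStringList(["<=", 3, "="]))     # False: 3 is not a string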
Step51: calculateArtificialValueInFunction
This method computes and returns the coefficient of the artificial variable for the objective function. Although, as we know, this value would be infinite and is added with a negative coefficient, it is enough for it to be larger than the sum of the absolute values of the coefficients already present in the function vector. The method works with both integer and rational values, but always returns a rational. If it receives something that is not a numpy array, it returns None. Examples:
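A sketch of the idea: for the vector below, the returned rational should exceed |2| + |-3| = 5 (the exact value returned is an assumption):

functionVector=np.array([rational(2,1),rational(-3,1)])
print(Simplex.calculateArtificialValueInFunction(functionVector))  # some rational greater than 5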
Step52: addArtificialVariablesToFunctionVector
This method receives a numpy array with rational elements containing the coefficients of the objective function (the function vector), and a number indicating how many artificial variables to add. If the parameters are entered correctly, it returns a numpy array containing the complete function vector with the coefficients of the artificial variables already appended. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step53: calculateWhichAreArtificialVariables
This method receives a numpy array containing the coefficients of the objective function with the artificial variables already included (in order), and a number representing how many artificial variables there are. If the parameters are correct, it returns which variables are the artificial ones. The method works with both rational and integer elements. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step54: checkValueOfArtificialVariables
This method receives a list containing the artificial variables of the problem and, in a numpy array with rational elements, its solution. If the parameters are entered correctly, the method checks whether any artificial variable takes a positive value and, if so, returns those variables in a list (if this happens, the problem has no solution). This method is somewhat special, since it does not follow the convention of the rest: it receives the artificial variables counted from 0 (in the first example, 4 and 5 are therefore the last two variables), whereas the variables it returns are counted from 1. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step55: omitComments
This method receives a list of strings and removes the entries that begin with the characters "//" or "#". For entries where these characters appear elsewhere in the string, it removes the substring from those characters onward. It returns the list with those occurrences removed; it is used to strip comments. If it receives something that is not a list, it returns None. Examples:
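A small sketch (how trailing whitespace is handled is an assumption):

lines=["max 2 3", "# a full-line comment", "2 3 <= 4 // a trailing note"]
print(Simplex.omitComments(lines))  # expected, roughly: ['max 2 3', '2 3 <= 4']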
Step56: proccessFile
This method receives a file as a parameter, which must contain a linear programming problem in the following format
Step57: convertFunctionToMax
This method receives a string containing the objective function of the problem in the following format
Step58: invertSign
This method receives a string containing a sign (it must be <, <=, >, >= or =) and returns its opposite sign as another string. If it does not receive a string as a parameter, it returns None. Examples:
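For example (only signs listed in the description are used; the expected outputs follow from it):

print(Simplex.invertSign("<="))  # expected: ">="
print(Simplex.invertSign(">"))   # expected: "<"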
Step59: negativeToPositiveResources
This method is used to turn negative resources into positive ones, since negative resources must not occur. To do so, it performs the necessary transformations and returns a numpy matrix with rational elements containing the constraints, a numpy array with rational elements containing the resources, and a list of strings with the sign of each constraint, all with the changes already applied. The input parameters are the same as the outputs it produces, but before the transformations: a numpy matrix, a numpy array and a list of strings. For resources that are already positive no transformation is performed and it simply returns what it received. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step60: convertToStandardForm
This method receives a numpy matrix with rational elements containing the constraints of the problem, a numpy array with rational elements containing the resource vector, a list of strings with the signs of the constraints, and a string containing the function in the format "max/min 2 -3". If all parameters are correct, the method returns the parameters it received transformed into standard form (the function is returned as a numpy array with rational elements, in its maximization form). If the parameters entered are not correct (see the examples), it returns None. Examples:
Step61: showStandarForm
This method receives a numpy matrix with rational elements (the coefficient matrix), a numpy array with rational elements (the resource vector) and a numpy array with rational elements (the vector of the function to optimize). All parameters are given in standard form and are displayed in a more visual format. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step62: solveProblem
This method solves the linear programming problem passed to it. It receives a numpy matrix with rational elements containing the constraints, without signs or resources; a numpy array with rational elements containing the resources; a list of strings containing the signs of the constraints; a string containing the function in the format "max/min 2 -3"; and a True or False value indicating whether the solution of the dual of the given problem should also be obtained. The method returns, in this order: the solution of the problem (the value of the variables), the value of the objective function for that solution, an explanation of the type of problem and, if True was passed as the last parameter, the values of the variables of the dual problem's solution. The problem does not need to be entered in standard form, since the method performs that transformation internally. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step63: dualProblem
This method receives a linear programming problem and returns its dual. It receives a numpy matrix with rational elements containing the constraints, without signs or resources; a numpy array with rational elements containing the resources; a list of strings containing the signs of the constraints; and a string containing the function in the format "max/min 2 -3". The method returns the dual problem in this order: a numpy matrix containing the constraints, without signs or resources; a numpy array containing the resources; a list of strings containing the signs of the constraints; and a string containing the function in the format "max/min 2 -3". The problem does not need to be entered in standard form (nor in symmetric maximization form), since the method performs that transformation internally. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step64: calculateSolutionOfDualProblem
This method receives the columns or variables of the last iteration of the problem in a numpy array, the function vector in its maximization form in a numpy array, and the initial matrix with the constraints of the problem as a numpy matrix. Both the matrix and the function must be in standard form. If the parameters are entered correctly, it returns the solution of the dual problem in a numpy array. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step65: Solución gráfica
convertToPlotFunction
This method turns a constraint into a function so that it can be plotted. It receives a numpy array containing the constraint (all coefficients must be rational), without sign or resource; a string containing the sign; a rational that is the resource; and a variable that will be the linspace used for the plot. Besides the function, it returns a string describing it. If the y coefficient of the constraint is 0, it returns a rational instead of a function. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step66: * showFunction*
This method receives a function and plots it. It takes a function (or a number, if the function is of the form y=n), a variable that will be the linspace used to plot it, and a string that will be the label assigned to the function. After running this method you must call plt.show(). If the parameters entered are not correct (see the examples), it returns None. Examples:
Step67: * eliminateRepeatedPoints*
This method receives a list of points (as tuples) and returns the same list with duplicate points removed. It is exact with integers and rational, but not with floats when the numbers have many decimal places, since it could treat, for example, 5.33333 and 5.33334 as two different numbers when they might be the same. If it does not receive a list, it returns None. Examples:
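For example, with integer coordinates (the order of the returned list is an assumption):

points=[(1,2),(1,2),(3,4),(3,4),(5,6)]
print(Simplex.eliminateRepeatedPoints(points))  # expected, roughly: [(1, 2), (3, 4), (5, 6)]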
Step68: * eliminatePoints*
This method receives two lists and returns a list with the elements of the first list that are not in the second. It can be used to remove points (tuples) or any other elements. As with the previous method, it is not exact with floats. If it does not receive two lists, it returns None. Examples:
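A quick sketch, following directly from the description:

print(Simplex.eliminatePoints([(1,2),(3,4),(5,6)],[(3,4)]))  # expected: [(1, 2), (5, 6)]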
Step69: calculatePointOfSolution
This method receives a numpy array with the coefficients of the function to optimize (in maximization form), a list of points whose coordinates are rational, and a rational with the optimized value of the objective function. The method returns the point that attains the given value. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step70: calculateSolution
This method receives a function to optimize as a string, in the format shown in the examples, and a set of points whose coordinates are rational. The method returns the optimized value of the function and which of the given points optimizes it. If the list contains no points, it returns None. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step71: intersectionPoint
This method computes the intersection point between two constraints of type "=". It receives two numpy arrays, whose components must be rational, containing the coefficients of the constraints, and the resource of each constraint as two rationals. If there is no intersection point between them, it returns None. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step72: eliminateNegativePoints
This method receives a list of points whose coordinates are rational and returns the list without the points that have negative coordinates. If it receives something that is not a list of rational points, it returns None. Examples:
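A short sketch with rational coordinates (only the first point has no negative coordinate):

points=[(rational(1,2),rational(3,4)),(rational(-1,2),rational(3,4)),(rational(2,1),rational(-5,1))]
print(Simplex.eliminateNegativePoints(points))  # expected: a list containing only the first point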
Step73: calculateAllIntersectionPoints
This method receives a numpy array of arrays with all the constraints, without signs or resources, and a numpy array with the resource of each constraint. The method returns, in a list, all the intersection points between the constraints and between the constraints and the positive coordinate axes. It also adds the point (0,0). If the parameters entered are not correct (see the examples), it returns None. Examples:
Step74: calculateNotBoundedIntersectionPoints
This method receives a numpy array of arrays with all the constraints, without signs or resources, a numpy array with the resource of each constraint, and the maximum x and y values to be plotted, as two rationals. The method returns, in a list, the intersection points between the constraints and the imaginary axes placed at those maximum plotted values. For example, if constX=3 and constY=4 are passed, it returns the intersection points between the constraints and the axes y=3 and x=4. It also adds the intersection point of those two hypothetical axes (in the previous example, the point (4,3)). If the parameters entered are not correct (see the examples), it returns None. Examples:
Step75: checkIfIsSolution
This method receives a constraint, with its coefficients in a numpy array, the solution to test as a tuple, the sign as a string and the resource as a number. The method returns True if the solution satisfies the constraint, and False if it does not. It works perfectly with integers and rational, but is not entirely exact with floats. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step76: calculateFeasibleRegion
This method receives a set of points in a list, a set of constraints in a numpy array without signs or resources, a numpy array with the resources, and a list of strings with the signs. The method returns the given points that satisfy all the constraints, i.e. those belonging to the feasible region. It works with rational as well as integers, although it is less exact with floats. If no point belongs to the feasible region, it returns an empty list. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step77: calculateMaxScale
This method receives a list of points and returns the maximum value of the x coordinate and of the y coordinate. It is used to determine the largest point that must be plotted. If it does not receive a list, it returns None. Examples:
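A sketch of a call; how the two maxima are packaged in the return value is an assumption:

points=[(1,5),(3,2),(2,7)]
print(Simplex.calculateMaxScale(points))  # largest x is 3, largest y is 7 for this list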
Step78: calculateMinScale
This method receives a list of points and returns the minimum value of the x coordinate and of the y coordinate. It is used to determine the smallest point that must be plotted. If it does not receive a list, it returns None. Examples:
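And the analogous sketch for the minima, under the same assumption about the return value:

points=[(1,5),(3,2),(2,7)]
print(Simplex.calculateMinScale(points))  # smallest x is 1, smallest y is 2 for this list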
Step79: checkIfPointInFeasibleRegion
This method receives a point as a tuple, a set of constraints in a numpy array without signs or resources, a numpy array with the resources, and a list of strings with the signs. The method returns True if the point satisfies all the constraints, i.e. belongs to the feasible region, and False otherwise. It works with rational as well as integers, although it is less exact with floats. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step80: calculateIntegerPoints
This method receives a set of constraints in a numpy array without signs or resources, a numpy array with the resources, a list of strings with the signs, and two tuples with the minimum and maximum points to plot. The method returns a list with all the integer points that belong to that feasible region and are smaller than the maximum point. All elements of the constraints, the resources and the tuples must be rational. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step81: centre
This method receives a list of points and returns the point at the centre of the polygon formed by those points. The coordinates of the points must be rational. If something other than a list of rational points is passed, it returns None. Examples:
Step82: isThePoint
This method receives a list of points whose coordinates are rational, a value that is the computed distance to the centre, and the centre of the points in the list. The method returns the point of the list whose distance to the centre equals the given value. If no point matches that distance, it returns None. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step83: calculateOrder
This method receives a list of points whose coordinates are rational and returns the same list of points, ordered clockwise. If something other than a list of rational points is entered, it returns None. Examples:
Step84: pointIsInALine
This method receives a point as a tuple, a constraint without sign or resource in a numpy array, and the resource as a number. The method returns True if the point lies on the line that the constraint represents in the plane; otherwise it returns False. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step85: deleteLinePointsOfList
This method receives a set of points in a list, a numpy array with a set of constraints without signs or resources, and a numpy array with the resources of those constraints. The method returns the list of points without those that lie on the line representing any of the given constraints. If the parameters entered are not correct (see the examples), it returns None. Examples:
Step86: showProblemSolution
This method solves the linear programming problem passed to it graphically. It receives a numpy matrix containing the constraints, without signs or resources; a numpy array containing the resources; a list of strings containing the signs of the constraints; a string containing the function in the format "max/min 2 -3"; and either False or a name, which determines whether the plot should be saved to a file with that name. The method shows the graphical solution as long as the problem has only 2 variables; otherwise it returns None. The problem does not need to be entered in standard form. If the parameters entered are not correct (see the examples), it returns None. Examples:
from PySimplex import Simplex
from PySimplex import rational
import numpy as np
number="2"
print(Simplex.convertStringToRational(number))
number="2/5"
print(Simplex.convertStringToRational(number))
# Si recibe algo que no es un string, devuelve None
number=2
print(Simplex.convertStringToRational(number))
Explanation: Simplex.py
En el siguiente tutorial, se van a ver todos los métodos con los que cuenta la librería Simplex.py. Por supuesto, una aplicación de muchos de ellos, siguiendo una secuencia, correcta, podría dar lugar a la resolución de un problema de programación lineal. Sin embargo, obtener una solcuión desde esta perspectiva, es algo mucho más largo y complejo, siendo mucho más fácil usar el programa SimplexSolver.py.
Para el uso de la librería, se ha creado una clase auxiliar llamada rational. Esta clase representa a los números racionales. Cada objeto de esa clase contará con un númerador y un denominador, de tal forma que si se quiere definir un número entero, habrá que asignarle denominador 1. La forma de definir un objeto rational es la siguiente:
rational(3,2) # Esto define el número 3/2
El tutorial se va a dividir en cuatro partes, las mismas en las que se divide la librería. La primera, muestra los métodos creados para realizar operaciones con racionales(muchos de ellos se utilizan simplemente para las comprobaciones de parámetros de entrada de otros métodos). La segunda parte, serán operaciones con matrices y arrays(tales como invertir una matriz), que han tenido que ser redefinidas para que puedan ser utilizadas con la clase rational. La tercera parte, son los métodos utilizados para alcanzar la solución mediante el método Simplex, y la cuarta, será la formada por aquellos métodos que permiten obtener la solución gráfica.
A continuación se exponen los métodos de la librería, con explicaciones y ejemplos de cada uno de ellos.
NOTA 1: Siempre que se hable de variables del problema, hay que considerar, que la primera variable será la 0, es decir x0.
NOTA 2: Los "imports" necesarios se realizan en la primera celda, para ejecutar cualquiera de las siguientes, sin errores, debe ejecutar primero la celda que contiene los "imports". Si realiza una ejecución en su propio entorno de programación, debe importar estas dos clases, para que los métodos se ejecuten sin errores(por favor, consulte en detalle el manual de instalación que hay en la misma localización que este manual):
from PySimplex import Simplex
from PySimplex import rational
import numpy as np
Operaciones con rational
convertStringToRational
Este método recibe un número en un string, y devuelve el número como un rational. Si no recibe un string, devuelve None. Ejemplos:
End of explanation
line="3 4 5"
print(Simplex.printMatrix((np.asmatrix(Simplex.convertLineToRationalArray(line)))))
line="3 4/5 5"
print(Simplex.printMatrix((np.asmatrix(Simplex.convertLineToRationalArray(line)))))
# Si se le pasa algo que no es un string, devuelve None
print(Simplex.convertLineToRationalArray(4))
Explanation: convertLineToRationalArray
Este método recibe un string, que contiene un conjunto de números separados por un espacio, y devuelve los números en un array de numpy con elementos rational.Si no recibe un string, devuelve None. Ejemplos:
End of explanation
a=rational(3,4)
Simplex.rationalToFloat(a)
a=rational(3,1)
Simplex.rationalToFloat(a)
# Si no se introduce un rational, devuelve None
a=3.0
print(Simplex.rationalToFloat(a))
Explanation: rationalToFloat
Este método recibe un objeto rational, y devuelve su valor en float. Lo que hace es realizar la división entre el númerador y el denominador. En caso de no pasar un rational como parámetro, devuelve None.
End of explanation
rationalList=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2)
,rational(4,5)),(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
Simplex.listPointsRationalToFloat(rationalList)
# Si recibe algo que no es una lista de puntos con coordenadas rational,devuelve None
rationalList=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]
print(Simplex.listPointsRationalToFloat(rationalList))
Explanation: listPointsRationalToFloat
Este método recibe una lista de puntos, cuyas coordenadas son rational, y devuelve la misma lista de puntos, pero con las coordenadas en float. En caso de no introducir una lista de rational, devuelve None. Ejemplos:
End of explanation
lis=[(rational(1,2),rational(5,7)),(rational(4,5),rational(4,6)),(rational(4,9),rational(9,8))]
Simplex.isAListOfRationalPoints(lis)
lis=[(rational(1,2),rational(5,7)),(4,rational(4,6)),(rational(4,9),rational(9,8))]
Simplex.isAListOfRationalPoints(lis)
# Si recibe algo que no es una lista devuelve None
lis=np.array([(rational(1,2),rational(5,7)),(4,rational(4,6)),(rational(4,9),rational(9,8))])
print(Simplex.isAListOfRationalPoints(lis))
Explanation: isAListOfRationalPoints
Este método recibe una lista, y devuelve True, si todos los elementos son puntos(tuplas)con coordenadas rational o False, si hay algún elemento que no es un punto cuyas coordenadas sean rational. En caso de no pasar una lista, devuelve None. Ejemplos:
End of explanation
# Si todos los elementos son puntos(tuplas), devuelve True
lis=[(3,4),(5,6),(7,8),(8,10)]
Simplex.isAListOfPoints(lis)
# Si recibe una lista cuyos elementos, no son todos puntos(tuplas), devuelve False
lis=[3,5,6,(6,7)]
Simplex.isAListOfPoints(lis)
# Si recibe algo que no es una lista devuelve None
print(Simplex.isAListOfPoints(3))
Explanation: isAListOfPoints
Este método recibe una lista, y devuelve True, si todos los elementos son puntos(tuplas) o False, si hay algún elemento que no es un punto. En caso de no pasar una lista, devuelve None. Ejemplos:
End of explanation
mat=np.matrix([[rational(1,2),rational(5,7)],[rational(5,8),rational(9,3)]])
Simplex.isARationalMatrix(mat)
mat=np.array([[rational(1,2),rational(5,7)],[rational(5,8),rational(9,3)]])
Simplex.isARationalMatrix(mat)
mat=np.matrix([[1,rational(5,7)],[rational(5,8),rational(9,3)]])
Simplex.isARationalMatrix(mat)
# Si recibe algo que no es una matriz o un array de numpy
mat=[rational(1,2),rational(5,7)]
print(Simplex.isARationalMatrix(mat))
Explanation: isARationalMatrix
Este método recibe una matriz de numpy o un array bidimensional de numpy, y comprueba si todos los elementos del mismo, son rational, en ese caso devuelve True. En otro caso devuelve False. Si no recibe una matriz o un array de numpy, devuelve None. Ejemplos:
End of explanation
arr=np.array([rational(1,2),rational(5,7),rational(4,5)])
Simplex.isARationalArray(arr)
arr=np.array([rational(1,2),6,rational(4,5)])
Simplex.isARationalArray(arr)
# Si recibe algo que no es una matriz o un array de numpy
arr=[rational(1,2),rational(5,7),rational(4,5)]
print(Simplex.isARationalArray(arr))
Explanation: isARationalArray
Este método recibe un array de numpy, y comprueba si todos los elementos del mismo, son rational, en ese caso devuelve True. En otro caso devuelve False. Si no recibe una matriz o un array de numpy, devuelve None. Ejemplos:
End of explanation
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
det=Simplex.determinant(matrix)
print(det)
# Si la matriz no es cuadrada, devuelve None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.determinant(matrix))
# También admite un array de numpy bidimensional
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.determinant(matrix))
# Si recibe algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None
print(Simplex.determinant(3))
Explanation: Operaciones con matrices
determinant
Este método recibe una matriz de numpy, con componentes rational, y devuelve el determinante de la matriz. La matriz debe ser cuadrada. Si se introduce algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None. También admite un array de numpy bidimensional.Ejemplos:
End of explanation
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
m=Simplex.coFactorMatrix(matrix)
print(Simplex.printMatrix(m))
# Si la matriz no es cuadrada, devuelve None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.coFactorMatrix(matrix))
# Si recibe algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.coFactorMatrix(matrix))
Explanation: coFactorMatrix
Este método recibe una matriz de numpy, con componentes rational, y devuelve la matriz de cofactores. La matriz debe ser cuadrada. Si se introduce algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None. Ejemplos:
End of explanation
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
m=Simplex.adjMatrix(matrix)
print(Simplex.printMatrix(m))
# Si la matriz no es cuadrada, devuelve None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.adjMatrix(matrix))
# Si recibe algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.adjMatrix(matrix))
Explanation: adjMatrix
Este método recibe una matriz de numpy, con componentes rational, y devuelve la matriz de adjuntos. La matriz debe ser cuadrada. Si se introduce algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None. Ejemplos:
End of explanation
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
m=Simplex.invertMatrix(matrix)
print(Simplex.printMatrix(m))
# Si la matriz no es cuadrada, devuelve None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.invertMatrix(matrix))
# Si recibe algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.invertMatrix(matrix))
Explanation: invertMatrix
Este método recibe una matriz de numpy, con componentes rational, y devuelve la matriz inversa. La matriz debe ser cuadrada. Si se introduce algo que no es una matriz cuadrada de numpy, con elementos rational, devuelve None. Ejemplos:
End of explanation
m=Simplex.initializeMatrix(3, 2)
print(Simplex.printMatrix(m))
# Si se introduce algo que no son enteros, devuelve None
print(Simplex.initializeMatrix(4.0,3.0))
Explanation: initializeMatrix
Este método recibe unas dimensiones y devuelve una matriz de numpy, con elementos rational,de valor 0. Si los valores introducidos no son enteros, devuelve None. Ejemplos:
End of explanation
m=Simplex.createRationalIdentityMatrix(3)
print(Simplex.printMatrix(m))
# Si se introduce algo que es un entero, devuelve None
print(Simplex.createRationalIdentityMatrix(4.0))
Explanation: createRationalIdentityMatrix
Este método recibe un número y devuelve una matriz identidad de numpy, con elementos rational. Si el valor introducido no es entero, devuelve None. Ejemplos:
End of explanation
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
num= rational(3,4)
m = Simplex.multNumMatrix(num, matrix)
print(Simplex.printMatrix(m))
# Si recibe algo que no es una matriz de numpy, con elementos rational, devuelve None
num = 4
print(Simplex.multNumMatrix(num, matrix))
Explanation: multNumMatrix
Este método recibe un número en forma rational y una matriz de numpy, con componentes rational, y devuelve la matriz del producto del número por la matriz introducida.Si se introduce algo que no es un rational como número o una matriz de numpy, con elementos rational,como matriz, devuelve None. Ejemplos:
End of explanation
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
Simplex.twoMatrixEqual(matrix1, matrix2)
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])
Simplex.twoMatrixEqual(matrix1, matrix2)
# Si las dimensiones no son iguales, devuelve False
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
Simplex.twoMatrixEqual(matrix1, matrix2)
# Si recibe algo que no es una matriz de numpy, con elementos rational, devuelve None
print(Simplex.twoMatrixEqual(matrix1, 3))
Explanation: twoMatrixEqual
Este método recibe dos matrices de numpy, con componentes rational, y devuelve True,si son iguales, o False, si no lo son. Si se introduce algo que no es una matriz de numpy, con elementos rational, devuelve None. Ejemplos:
End of explanation
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])
print(Simplex.printMatrix(matrix2))
# También admite un array de numpy bidimensional
matrix2=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])
print(Simplex.printMatrix(matrix2))
# Si recibe algo que no es una matriz de numpy o un array bidimensional, con elementos rational, devuelve None
print(Simplex.printMatrix(3))
Explanation: printMatrix
Este método recibe una matriz de numpy, con componentes rational, y la pasa a formato string.Si se introduce algo que no es una matriz de numpy, con elementos rational, devuelve None. También admite un array de numpy bidimensional. Ejemplos:
End of explanation
matrix1=np.matrix([[rational(4,7),rational(8,9),rational(2,5)],[rational(2,4),rational(3,4),rational(7,5)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
m=Simplex.multMatrix(matrix1, matrix2)
print(Simplex.printMatrix(m))
# Si el número de columnas de la primera matriz, y el número de filas de la segunda, no son iguales, devuelve None
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
print(Simplex.multMatrix(matrix1, matrix2))
# Si recibe algo que no es una matriz de numpy, con elementos rational, devuelve None
matrix1=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
matrix2=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
print(Simplex.multMatrix(matrix1, matrix2))
Explanation: multMatrix
Este método recibe dos matrices de numpy, con componentes rational, y devuelve la matriz resultado del producto de las dos matrices introducidas. Si el número de columnas de la primera matriz, y el número de filas de la segunda, no son iguales, las matrices no se pueden multiplicar y devuelve None. Si se introduce algo que no es una matriz de numpy, con elementos rational, devuelve None. Ejemplos:
End of explanation
matrix=np.matrix([[1,3,4,4,5],[12,45,67,78,9],[3,4,3,5,6]])
variablesIteration=np.array([1,3,4])
Simplex.variablesNoiteration(matrix,variablesIteration)
variablesIteration=np.array([3,4,1])
Simplex.variablesNoiteration(matrix,variablesIteration)
# EL método funciona con matrices con elementos rational
matrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],
[rational(4,1),rational(6,4),rational(9,2)]])
variablesIteration=np.array([3,4,1])
Simplex.variablesNoiteration(matrix,variablesIteration)
#Si le introduzco algo que no sea una matriz de numpy en el primer parámetro o algo que no sea un array de numpy en el segundo,me
#devuelve None
print(Simplex.variablesNoiteration(3,variablesIteration))
Explanation: Método Simplex
variablesNoiteration
Este método se utiliza para calcular las variables que no están en la iteración. Recibe como parámetro, una matrix numpy, que contiene las restricciones del problema y un array numpy, que contiene las variables que ya están en la iteración(estas variables no tienen porqué aparecer ordenadas en el array). El método funciona, con matrices de tipo entero, de tipo float y de tipo rational. En caso de que los parámetros introducidos no sean correctos, devolverá None. Si todo es correcto, devolverá array numpy, con las variables que no están en la iteración. Ejemplos:
End of explanation
setOfVal=np.array([rational(1,4),rational(4,7),rational(6,8),rational(6,4)])
print(Simplex.calcMinNoNan(setOfVal))
setOfVal=np.array([np.nan,rational(4,7),rational(6,8),rational(6,4)])
print(Simplex.calcMinNoNan(setOfVal))
#Si le paso un conjunto de valores, TODOS no rational, devuelve None
setOfVal=np.array([np.nan,np.nan,np.nan,np.nan])
print(Simplex.calcMinNoNan(setOfVal))
#Si le algo que no es array numpy, devuelve None
print(Simplex.calcMinNoNan(2))
Explanation: calcMinNoNan
Este método se utiliza para calcular cuál es el mínimo valor, de un conjunto de valores. Recibe un array de numpy, con los valores. El método selecciona aquellos valores que sean rational, y calcula el mínimo. En caso de que los parámetros introducidos no sean correctos, devolverá None. Si todo es correcto, devolverá el mínimo valor no negativo o None, en caso de que no haya valores rational. Ejemplos:
End of explanation
array=np.array([3,4,5,6,7,2,3,6])
value= 3
Simplex.calculateIndex(array,value)
#Si introduzco un valor que no está en el array, devuelve None
value=78
print(Simplex.calculateIndex(array,value))
# El método funciona también con rational
value=rational(4,7)
array=np.array([rational(1,4),rational(4,7),rational(6,8),rational(6,4)])
Simplex.calculateIndex(array,value)
#Si introduzco algo que no es un array en el primer parámetro o algo que no es un número en el segundo, devuelve None
print(Simplex.calculateIndex(4,value))
Explanation: calculateIndex
Este método recibe un array de numpy, y un valor, y devuelve la posición dentro del array donde se encuentra la primera ocurrencia de dicho valor. En caso de que dicho valor no aparezca en el array, se devolverá None. El método funciona con conjuntos de números enteros y con conjuntos de rational. En caso de que los parámetros introducidos no sean correctos, devolverá None.Ejemplos:
End of explanation
totalMatrix=np.matrix([[1,2,3,4,5],[2,6,7,8,9],[6,3,4,5,6]])
columnsOfIteration=np.array([1,2,0])
Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration)
# El método funciona también con matrices con elementos rational
columnsOfIteration=np.array([1,2,0])
totalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),
rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),
rational(3,1)]])
print(Simplex.printMatrix(Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration)))
# Si le paso más columnas de las que hay en la matriz total, me devolverá None
columnsOfIteration=np.array([0,1,2,3,4,5,6])
print(Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration))
# Si le introduzco algo que no sea una matriz de numpy en el primer parámetro o algo que no sea un array de numpy en el segundo
# ,me devuelve None
print(Simplex.calculateBaseIteration(4,columnsOfIteration))
Explanation: calculateBaseIteration
Este método calcula la base de la iteración, y la devuelve en una matriz numpy. Para ello, recibe la matriz que contiene todas las restricciones del problema(sin signo ni recursos), y las columnas que forman parte de la iteración(no tienen porqué aparecer ordenadas en el array). La matriz, puede ser de valores enteros o rational. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
base=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],
[rational(4,1),rational(6,4),rational(9,2)]])
Simplex.showBase(base,"B")
#Si se le pasa algo que no es una matriz de numpy con elementos rational en el primer parámetro, o un string en el segundo, me
# devuelve None
print(Simplex.showBase(3,"B"))
Explanation: showBase
Este método recibe una matriz numpy con elementos rational, que se supone que será la base de una iteración, acompañado del nombre que se le quiera asignar, y la muestra por pantalla, con el nombre que se le asigna (B), dentro de la iteración. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
base=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],
[rational(4,1),rational(6,4),rational(9,2)]])
resourcesVector=np.array([rational(2,1),rational(33,2),rational(52,8)])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateIterationSolution(base,resourcesVector))))
#Si le paso un vector de recursos, que tenga un longitud diferente al número de filas de la matriz, me duvuelve None
resourcesVector=np.array([rational(2,1),rational(33,2)])
print(Simplex.calculateIterationSolution(base,resourcesVector))
#Si le paso algo que no es una matriz de numpy de elementos rational en el primer parámetro o un array de numpy con elementos
# rational en el segundo, me devuelve None
print(Simplex.calculateIterationSolution(base,4))
Explanation: calculateIterationSolution
Este método calcula la solución de una iteración, para las variables de la misma, y la devuelve en un array de numpy. Para ello, recibe la base de la iteración, en una matriz numpy y también recibe el vector de recursos en un array de numpy. Los elementos de la matriz y el array, deben ser rational. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
sol=np.array([[rational(2,2)],[rational(5,3)],[rational(6,1)],[rational(7,8)]])
Simplex.showSolution(sol)
#Si le paso algo que no es un array numpy con elementos rational, me devuelve None
sol=np.array([[2],[5],[6],[7]])
print(Simplex.showSolution(sol))
Explanation: showSolution
Este método recibe la solución de una iteración, y la muestra con el nombre que se le asigna en ella ("x"). La solución deberá ser pasada en un numpy array en forma de columna con elementos rational. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
columnsOfIteration=np.array([0,2,3])
functionVector= np.array([0,1,2,3,5,5,6])
Simplex.calculateCB(columnsOfIteration,functionVector)
# El método también funciona con elementos rational
columnsOfIteration=np.array([0,2])
functionVector= np.array([rational(0,1),rational(2,3),rational(5,5)])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateCB(columnsOfIteration,functionVector))))
# Si meto más columnas de las que tiene el vector función, me devuelve None
columnsOfIteration=np.array([0,1,2])
functionVector= np.array([0,1])
print(Simplex.calculateCB(columnsOfIteration,functionVector))
# Si meto algo por parámetro que no es un array de numpy en cualquiera de los dos parámetros, me devuelve None
print(Simplex.calculateCB([0,1],functionVector))
Explanation: calculateCB
Este método calcula el valor del vector función, para una iteración. Para ello recibe en un array numpy, las columnas de la iteración, y en otro array numpy, el vector de función completo del problema. Si todo es correcto, se devuelve en un array numpy, el vector de la función para las columnas introducidas. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
CBValue= np.array([rational(0,1),rational(2,3),rational(5,5)])
Simplex.showCB(CBValue)
#Si se le pasa algo que no es un array numpy de elementos rational, devuelve None
CBValue= np.array([0,1,4,6])
print(Simplex.showCB(CBValue))
Explanation: showCB
Este método, recibe un array numpy de elementos rational, que contiene el valor del vector función, y simplemente lo muestra por pantalla, con el correspondiente nombre que se le asigna("CB"). En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
# La solución se debe pasar como un array en forma de columna
solution=np.array([[rational(2,1)],[rational(3,2)],[rational(2,5)]])
CB = np.array([rational(0,1),rational(2,3),rational(5,5)])
print(Simplex.printMatrix(Simplex.calculateFunctionValueOfIteration(solution,CB)))
#Si el tamaño de uno de los parámetros difiere del otro, devuelve None
solution=np.array([[rational(2,1)],[rational(3,2)],[rational(2,5)]])
CB = np.array([rational(0,1),rational(5,5)])
print(Simplex.calculateFunctionValueOfIteration(solution,CB))
#Si recibe algo que no es un array numpy con elementos rational en cualquiera de los dos parámetros, devuelve None
print(Simplex.calculateFunctionValueOfIteration(solution,3))
Explanation: calculateFunctionValueOfIteration
Este método recibe la solución de la iteración, y el vector de la función para la misma, y devuelve una matriz numpy que contiene el valor de la función para dicha iteración. Es necesario que la solución se pase como un array numpy en forma de columna(como muestra el ejemplo). El vector de la función debe ser un array de numpy, en forma de fila. Ambos arrays, deben ser de elementos rational. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
functionValue=np.matrix([34])
Simplex.showFunctionValue(functionValue)
# El método funciona también con matrices rational
functionValue=np.matrix([rational(34,1)])
Simplex.showFunctionValue(functionValue)
#En caso de recibir algo que no es una matriz numpy, devuelve None
functionValue=np.matrix([34])
print(Simplex.showFunctionValue(4))
Explanation: showFunctionValue
Este método recibe una matriz numpy que contiene la solución de la función, para la iteración, y la muestra por pantalla con su nombre("z"). El método funciona también si se pasa la matriz con elementos rational. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
variablesNoIteration=np.array([3,4])
iterationBase=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),
rational(1,3)], [rational(4,1),rational(6,4),rational(9,2)]])
totalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),
rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),
rational(3,1)]])
print(Simplex.printMatrix(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix)))
#Si el número de variables fuera de la iteración, es mayor que el número total de variables, se devuelve None
variablesNoIteration=np.array([0,1,2,3,4,5])
print(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix))
#Si el la base tiene más o menos filas, que la matriz total, devuelve None
variablesNoIteration=np.array([3,4])
iterationBase=np.matrix([[rational(6,7),rational(4,5),rational(3,1)], [rational(4,1),rational(6,4),rational(9,2)]])
totalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),
rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),
rational(3,1)]])
print(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix))
#Si se introduce algo que no sea una matriz numpy de rational en el segundo y tercer parámetro, o un array numpy en el primer
# parámetro, devuelve None
print(Simplex.calculateYValues(variablesNoIteration,4,totalMatrix))
Explanation: calculateYValues
Este método calcula los valores de y, para una iteración. Para ello recibe la base de la iteración en una matriz numpy, la matriz total que contiene todas las restricciones del problema (sin signo, ni recursos) en una matriz numpy y las variables que no pertenecen a la iteración, en un array numpy. Los elementos de ambas matrices, deben ser rational. Si todos los parámetros introducidos son correctos, se devuelve en un array de numpy los valores, de cada una de las y para la iteración. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
variablesNoIteration=np.array([1,3])
y = np.array([[rational(2,3),rational(4,6)],[rational(3,2),rational(4,1)]])
Simplex.showYValues(variablesNoIteration,y)
#Si se pasa algo que no sea un array numpy en cualquiera de los dos parámetros,siendo el segundo de elementos rational,
# devuelve None
print(Simplex.showYValues(690,y))
Explanation: showYValues
Este método recibe un array numpy que contiene las variables que no pertenecen a la iteración, y los valores de y en un array de numpy con elementos rational, y los muestra por pantalla con su nombre("y"+número de la variable). En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
functionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])
variablesNoIteration= np.array([0,2,3])
CB = np.array([rational(2,1),rational(0,1)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))))
# Si se le pasa algo que no es un array numpy en cualquiera de los parámetros, devuelve None
print(Simplex.calculateZC(89,variablesNoIteration,CB,y))
# Si el tamaño del vector de recursos para la iteración, es mayor que el tamaño de los resultados de y, devuelve None
functionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])
variablesNoIteration= np.array([0,2,3])
CB = np.array([rational(2,1),rational(0,1),rational(3,2),rational(2,1),rational(4,3)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
# Si hay más variables fuera de la iteración que variables en el vector de función total,se devuelve None
functionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])
variablesNoIteration= np.array([0,1,2,3,4,5,6])
CB = np.array([rational(2,1),rational(0,1)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
# Si el tamaño del vector función para la iteración es mayor que el del vector total de la función, devuelve None:
functionVector= np.array([rational(1,1),rational(3,1)])
variablesNoIteration= np.array([0,1,2,3,4,5,6])
CB = np.array([rational(2,1),rational(0,1),rational(4,1),rational(5,1),rational(5,1)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
# Si se introduce algo que no es un array de numpy, devuelve None(el primer, tercer y cuarto parámetro deben tener elementos
# rational)
functionVector=np.array([3,-6,-3])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
Explanation: calculateZC
Este método calcula los valores de la regla de entrada, y los devuelve en un array de numpy. Para ello recibe el vector de la función completo, en un array de numpy; las variables que no están dentro de la iteración, en un array de numpy; el vector de la función para la iteración, en un array de numpy y por último, los valores de y para la iteración, en un numpy array. Todos los arrays deben tener elementos rational, excepto en el de las variables que no están en la iteración. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:
End of explanation
variablesNoIteration= np.array([0,2,3])
Z_C=np.array([3,-6,-3])
Simplex.showZCValues(variablesNoIteration,Z_C)
# También funciona con rational
variablesNoIteration= np.array([0,2,3])
Z_C=np.array([rational(3,5),rational(-6,2),rational(-3,1)])
Simplex.showZCValues(variablesNoIteration,Z_C)
# Si la longitud de los valores de la regla de entrada, es diferente del número de valores que hay en la iteración, devuelve None
Z_C=np.array([3,-6])
print(Simplex.showZCValues(variablesNoIteration,Z_C))
# Si lo que se introduce no es un array de numpy, en cualquiera de los dos parámetros, devuelve None
print(Simplex.showZCValues(3,Z_C))
Explanation: showZCValues
Este método recibe en un array de numpy los valores de la regla de entrada(Z_C) y en otro array de numpy,las variables que no pertenecen a la iteración. Si todos los parámetros son correctos, muestra por pantalla los valores de la regla de entrada con su nombre asociado("Z_C"+número de la variable). El método funciona tanto con elementos rational, como con elementos enteros.. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:
End of explanation
inputRuleValues=np.array([3,-6])
Simplex.thereIsAnotherIteration(inputRuleValues)
inputRuleValues=np.array([0,-6])
Simplex.thereIsAnotherIteration(inputRuleValues)
inputRuleValues=np.array([0,6])
Simplex.thereIsAnotherIteration(inputRuleValues)
inputRuleValues=np.array([1,6])
Simplex.thereIsAnotherIteration(inputRuleValues)
# El método funciona también con rational
inputRuleValues=np.array([rational(1,3),rational(-2,3)])
Simplex.thereIsAnotherIteration(inputRuleValues)
#Si se le pasa algo que no sea un array de numpy, devuelve None
print(Simplex.thereIsAnotherIteration(2))
Explanation: thereIsAnotherIteration
Este método recibe los valores de la regla de entrada en un array de numpy. Devuelve True, si hay otra iteración; -1, si hay infinitas soluciones o False, si no hay más iteraciones. El método funciona tanto con elementos rational, como con elementos enteros. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:
End of explanation
Simplex.showNextIteration(True)
Simplex.showNextIteration(False)
Simplex.showNextIteration(-1)
# Si recibe algo distinto a True,False o -1, devuelve None
print(Simplex.showNextIteration(-2))
Explanation: showNextIteration
Este método muestra mediante una explicación, cuál es la solución dada por el método anterior. Si recibe True, muestra la explicación para cuando el problema no ha terminado, y hay más iteraciones; si recibe False, muestra la expliación para cuando el problema ha terminado y si recibe -1, muestra la explicación para cuando hay infinitas soluciones. En caso de que reciba algo distinto a esto, devuelve None. Ejemplos:
End of explanation
variablesNoIteration=np.array([0,2,3])
inputRuleValues=np.array([3,-6,-3])
Simplex.calculateVarWhichEnter(variablesNoIteration,inputRuleValues)
# El método también funciona con elementos rational
variablesNoIteration=np.array([0,2,3])
inputRuleValues=np.array([rational(3,9),rational(-6,2),rational(-3,2)])
Simplex.calculateVarWhichEnter(variablesNoIteration,inputRuleValues)
# Si se recibe algo que no es un array de numpy en cualquiera de los dos parámetros, devuelve None
print(Simplex.calculateVarWhichEnter(variablesNoIteration,5))
Explanation: calculateVarWhichEnter
Este método recibe un array de numpy que contiene las variables que no están en la iteración, y otro array de numpy que contiene los valores de la regla de entrada. Si los parámetros de entrada son correctos, se devuelve la variable que debe entrar en la siguiente iteración(el que tenga el valor mínimo). El método funciona tanto con elementos rational, como con elementos enteros. En caso de que los parámetros introducidos no sean correctos, devolverá None. Ejemplos:
End of explanation
variableWhichEnter= 2
Simplex.showVarWhichEnter(variableWhichEnter)
#Si lo que recibe por parámetro no es un número, devuelve None
print(Simplex.showVarWhichEnter("adsf"))
Explanation: showVarWhichEnter
Este método recibe la variable que entra y la muestra por pantalla, indicando que esa es la variable que entra. En caso de no recibir un número por parámetro, devuelve None. Ejemplos:
End of explanation
inputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),
rational(5,1),rational(6,1)]])
sol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])
Simplex.calculateExitValues(inputRuleValues,yValues,sol)
#Si el número de valores de la regla de entrada es diferente que el número de valores de y, devuelve None
inputRuleValues=np.array([rational(2,1),rational(-3,1)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),
rational(5,1),rational(6,1)]])
sol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])
print(Simplex.calculateExitValues(inputRuleValues,yValues,sol))
#Si el número de valores de la regla de entrada es diferente que el número de valores de y, devuelve None
inputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),]])
sol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])
print(Simplex.calculateExitValues(inputRuleValues,yValues,sol))
#Si la longitud de la solución es menor que el número de valores de algún conjunto de y, devuelve None
inputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),
rational(5,1),rational(6,1)]])
sol=np.array([[rational(1,1)],[rational(0,1)]])
print(Simplex.calculateExitValues(inputRuleValues,yValues,sol))
#Si recibe algo que no sea un array de numpy con elementos rational en cualquiera de los parámetros, devuelve None
print(Simplex.calculateExitValues(inputRuleValues,66,sol))
Explanation: calculateExitValues
Este método recibe los valores de la regla de entrada en un array de numpy, los valores de y en otro array de numpy, y la solución de esa iteración en un array de numpy, en forma de columna. Todos los elementos de los arrays deben ser rational. Si todos los parámetros se introducen de forma correcta, se devuelven los valores de la regla de salida. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:
End of explanation
exitValues=np.array([rational(1,2),rational(-3,2),rational(0,1),rational(5,2)])
Simplex.showExitValues(exitValues)
#Si recibe algo que no es una array de numpy con elementos rational, devuelve None
exitValues=np.array([1,-3,0,5])
print(Simplex.showExitValues(exitValues))
Explanation: showExitValues
This method receives the exit-rule values in a numpy array of rational elements and prints them, together with their name ("O") and the criterion used to choose the exit value (min). If it does not receive a numpy array, it returns None. Examples:
End of explanation
exitValues=np.array([rational(1,3),rational(-3,2),rational(0,1),rational(5,4)])
print(Simplex.calculateO(exitValues))
#Si todos los valores recibidos son Nan, se omitirán y devolverá None
exitValues=np.array([np.nan,np.nan,np.nan,np.nan])
print(Simplex.calculateO(exitValues))
#Si recibe algo que no es una array de numpy con elementos rational o Nan, devuelve None
exitValues=np.array([-1,-3,-3,-5])
print(Simplex.calculateO(exitValues))
Explanation: calculateO
This method computes the value of O for a set of exit values received as a numpy array. This value is the minimum of the received values. Detecting which values have a negative or zero denominator is done in calculateExitValues, so this method receives an array containing rational values and Nan. If all the values are Nan, it returns None. If it does not receive a numpy array, it returns None. Examples:
End of explanation
O = 3
Simplex.showOValue(O)
O = rational(3,4)
Simplex.showOValue(O)
#If the parameter received is not a number, it returns None
print(Simplex.showOValue([4,3]))
Explanation: showOValue
This method receives the value of O and simply prints it together with its name ("O"). If the parameter is not a number, it returns None. Examples:
End of explanation
outputRuleValues=np.array([rational(1,2),rational(-3,-2),rational(0,1),rational(5,7)])
columnsOfIteration=np.array([0,2,3])
Simplex.calculateVarWhichExit(columnsOfIteration,outputRuleValues)
#Si los valores de la regla de salida, son todos negativos o divididos por 0, es decir, le pasamos Nan, devuelve None
outputRuleValues=np.array([np.nan,np.nan,np.nan,np.nan])
print(Simplex.calculateVarWhichExit(columnsOfIteration,outputRuleValues))
# Si recibe algo que no es un array de numpy en ambos parámetros, devuelve None
outputRuleValues=np.array([1,-3,0,5])
print(Simplex.calculateVarWhichExit(4,outputRuleValues))
Explanation: calculateVarWhichExit
This method receives a numpy array with the variables (columns) that belong to the iteration (they must appear in the order used by the problem), and another numpy array with the exit-rule values, which must be rational or Nan. If the parameters are valid, it returns the variable that leaves in this iteration, or None if all the values are Nan. If a parameter is not a numpy array, it returns None. Examples:
End of explanation
varWhichExit=4
Simplex.showVarWhichExit(varWhichExit)
# Si lo que recibe por parámetro no es un número, devuelve None.
print(Simplex.showVarWhichExit(np.array([3,4])))
Explanation: showVarWhichExit
This method receives the leaving variable and prints it, indicating that it is the variable that leaves in this iteration. If the parameter is not a number, it returns None. Examples:
End of explanation
columnsOfIteration=np.array([3,4,5])
Simplex.showIterCol(columnsOfIteration)
# Si recibe algo que no sea un array de numpy, devuelve None
print(Simplex.showIterCol(3))
Explanation: showIterCol
This method receives a numpy array with the columns (variables) of the iteration and simply prints them, indicating that they are the variables of the iteration. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([rational(2,1),rational(4,1),rational(1,1)])
columnsOfIteration=np.array([3,4,5])
Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration)
# Si hay distinto número de recursos(b), que restricciones, devuelve None
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([[rational(2,1)],[rational(4,1)]])
columnsOfIteration=np.array([3,4,5])
print(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))
# Si la función tiene diferente número de variables que las restricciones, devuelve None
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1)])
b = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])
columnsOfIteration=np.array([3,4,5])
print(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))
# Si el número de columnas o variables de la iteración, no se corresponde con el número de restricciones, devuelve None
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])
columnsOfIteration=np.array([3,4])
print(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))
# Si recibe por parámetro, algo que no es una matriz de numpy con elementos rational en el primer parámetro, o un array de numpy
# con elementos rational(excepto en las columnas de la iteración, que son valores enteros) en el resto, devuelve None.
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])
columnsOfIteration=np.array([3,4,5])
print(Simplex.solveIteration(4,b,functionVector,columnsOfIteration))
Explanation: solveIteration
This method receives the complete constraint matrix of the problem (without signs or resources) as a numpy matrix, followed by three numpy arrays containing the resource vector, the coefficient of every variable in the objective function, and the columns (variables) of the current iteration. The matrix elements, the resources and the function vector must be rational. If all the parameters are valid, it prints the development of the iteration and finally returns the solution of the iteration, the value of the objective function for that iteration, the variable that would enter, the variable that would leave, and a flag indicating whether there are more iterations (True), there are no more iterations (False), or the number of solutions is very large (-1). If the parameters are not valid (see the examples), it returns None. Examples:
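Conceptually, one call to solveIteration chains the per-step helpers documented earlier in this notebook. The outline below is only a sketch of that flow (not the library's code), using the method names already shown:
# 1. showIterCol                      -> display the variables (columns) of the current iteration
# 2. calculateVarWhichEnter           -> choose the entering variable (minimum entry-rule value)
# 3. calculateExitValues + calculateO -> evaluate the exit rule for the entering column
# 4. calculateVarWhichExit            -> choose the leaving variable
# 5. calculateColumnsOfIteration      -> build the set of variables for the next iteration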
End of explanation
matrix=np.matrix([[rational(3,2),rational(0,1),rational(1,1)],[rational(3,5),rational(4,5),rational(0,1)],[rational(5,6),
rational(7,8),rational(0,1)]])
column=0
'''Se busca la columna 0 de la matriz identidad: [[1],
[0],
[0]]'''
Simplex.identityColumnIsInMatrix(matrix,column)
# Si la columna de la matriz identidad no está en la matriz, devuelve None
column=2
print(Simplex.identityColumnIsInMatrix(matrix,column))
# Si la columna pasada, aparece más de una vez, devolverá la primera
matrix=np.matrix([[rational(1,1),rational(0,1),rational(1,1)],[rational(0,1),rational(4,5),rational(0,1)],[rational(0,1),
rational(7,8),rational(0,1)]])
column=0
Simplex.identityColumnIsInMatrix(matrix,column)
# Si se pasa un número mayor o igual que el número de columnas que tiene la matriz, devuelve None
matrix=np.matrix([[rational(1,1),rational(0,1),rational(1,1)],[rational(0,1),rational(4,5),rational(0,1)],[rational(0,1),
rational(7,8),rational(0,1)]])
column=4
print(Simplex.identityColumnIsInMatrix(matrix,column))
# Si se pasa algo que no es una matriz de numpy con elementos rational en el primer parámetro o algo que no es un número en el
# segundo parámetro, devuelve None
print(Simplex.identityColumnIsInMatrix(matrix,"[2,3]"))
Explanation: identityColumnIsInMatrix
This method receives a numpy matrix with rational elements and a number corresponding to the index of a column of the identity matrix. If all the parameters are valid, it returns the index of the column of the given matrix where that identity column appears. If the requested identity column is not in the matrix, it returns None. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
totalMatrix=np.matrix([[rational(1,1),rational(2,1),rational(3,1),rational(4,1),rational(0,1)],[rational(0,1),rational(3,1),
rational(4,1),rational(7,1),rational(1,1)]])
Simplex.variablesFirstIteration(totalMatrix)
# En caso de que una de las columnas de la matriz identidad, no aparezca, devuelve None
totalMatrix=np.matrix([[rational(1,1),rational(2,1),rational(3,1),rational(4,1),rational(0,1)],[rational(1,1),rational(3,1),
rational(4,1),rational(7,1),rational(1,1)]])
Simplex.variablesFirstIteration(totalMatrix)
# En caso de que una columna de la matriz identidad aparezca más de una vez, solo devuelve la primera
totalMatrix=np.matrix([[rational(1,1),rational(1,1),rational(3,1),rational(4,1),rational(0,1)],[rational(0,1),rational(0,1),
rational(4,1),rational(7,1),rational(1,1)]])
Simplex.variablesFirstIteration(totalMatrix)
# Si recibe algo que no es una matriz de numpy de elementos rational, devuelve None
print(Simplex.variablesFirstIteration(4))
Explanation: variablesFirstIteration
This method receives a numpy matrix, which is the complete matrix of the problem and must contain rational elements. If the parameters are valid, it computes which variables belong to the first iteration of the problem (that is, where the columns of the identity matrix are located in the given matrix) and returns them in a numpy array. If one of the identity columns does not appear, it returns None in its position. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
variableWhichEnters=4
variableWhichExits=3
previousVariables=np.array([1,3,5])
Simplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,previousVariables)
# Si se intenta sacar una variable que no está, no saca nada
variableWhichEnters=4
variableWhichExits=6
previousVariables=np.array([1,3,5])
Simplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,previousVariables)
# Si se mete algo que no es un array de numpy en el tercer parámetro,o algo que no es un número en los dos primeros, devuelve
# None
print(Simplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,3))
Explanation: calculateColumnsOfIteration
This method receives the variable that enters in the next iteration, the variable that leaves, and a numpy array with the variables of the previous iteration. If the parameters are valid, it returns a numpy array with the variables of the current iteration. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
variablesOfLastIter=np.array([2,3,4])
numberOfVariables=6
iterationSolution=np.array([rational(4,1),rational(6,4),rational(7,3)])
print(Simplex.printMatrix(Simplex.completeSolution(variablesOfLastIter,numberOfVariables,iterationSolution)))
# Si el número de variables de la última iteración es diferente que la longitud de la solución, devuelve None
variablesOfLastIter=np.array([3,4])
numberOfVariables=6
iterationSolution=np.array([rational(4,1),rational(6,4),rational(7,3)])
print(Simplex.completeSolution(variablesOfLastIter,numberOfVariables,iterationSolution))
# Si recibe algo que no es un array de numpy en el primer y tercer parámetro(este debe ser de elementos rational), o algo que
# no es un número en el segundo, devuelve None
print(Simplex.completeSolution(variablesOfLastIter,[9,9],iterationSolution))
Explanation: completeSolution
This method receives the variables of the iteration in a numpy array, the total number of variables of the problem, and the solution of the iteration in a numpy array whose elements are all rational. If all the parameters are valid, it returns the complete solution, that is, the value of every variable in that iteration. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
matrixInitial=np.matrix([[rational(3,2),rational(4,3),rational(6,3)],[rational(6,9),rational(7,3),rational(8,5)],[rational(4,3),
rational(5,4),rational(7,5)]])
print(Simplex.printMatrix(Simplex.addIdentityColumns(matrixInitial)))
# Si ya hay alguna columna de la matriz identidad, devuelve solo las que faltan
matrixInitial=np.matrix([[rational(3,4),rational(1,1),rational(6,3)],[rational(6,4),rational(0,1),rational(8,9)],[rational(4,5),
rational(0,1),rational(7,6)]])
print(Simplex.printMatrix(Simplex.addIdentityColumns(matrixInitial)))
# If all the identity matrix columns are already present, it returns an empty array
matrixInitial=np.matrix([[rational(0,1),rational(1,1),rational(0,1)],[rational(1,1),rational(0,1),rational(0,1)],[rational(0,1),
rational(0,1),rational(1,1)]])
Simplex.addIdentityColumns(matrixInitial)
# Si se pasa algo que no es una matriz de numpy con elementos rational, devuelve None
print(Simplex.addIdentityColumns(4))
Explanation: addIdentityColumns
This method receives a numpy matrix with rational elements and returns, in a numpy matrix, the columns of the identity matrix that it is missing. If the matrix already contains all the identity columns, it returns an empty array. If it receives something that is not a numpy matrix, it returns None. Examples:
End of explanation
lis=["hola","adios","hasta luego"]
Simplex.isStringList(lis)
lis=["hola",4,"hasta luego"]
Simplex.isStringList(lis)
# Si recibe algo que no es una lista, devuelve None
print(Simplex.isStringList(4))
Explanation: isStringList
This method receives a list and checks whether all of its elements are strings, in which case it returns True. If some element is not a string, it returns False. It is mainly used to validate the input parameters of other methods. If the input is not a list, it returns None. Examples:
End of explanation
array=np.array([2,3,4,5])
print(Simplex.calculateArtificialValueInFunction(array))
array=np.array([2,3,4,-5])
print(Simplex.calculateArtificialValueInFunction(array))
array=np.array([rational(2,5),rational(3,4),rational(4,9),rational(-5,7)])
print(Simplex.calculateArtificialValueInFunction(array))
#If it receives something that is not a numpy array, it returns None
print(Simplex.calculateArtificialValueInFunction(4))
Explanation: calculateArtificialValueInFunction
This method computes and returns the coefficient of the artificial variable for the objective function. Although in theory this value is infinite and is added with a negative sign, it is enough for it to be larger than the sum of the absolute values of the coefficients already in the function vector. The method works with integer values as well as with rational ones, but it always returns a rational. If it receives something that is not a numpy array, it returns None. Examples:
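In other words, the returned coefficient plays the role of the classic "big M" penalty: it is some rational $M$ with
$$ M > \sum_j \lvert c_j \rvert, $$
which, added to the objective with a negative sign, makes any positive artificial variable unattractive.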
End of explanation
vector=np.array([rational(3,1),rational(4,1),rational(5,1),rational(6,1)])
numOfArtificialVariables= 2
print(Simplex.printMatrix(np.asmatrix(Simplex.addArtificialVariablesToFunctionVector
(vector,numOfArtificialVariables))))
#Si se pasa algo que no es un array de numpy con elementos rational en el primer parámetro, o algo que no es un número en
# el segundo, devuelve None
print(Simplex.addArtificialVariablesToFunctionVector(vector,[2,3]))
Explanation: addArtificialVariablesToFunctionVector
This method receives a numpy array with rational elements containing the coefficients of the objective function (the function vector) and a number indicating how many artificial variables should be added. If the parameters are valid, it returns a numpy array with the complete function vector, with the coefficients of the artificial variables already added. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
vector=np.array([3,4,5,6,-20,-40])
numOfArtificialVariables= 2
Simplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)
# Si no se han incluido las variables artificiales, supone que son las últimas
vector=np.array([3,4,5,6])
numOfArtificialVariables= 2
Simplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)
vector=np.array([rational(3,2),rational(4,4),rational(5,6),rational(6,9),rational(-20,1),rational(-40,1)])
numOfArtificialVariables= 2
Simplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)
#Si se introduce algo que no es un array de numpy en el primer valor, o algo que no es un número en el segundo, devuelve None
numOfArtificialVariables= 2
print(Simplex.calculateWhichAreArtificialVariables(2,numOfArtificialVariables))
Explanation: calculateWhichAreArtificialVariables
This method receives a numpy array containing the coefficients of the objective function, with the artificial variables included (in order), and a number representing how many artificial variables there are. If the parameters are valid, it returns which variables are the artificial ones. The method works both with rational elements and with integers. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
varArtificial=[4,5]
solution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(3,1)],[rational(9,1)]])
Simplex.checkValueOfArtificialVariables(varArtificial,solution)
varArtificial=[4,5]
solution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(-3,1)],[rational(-9,1)]])
Simplex.checkValueOfArtificialVariables(varArtificial,solution)
varArtificial=[4,5]
solution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(0,1)],[rational(9,1)]])
Simplex.checkValueOfArtificialVariables(varArtificial,solution)
# Si recibe algo que no sea una lista en el primer parámetro o un array de numpy de elementos rational en el segundo, devuelve
# None
print(Simplex.checkValueOfArtificialVariables(5,solution))
Explanation: checkValueOfArtificialVariables
This method receives a list containing the artificial variables of the problem and, in a numpy array with rational elements, its solution. If the parameters are valid, the method checks whether any artificial variable takes a positive value and, in that case, returns those variables in a list (if this happens, the problem has no solution). This method is somewhat special because it does not follow the convention of the others: the artificial variables it receives are numbered starting from 0 (so in the first example, 4 and 5 are the last two), while the variables it returns are numbered starting from 1. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
listOfstrings=["//hola","2 3 4 <=4 //first","#hola","adios"]
Simplex.omitComments(listOfstrings)
# En caso de no recibir una lista de strings, devuelve None
print(Simplex.omitComments([5,3]))
Explanation: omitComments
This method receives a list of strings and removes the entries that start with the characters "//" or "#". For entries where these characters appear anywhere else in the string, it removes the substring from those characters onwards. It returns the list with those occurrences removed. It is used to strip comments. If it receives something that is not a list, it returns None. Examples:
End of explanation
# Introducir aquí la ruta del archivo a abrir
file = open('../Files/file2.txt','r')
problem=Simplex.proccessFile(file)
print(Simplex.printMatrix(problem[0]))
print(Simplex.printMatrix(np.asmatrix(problem[1])))
print(problem[2])
print(problem[3])
#En caso de que se le pase algo que no sea un archivo, devuelve None
print(Simplex.proccessFile(4))
Explanation: proccessFile
This method receives a file as a parameter, which must contain a linear programming problem in the format referenced below.
It returns, in this order, the constraint matrix as a numpy matrix, the resource vector as a numpy array, the constraint signs as a list of strings, and a string with the objective function to optimise. See the examples for how to open a file. If it is given something that is not a file, it returns None. Examples:
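The file layout referenced above is not reproduced in this dump (in the original notebook it was shown separately). Based on the other methods in this notebook (omitComments accepts lines such as "2 3 4 <=4 //first", and convertFunctionToMax expects a function line such as "max 2 -3"), a plausible input file could look like the sketch below; this exact layout is an assumption, not something confirmed by the source:
// constraints: coefficients, sign, resource
2 3 4 <= 4
1 -1 2 >= 6
max 2 -3 5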
End of explanation
function="max 2 -3"
print(Simplex.printMatrix(np.asmatrix(Simplex.convertFunctionToMax(function))))
function="min 2 -3\n"
print(Simplex.printMatrix(np.asmatrix(Simplex.convertFunctionToMax(function))))
# Si recibe algo que no es un string devuelve None
function="min 2 -3\n"
print(Simplex.convertFunctionToMax(3))
Explanation: convertFunctionToMax
This method receives a string containing the objective function of the problem in the following format:
max/min 2 -3
The method returns a numpy array of rational elements with the coefficients of the function in maximisation form, since that is how it is used in the standard form; therefore, if a minimisation function is given, the coefficients are returned with their signs flipped. If the input is not a string, it returns None. Examples:
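The sign flip for minimisation problems relies on the standard identity
$$ \min\, c^{T}x \;=\; -\max\,(-c^{T}x), $$
so returning the negated coefficients lets the rest of the solver work only with maximisation problems.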
End of explanation
previousSign="<"
Simplex.invertSign(previousSign)
previousSign=">"
Simplex.invertSign(previousSign)
previousSign="<="
Simplex.invertSign(previousSign)
previousSign=">="
Simplex.invertSign(previousSign)
previousSign="="
Simplex.invertSign(previousSign)
#Si introduzco algo que no sea un string, me devuelve None
previousSign=3
print(Simplex.invertSign(previousSign))
Explanation: invertSign
This method receives a string containing a sign (it must be <, <=, >, >= or =) and returns its opposite sign in another string. If the parameter is not a string, it returns None. Examples:
End of explanation
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(-4,1),rational(5,2)])
sign=["<=","<",">"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.printMatrix(std[0]))
print(Simplex.printMatrix(np.asmatrix(std[1])))
print(std[2])
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(4,1),rational(5,2)])
sign=["<=","<",">"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.printMatrix(std[0]))
print(Simplex.printMatrix(np.asmatrix(std[1])))
print(std[2])
# Si la longitud del vector de recursos, es diferente del número de filas de la matriz, devuelve None
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(-4,1)])
sign=["<=","<",">"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.negativeToPositiveResources(matrix,resources,sign))
# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz,
# devuelve None
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(-4,1),rational(5,2)])
sign=["<=","<"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.negativeToPositiveResources(matrix,resources,sign))
# Si se pasa por parámetro algo que no es una matriz de numpy con elementos rational en el primer parámetro, algo que no es un
# array de numpy con elementos rational en el segundo, o algo que no es una lista de strings, en el tercero,devuelve None
resources=np.array([1,-4,5])
sign=["<=","<",">"]
print(Simplex.negativeToPositiveResources(matrix,resources,sign))
Explanation: negativeToPositiveResources
This method is used to turn negative resources into positive ones, since negative resources must not occur. To do so, it performs the necessary transformations and returns a numpy matrix with rational elements containing the constraints, a numpy array with rational elements containing the resources, and a list of strings with the sign of each constraint, with all the changes already applied. The input parameters are the same as the outputs it produces, but before the transformations, that is, a numpy matrix, a numpy array and a list of strings. For resources that are already positive, no transformation is applied and the input is returned unchanged. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
sign=["<=",">="]
function="min -2 -3 -4 "
std=Simplex.convertToStandardForm(matrix,resources,sign,function)
print(Simplex.printMatrix(std[0]))
print(Simplex.printMatrix(np.asmatrix(std[1])))
print(std[2])
print(Simplex.printMatrix(np.asmatrix(std[3])))
# Si la longitud del vector de recursos, es diferente del número de filas de la matriz, devuelve None
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1),rational(52,1)])
sign=["<=",">="]
function="min -2 -3 -4 "
print(Simplex.convertToStandardForm(matrix,resources,sign,function))
# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz,
# devuelve None
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
sign=["<=",">=","="]
function="min -2 -3 -4 "
print(Simplex.convertToStandardForm(matrix,resources,sign,function))
# Si se pasa por parámetro algo que no es una matriz de numpy con elementos rational en el primer parámetro, algo que no es un
# array de numpy con elementos rational en el segundo,algo que no es una lista de strings en el tercero o algo que no es un
# string en el cuarto,devuelve None
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
function="min -2 -3 -4 "
print(Simplex.convertToStandardForm(matrix,resources,[4,0],function))
Explanation: convertToStandardForm
This method receives a numpy matrix with rational elements containing the constraints of the problem, a numpy array with rational elements containing the resource vector, a list of strings with the constraint signs, and a string with the function in the format "max/min 2 -3". If all the parameters are valid, the method returns the same parameters transformed into standard form (the function is returned as a numpy array of rational elements, already in maximisation form). If the parameters are not valid (see the examples), it returns None. Examples:
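As a reminder of what the transformation amounts to (stated here in general terms rather than as a description of the exact code path), every inequality is turned into an equality by adding a slack or surplus variable:
$$ a^{T}x \le b \;\Rightarrow\; a^{T}x + s = b, \qquad a^{T}x \ge b \;\Rightarrow\; a^{T}x - s = b, \qquad s \ge 0. $$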
End of explanation
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
function=np.array([rational(14,6),rational(25,2)])
Simplex.showStandarForm(matrix,resources,function)
# Si recibe algo que no es una matriz de numpy con elementos rational, en el primer parámetro, algo que no es un array de numpy
# con elementos rational en el segundo y tercer parámetro, devuelve None
function=np.array([3,4])
print(Simplex.showStandarForm(matrix,resources,function))
Explanation: showStandarForm
This method receives a numpy matrix with rational elements (the coefficient matrix), a numpy array with rational elements (the resource vector) and a numpy array with rational elements (the vector of the function to optimise). All parameters are given in standard form and are displayed in a more visual format. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
# Si se pasa False no devuelve la solución dual
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">="]
function="max 2 1"
solutionOfDualProblem=False
sol=Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem)
print(Simplex.printMatrix(np.asmatrix(sol[0])))
print(Simplex.printMatrix(sol[1]))
print(sol[2])
# If True is passed, it will also return the dual solution
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">="]
function="max 2 1"
solutionOfDualProblem=True
sol=Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem)
print(Simplex.printMatrix(np.asmatrix(sol[0])))
print(Simplex.printMatrix(sol[1]))
print(sol[2])
print(Simplex.printMatrix(np.asmatrix(sol[3])))
# Si la longitud del vector de recursos, es diferente del número de filas de la matriz, devuelve None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1)])
sign=["<=","<=",">="]
function="max 2 1"
solutionOfDualProblem=True
print(Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem))
# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz,
# devuelve None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">=","="]
function="max 2 1"
solutionOfDualProblem=True
print(Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem))
# Si se pasa por parámetro algo que no es una matriz de numpy con elementos rational en el primer parámetro, algo que no es un
# array de numpy con elementos rational en el segundo,algo que no es una lista de strings en el tercero,algo que no es un string
# en el cuarto o algo que no sea True o False en el quinto,devuelve None
matrix=np.matrix([[2,1],[1,-1],[5,2]])
resources=np.array([18,8,4])
sign=["<=","<=",">="]
function="max 2 1"
print(Simplex.solveProblem(matrix,resources,sign,function,True))
Explanation: solveProblem
This method solves the linear programming problem passed as a parameter. It receives a numpy matrix with rational elements containing the constraints, without signs or resources, a numpy array with rational elements containing the resources, a list of strings with the constraint signs, a string with the function in the format "max/min 2 -3", and a True/False value indicating whether the solution of the dual problem should also be returned. The method returns, in this order, the solution of the problem (the value of the variables), the value of the objective function for that solution, an explanation of the type of problem, and, if True was passed as the last parameter, the values of the variables of the dual solution. The problem does not need to be given in standard form, since the method performs that transformation internally. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">="]
function="max 2 1"
dual=Simplex.dualProblem(matrix,resources,sign,function)
print(Simplex.printMatrix(dual[0]))
print(Simplex.printMatrix(np.asmatrix(dual[1])))
print(dual[2])
print(dual[3])
# Si la longitud del vector de recursos, es diferente del número de filas de la matriz, devuelve None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1)])
sign=["<=","<=",">="]
function="max 2 1"
print(Simplex.dualProblem(matrix,resources,sign,function))
# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz,
# devuelve None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">=","<="]
function="max 2 1"
print(Simplex.dualProblem(matrix,resources,sign,function))
# If it receives something that is not a numpy matrix with rational elements in the first parameter, something that is not a
# numpy array with rational elements in the second, something that is not a list of strings in the third, or something that is
# not a string in the fourth, it returns None
matrix=np.matrix([[2,1,4],[6,-4,-7],[8,12,9]])
resources=np.array([[1],[8],[10]])
sign=["<=","<=",">="]
function="min 3 10 0"
print(Simplex.dualProblem(matrix,resources,sign,function))
Explanation: dualProblem
This method receives a linear programming problem and returns its dual. It receives a numpy matrix with rational elements containing the constraints, without signs or resources, a numpy array with rational elements containing the resources, a list of strings with the constraint signs, and a string with the function in the format "max/min 2 -3". The method returns the dual problem in this order: a numpy matrix with the constraints, without signs or resources, a numpy array with the resources, a list of strings with the constraint signs, and a string with the function in the format "max/min 2 -3". The problem does not need to be given in standard form (nor in symmetric maximisation form), since the method performs that transformation internally. If the parameters are not valid (see the examples), it returns None. Examples:
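For a problem already written in symmetric maximisation form, the dual returned corresponds to the usual pairing (the method performs the conversion to that form internally, as noted above):
$$ \max\, c^{T}x \ \text{s.t.}\ Ax \le b,\ x \ge 0 \qquad\Longleftrightarrow\qquad \min\, b^{T}y \ \text{s.t.}\ A^{T}y \ge c,\ y \ge 0. $$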
End of explanation
colsOfIteration=np.array([3,4,1])
totalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],
[rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),
rational(7,1),rational(1,1),rational(0,1)]])
function=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1)])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,
totalMatrix))))
# Si se pasa un número mayor de columnas(variables) del que hay en la matriz o en la función devuelve None
colsOfIteration=np.array([3,4,1,5,6,2])
totalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],
[rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),
rational(7,1),rational(1,1),rational(0,1)]])
function=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1)])
print(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))
# Si el número de columnas(variables) de la función es mayor que el de la matriz, devuelve None
colsOfIteration=np.array([3,4,1])
totalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],
[rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),
rational(7,1),rational(1,1),rational(0,1)]])
function=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1),rational(7,1)])
print(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))
# Si se pasa algo que no es un array de numpy en el primer o el segundo parámetro(este debe ser de elementos rational), o algo
# que no es una matriz de numpy con elementos rational en el tercero, devuelve None
colsOfIteration=np.array([3,4,1])
totalMatrix = np.matrix([[2,3,4,0,1],[3,4,7,0,0],[2,6,7,1,0]])
function=np.array([3,6,-7,0,0,4])
print(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))
Explanation: calculateSolutionOfDualProblem
This method receives the columns (variables) of the last iteration of the problem in a numpy array, the function vector in maximisation form in a numpy array, and the initial matrix with the constraints of the problem in a numpy matrix. Both the matrix and the function must be in standard form. If the parameters are valid, it returns the solution of the dual problem in a numpy array. If the parameters are not valid (see the examples), it returns None. Examples:
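A common way to read this computation (an assumption here, since only the inputs and outputs are documented) is the textbook formula that recovers the dual solution from the optimal basis $B$ of the last iteration:
$$ y^{T} = c_{B}^{T}\, B^{-1}. $$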
End of explanation
# Si se le pasa todo correcto, devuelve una función, y un string con la función
lineOfMatrix=np.array([rational(3,4),rational(2,1)])
sign="<="
resource=rational(4,1)
x = np.linspace(0, 10)
Simplex.convertToPlotFunction(lineOfMatrix, sign, resource, x)
# Si se le pasa una restricción con la segunda componente 0, devuelve un número
lineOfMatrix=np.array([rational(3,4),rational(0,1)])
sign="<="
resource=rational(4,1)
x = np.linspace(0, 10)
Simplex.convertToPlotFunction(lineOfMatrix, sign, resource, x)
# Si se le pasa una restricción que no tiene 2 componentes o tiene más de 2,devuelve None
lineOfMatrix=np.array([rational(3,4)])
print(Simplex.convertToPlotFunction(lineOfMatrix, sign,
resource, x))
# If it is given something that is not a numpy array of rational in the first parameter, something that is not a string in the
# second, something that is not a rational in the third, or something that is not a numpy array in the fourth, it returns None
print(Simplex.convertToPlotFunction(lineOfMatrix, sign,
4, x))
Explanation: Graphical solution
convertToPlotFunction
This method transforms a constraint into a function to be plotted. It receives a numpy array containing the constraint (all coefficients must be rational), without sign or resource, a string containing the sign, a rational that is the resource, and a variable that is the linspace used for plotting. Besides the function, it also returns a string describing it. If the y coefficient of the constraint is 0, it returns a rational instead of a function. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
% matplotlib inline
import matplotlib.pyplot as plt
function=lambda x: 3*x+1
x=np.linspace(0, 10)
label="3x+1 = 2"
Simplex.showFunction(function, x, label)
plt.show()
# Se le puede pasar un número si la función es de tipo y=n
x=np.linspace(0, 10)
label="3x+1 = 2"
Simplex.showFunction(4,x, label)
plt.show()
# Si se le pasa algo que no es una función o un número en el primer elemento, algo que no es un array de numpy en el segundo, o
# algo que no es un string en el tercero, devuelve None
print(Simplex.showFunction(np.array([3,4,5]),x, label))
Explanation: showFunction
This method receives a function and plots it. It receives a function, or a number if the function is of the form y=n, a variable that is the linspace used to plot it, and a string with the label to give to the function. After running this method it is necessary to call plt.show(). If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
# Como vemos en este caso elimina un punto que está repetido
seq=[(rational(2,1),rational(3,4)),(rational(6,1),rational(7,4)),(rational(2,1),rational(3,4)),(rational(5,2),rational(3,4)),]
Simplex.eliminateRepeatedPoints(seq)
# Con enteros funciona perfectamente
seq=[(3,1),(4,5),(4,5),(2,1)]
Simplex.eliminateRepeatedPoints(seq)
# Con float no funciona exactamente
seq=[(3.0,1.1),(4.0,5.0),(4.000001,5.0),(2.0,1.0)]
Simplex.eliminateRepeatedPoints(seq)
# Si no se introduce un lista, devuelve None
print(Simplex.eliminateRepeatedPoints(4))
Explanation: eliminateRepeatedPoints
This method receives a list of points (as tuples) and returns the same list with repeated points removed. With integers and rational it works exactly, but not with floats when the numbers have many decimal places, since it could treat, for example, 5.33333 and 5.33334 as two different numbers when they might be the same one. If it does not receive a list, it returns None. Examples:
End of explanation
# Con enteros funciona perfectamente
list1=[(3,1),(4,5),(6,7)]
list2=[(2,5),(4,5),(4,8)]
Simplex.eliminatePoints(list1, list2)
# Con rational funciona perfectamente
list1=[rational(5,1),rational(2,5),rational(6,1)]
list2=[rational(8,7),rational(2,5),rational(10,8)]
Simplex.eliminatePoints(list1, list2)
# Con float no funciona exactamente
list1=[(3.0,1.0),(4.0,5.0),(6.0,7.0)]
list2=[(2.0,5.0),(4.000001,5.0),(4.0,8.0)]
Simplex.eliminatePoints(list1, list2)
# Si recibe algo que no sean dos listas, devuelve None
print(Simplex.eliminatePoints(3, list2))
Explanation: eliminatePoints
This method receives two lists and returns a list with the elements of the first list that are not in the second. It can be used to remove points (tuples) or any other element. As with the previous method, it is not exact with floats. If it does not receive two lists, it returns None. Examples:
End of explanation
functionVector=np.array([rational(2,1),rational(3,1)])
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
solution = rational(19,4)
Simplex.calculatePointOfSolution(functionVector, points, solution)
functionVector=np.array([rational(2,1),rational(3,1)])
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
solution = rational(18,3)
print(Simplex.calculatePointOfSolution(functionVector, points, solution))
# Si recibe algo que no sea un array de numpy en el primer parámetro, una lista de puntos rational en el segundo, o un rational
# en el tercero, devuelve None
print(Simplex.calculatePointOfSolution(functionVector, points, 3.0))
Explanation: calculatePointOfSolution
This method receives a numpy array with the coefficients of the function to optimise (in maximisation form), a list of points whose coordinates are rational, and a rational with the optimised value of the objective function. The method returns the point that reaches the given value. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
function="max 2 3"
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
sol=Simplex.calculateSolution(function, points)
print(sol[0])
print(sol[1])
function="min 2 3"
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
sol=Simplex.calculateSolution(function, points)
print(sol[0])
print(sol[1])
# Si la lista esta vacía, devuelve None
print(Simplex.calculateSolution(function,[]))
# Si recibe algo que no es un string en el primer parámetro o una lista de puntos rational en el segundo devuelve None
print(Simplex.calculateSolution(function, 4))
Explanation: calculateSolution
This method receives, as a string, the function to optimise in the format shown in the examples, and a set of points whose coordinates are rational. The method returns the optimised value of the function and which of the given points achieves it. If the list has no points, it returns None. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
line1=np.array([rational(2,1),rational(3,4)])
line2=np.array([rational(8,3),rational(7,9)])
resource1=rational(3,1)
resource2=rational(4,1)
point=Simplex.intersectionPoint(line1, line2, resource1, resource2)
print("("+str(point[0])+","+str(point[1])+")")
# Si no hay punto de intersección, devuelve None
line1=np.array([rational(2,1),rational(3,4)])
line2=np.array([rational(2,1),rational(3,4)])
resource1=rational(3,1)
resource2=rational(4,1)
print(Simplex.intersectionPoint(line1, line2, resource1, resource2))
# Si se introduce algo que no es un array de rational de longitud 2 en los dos primeros parámetros, o algo que no es un rational,
# en los dos últimos, devuelve None
print(Simplex.intersectionPoint(3, line2, resource1, resource2))
Explanation: intersectionPoint
This method computes the intersection point of two constraints of type "=". It receives two numpy arrays, whose components must be rational, containing the coefficients of the constraints, and the resource of each constraint as two rationals. If there is no intersection point between them, it returns None. If the parameters are not valid (see the examples), it returns None. Examples:
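For two constraints $a_1x + b_1y = r_1$ and $a_2x + b_2y = r_2$, the intersection point returned here is the solution of the corresponding 2x2 system, which Cramer's rule gives as
$$ x = \frac{r_1 b_2 - r_2 b_1}{a_1 b_2 - a_2 b_1}, \qquad y = \frac{a_1 r_2 - a_2 r_1}{a_1 b_2 - a_2 b_1}, $$
with no solution (None) when the determinant $a_1 b_2 - a_2 b_1$ is zero, that is, when the two lines are parallel or coincident.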
End of explanation
points=[(rational(4,2),rational(-3,4)),(rational(5,4),rational(6,-8)),(rational(1,4),rational(6,1))]
Simplex.eliminateNegativePoints(points)
# Si recibe algo que no es una lista de puntos rational, devuelve None
points=[(4,2),(6,-8),(6,1)]
print(Simplex.eliminateNegativePoints(points))
Explanation: eliminateNegativePoints
This method receives a list of points whose coordinates are rational and returns the list without the points that have negative coordinates. If it receives something that is not a list of rational points, it returns None. Examples:
End of explanation
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])
resources=np.array([rational(3,1),rational(2,1),rational(4,1)])
Simplex.calculateAllIntersectionPoints(matrix, resources)
# Si el número de restricciones es distinto del de recursos, devuelve None
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])
resources=np.array([rational(3,1),rational(2,1)])
print(Simplex.calculateAllIntersectionPoints(matrix, resources))
# Si recibe algo que no sea un array de numpy, con elementos rational, devuelve None
print(Simplex.calculateAllIntersectionPoints(matrix, 4))
Explanation: calculateAllIntersectionPoints
This method receives a numpy array of arrays with all the constraints, without signs or resources, and a numpy array with the resource of each constraint. The method returns, in a list, all the intersection points between the constraints and between the constraints and the positive coordinate axes. It also adds the point (0,0). If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])
resources=np.array([rational(3,1),rational(2,1),rational(4,1)])
constX= rational(10,1)
constY= rational(8,1)
Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, constX, constY)
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)]])
resources=np.array([rational(3,1),rational(2,1),rational(4,1)])
constX= rational(10,1)
constY= rational(8,1)
print(Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, constX, constY))
# Si recibe algo que no sea un array de numpy, con elementos rational, en los dos primeros parámetros o algo que no sea un
# rational en los dos últimos, devuelve None
print(Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, np.array([rational(4,5)]), constY))
Explanation: calculateNotBoundedIntersectionPoints
This method receives a numpy array of arrays with all the constraints, without signs or resources, a numpy array with the resource of each constraint, and the maximum x and y values to be plotted, as two rationals. The method returns, in a list, the intersection points between the constraints and the imaginary axes placed at the maximum plotted values. For example, if constX=3 and constY=4 are given, it returns the intersection points between the constraints and the axes y=3 and x=4. It also adds the intersection point of the two hypothetical axes (the point (4,3) in the previous example). If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
# Si cumple la inecuación
inecuation=np.array([3,4])
solution=(1,1)
sign=">="
resource=6
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# Con rational también funciona
inecuation=np.array([rational(3,2),rational(4,3)])
solution=(rational(2,1),rational(1,1))
sign="<="
resource=rational(5,1)
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# Si la inecuación no se cumple
inecuation=np.array([3,4])
solution=(1,1)
sign="="
resource=6
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# No funciona exactamente con float
inecuation=np.array([3.0,4.0])
solution=(1.0,1.0)
sign="="
resource=7.00001
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# Si se introduce algo que no se un array de numpy de longitud 2 en el primer parámetro, una tupla en el segundo, un string en el
# tercero o un número en el último, devuelve None
print(Simplex.checkIfIsSolution(inecuation, solution, sign,np.array([3,4])))
Explanation: checkIfIsSolution
This method receives a constraint, with its coefficients in a numpy array, the candidate solution as a tuple, the sign as a string and the resource as a number. The method returns True if the solution satisfies the constraint and False if it does not. It works exactly with integers and rational, but it is not completely exact with floats. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
# El método funciona con valores rational, eliminando los puntos que no pertencen a la región factible
points=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),
(rational(-30,1),rational(1,2))]
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
Simplex.calculateFeasibleRegion(points, inecuations, resources, sign)
# El método funciona con valores enteros, eliminando los puntos que no pertencen a la región factible
points=[(0,5),(5,0),(10,12),(-30,1)]
inecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])
resources=np.array([50,32,40])
sign=["<=","<=","<="]
Simplex.calculateFeasibleRegion(points, inecuations, resources, sign)
# El número de restricciones tiene que ser igual que el de signos y el de recursos
points=[(0,5),(5,0),(10,12),(-30,1)]
inecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])
resources=np.array([50,32])
sign=["<=","<=","<="]
print(Simplex.calculateFeasibleRegion(points, inecuations, resources, sign))
# Si se introduce algo que no es una lista, en el primer parámetro, un array de numpy en el segundo y tercer parámetro, o una
# lista de strings, en el cuarto parámetro, devuelve None
inecuations=np.matrix([np.array([2,1]),np.array([1,-1]),np.array([5,2])])
print(Simplex.calculateFeasibleRegion(points, inecuations, resources, sign))
Explanation: calculateFeasibleRegion
This method receives a set of points in a list, a set of constraints in a numpy array, without signs or resources, a numpy array with the resources and a list of strings with the signs. The method returns the subset of the given points that satisfy all the constraints, that is, those that belong to the feasible region. It works with rational as well as with integers, and is less exact with floats. If no point belongs to the feasible region, it returns an empty list. If the parameters are not valid (see the examples), it returns None. Examples:
End of explanation
points=[(4,3),(5,6),(1,-2)]
Simplex.calculateMaxScale(points)
points=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),
(rational(-30,1),rational(1,2))]
Simplex.calculateMaxScale(points)
points=[(4.6,3.7),(5.0,6.5),(1.2,-2.5)]
Simplex.calculateMaxScale(points)
# Si recibe algo que no es una lista, devuelve None
print(Simplex.calculateMaxScale(3))
Explanation: calculateMaxScale
This method receives a list of points and returns the maximum value of the x coordinate and of the y coordinate. It is used to know the largest point that must be plotted. If it does not receive a list, it returns None. Examples:
End of explanation
points=[(4,3),(5,6),(1,-2)]
Simplex.calculateMinScale(points)
points=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),
(rational(-30,1),rational(1,2))]
Simplex.calculateMinScale(points)
points=[(4.6,3.7),(5.0,6.5),(1.2,-2.5)]
Simplex.calculateMinScale(points)
# Si recibe algo que no es una lista, devuelve None
print(Simplex.calculateMinScale(3))
Explanation: calculateMinScale
This method receives a list of points and returns the minimum value of the x coordinate and of the y coordinate. It is used to know the smallest point that must be plotted. If it does not receive a list, it returns None. Examples:
End of explanation
point=(rational(0,1),rational(5,1))
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)
point=(rational(-30,1),rational(1,2))
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)
# El método funciona con valores enteros, eliminando los puntos que no pertencen a la región factible
point=(0,5)
inecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])
resources=np.array([50,32,40])
sign=["<=","<=","<="]
Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)
# El número de restricciones tiene que ser igual que el de signos y el de recursos
points=(0,5)
inecuations=np.array([np.array([-7,10]),np.array([2,1])])
resources=np.array([50,32,40])
sign=["<=","<=","<="]
print(Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign))
# If the first parameter is not a tuple, the second and third are not numpy arrays, or the fourth
# is not a list of strings, it returns None
print(Simplex.checkIfPointInFeasibleRegion(4, inecuations, resources, sign))
Explanation: checkIfPointInFeasibleRegion
This method receives a point as a tuple, a set of constraints in a numpy array (without signs or resources), a numpy array with the resources, and a list of strings with the signs. The method returns True if the point satisfies all the constraints, i.e. belongs to the feasible region, and False if it does not. The method works with rational as well as with integers, but is not as exact with float. If the parameters passed in are not correct (see examples), it returns None. Examples:
End of explanation
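A minimal sketch of the check itself (hypothetical helper, assuming two-variable constraints; not the library's implementation):
def point_in_region_sketch(point, inequations, resources, signs):
    for row, b, s in zip(inequations, resources, signs):
        lhs = row[0] * point[0] + row[1] * point[1]
        ok = lhs <= b if s == "<=" else lhs >= b if s == ">=" else lhs == b
        if not ok:
            return False
    return True

point_in_region_sketch((0, 5), [(-7, 10), (2, 1), (8, -7)], [50, 32, 40], ["<=", "<=", "<="])  # -> True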
# Points computed with rational
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
scale1=(rational(0,1),rational(0,1))
scale=(rational(10,1),rational(10,1))
Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1,scale)
# The number of constraints has to equal the number of signs and the number of resources
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<="]
scale=(rational(10,1),rational(10,1))
print(Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1, scale))
# If the first and second parameters are not numpy arrays of rational, the third is not a list of strings,
# or the last parameter is not a tuple, it returns None
print(Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1, 4))
Explanation: calculateIntegerPoints
This method receives a set of constraints in a numpy array (without signs or resources), a numpy array with the resources, a list of strings with the signs, and two tuples with the minimum and maximum point to plot. The method returns a list with all integer points that belong to that feasible region and are smaller than the maximum point. All elements of the constraints, the resources and the tuples must be rational. If the parameters passed in are not correct (see examples), it returns None. Examples:
End of explanation
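The underlying idea can be sketched as enumerating the integer lattice between the minimum and maximum scale and keeping the feasible points; this hypothetical version assumes plain integers and "<=" constraints only, unlike the rational-based library code.
def integer_points_sketch(inequations, resources, lo, hi):
    kept = []
    for x in range(int(lo[0]), int(hi[0]) + 1):
        for y in range(int(lo[1]), int(hi[1]) + 1):
            if all(row[0] * x + row[1] * y <= b for row, b in zip(inequations, resources)):
                kept.append((x, y))
    return kept

len(integer_points_sketch([(-7, 10), (2, 1), (8, -7)], [50, 32, 40], (0, 0), (10, 10)))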
points=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2),rational(4,5)),
(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
point=Simplex.centre(points)
print("("+str(point[0])+","+str(point[1])+")")
# If it receives something that is not a list of rational points, it returns None
points=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]
print(Simplex.centre(points))
Explanation: centre
This method receives a list of points and returns the point at the centre of the polygon formed by those points. The coordinates of the points must be rational. If something other than a list of rational points is passed, it returns None. Examples:
End of explanation
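One plausible reading of "centre" here is the coordinate-wise mean of the vertices; a small float-based sketch (the library itself works with rational):
def centre_sketch(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

centre_sketch([(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)])  # -> (2.0, 1.0)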
listPoints=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2)
,rational(4,5)),(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
M = (1.811574074074074,1.1288359788359787)
value = 2.7299657524245156
point=Simplex.isThePoint(listPoints, value, M)
print("("+str(point[0])+","+str(point[1])+")")
# If the first parameter is not a list of rational points, the second is not a number, or the third is not a tuple,
# it returns None (check whether the centre accepts float)
print(Simplex.isThePoint(listPoints, value, 4))
Explanation: isThePoint
This method receives a list of points whose coordinates are rational, a value that is the computed distance to the centre, and the centre of the points in the list. The method returns the point of the list whose distance to the centre equals the given value. If no point matches the distance, it returns None. If the parameters passed in are not correct (see examples), it returns None. Examples:
End of explanation
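A hypothetical float-based sketch of the lookup: return the point whose Euclidean distance to the centre matches the given value (math.isclose instead of == because of floats):
import math

def is_the_point_sketch(points, value, centre):
    for p in points:
        if math.isclose(math.hypot(p[0] - centre[0], p[1] - centre[1]), value):
            return p
    return None

is_the_point_sketch([(0.0, 0.0), (3.0, 4.0)], 5.0, (0.0, 0.0))  # -> (3.0, 4.0)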
listPoints=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2),
rational(4,5)), (rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
Simplex.calculateOrder(listPoints)
# If it receives something that is not a list of points with rational coordinates, it returns None
listPoints=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]
print(Simplex.calculateOrder(listPoints))
Explanation: calculateOrder
This method receives a list of points whose coordinates are rational and returns the same list of points, but ordered clockwise. If something other than a list of rational points is passed, it returns None. Examples:
End of explanation
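One common way to obtain a clockwise order - shown here only as an assumed sketch, not necessarily how the library does it - is to sort the vertices by their angle around the centroid:
import math

def clockwise_sketch(points):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # larger angle first -> clockwise traversal starting near the top
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx), reverse=True)

clockwise_sketch([(0, 0), (0, 2), (2, 2), (2, 0)])  # -> [(0, 2), (2, 2), (2, 0), (0, 0)]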
# If the point is on the line, it returns True
point = (3,4)
line = np.array([3,2])
resource = 17
Simplex.pointIsInALine(point, line, resource)
# The method works with rational
point = (rational(3,1),rational(4,2))
line = np.array([rational(3,3),rational(2,1)])
resource = rational(7,1)
Simplex.pointIsInALine(point, line, resource)
# If the point is not on the line, it returns False
point = (3,4)
line = np.array([3,2])
resource = 10
Simplex.pointIsInALine(point, line, resource)
# The method is not exact with float
point = (3.0,4.0)
line = np.array([3.0,2.0])
resource = 17.00001
Simplex.pointIsInALine(point, line, resource)
# If the first parameter is not a tuple, the second is not a numpy array, or the third is not a number,
# it returns None
print(Simplex.pointIsInALine(point, 3, resource))
Explanation: pointIsInALine
This method receives a point as a tuple, a constraint without signs or resources in a numpy array, and the resource as a number. The method returns True if the point lies on the line that the constraint represents in the plane, otherwise it returns False. If the parameters passed in are not correct (see examples), it returns None. Examples:
End of explanation
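The check itself is a one-liner in the two-variable case (hypothetical sketch; exact for integers and rationals, whereas floats would need a tolerance):
def point_on_line_sketch(point, line, resource):
    return line[0] * point[0] + line[1] * point[1] == resource

point_on_line_sketch((3, 4), (3, 2), 17)  # -> True, since 3*3 + 2*4 == 17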
# Removes the last point, which lies on one of the lines
listPoints=[(rational(3,1),rational(5,7)),(rational(5,8),rational(6,2)),(rational(4,6),rational(8,9)),(rational(8,1),
rational(2,1))]
matrix=np.array([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
Simplex.deleteLinePointsOfList(listPoints, matrix, resources)
# If it receives something that is not a list of points with rational coordinates, or the second and third
# parameters are not numpy arrays with rational elements, it returns None
print(Simplex.deleteLinePointsOfList(listPoints, 4, resources))
Explanation: deleteLinePointsOfList
This method receives a set of points in a list, a numpy array with a set of constraints without signs or resources, and a numpy array with the resources of the constraints. The method returns the list of points, but without those points that lie on the line representing any of the given constraints. If the parameters passed in are not correct (see examples), it returns None. Examples:
End of explanation
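A hypothetical sketch of the filtering idea: drop every point that lies exactly on one of the constraint lines.
def delete_line_points_sketch(points, matrix, resources):
    return [p for p in points
            if not any(row[0] * p[0] + row[1] * p[1] == b for row, b in zip(matrix, resources))]

delete_line_points_sketch([(8, 2), (1, 1)], [(2, 1), (1, -1), (5, 2)], [18, 8, 0])  # -> [(1, 1)]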
%matplotlib inline
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
signs=["<=","<=",">="]
function="max 2 1"
save= False
Simplex.showProblemSolution(matrix, resources, signs, function, save)
# If the number of signs differs from the length of the resources vector or from the number of rows of the matrix,
# it returns None
matrix=np.matrix([[2,1],[1,-1],[5,2]])
resources=np.array([[18],[8]])
signs=["<=","<=",">="]
function="max 2 1"
save=False
print(Simplex.showProblemSolution(matrix, resources, signs, function, save))
# If the first parameter is not a numpy matrix with rational elements, the second is not a numpy array with
# rational elements, the third is not a list of strings, the fourth is not a string, or the fifth is neither
# False nor a string, it returns None
matrix=np.matrix([[2,1],[1,-1],[5,2]])
resources=np.array([[18],[8],[4]])
signs=["<=","<=",">="]
function="max 2 1"
print(Simplex.showProblemSolution(matrix, resources, signs, function, False))
Explanation: showProblemSolution
This method solves the linear programming problem passed to it graphically. For this, it receives a numpy matrix containing the constraints (without signs or resources), a numpy array containing the resources, a list of strings containing the signs of the constraints, a string containing the objective function in the format "max/min 2 -3", and either False or a name, which determines whether the image should be saved to a file with the given name. The method displays the graphical solution as long as the problem has only 2 variables; otherwise it returns None. It is not necessary to enter the problem in standard form. If the parameters passed in are not correct (see examples), it returns None. Examples:
End of explanation |
222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Multimodale Versuche der Alignierung historischer Texte
Andreas Wagner und Manuela Bragagnolo, Max-Planck-Institut für europäische Rechtsgeschichte, Frankfurt/M.
<wagner@rg.mpg.de> <bragagnolog@rg.mpg.de>
Table of Contents
<p><div class="lev1 toc-item"><a href="#Multimodale-Versuche-der-Alignierung-historischer-Texte" data-toc-modified-id="Multimodale-Versuche-der-Alignierung-historischer-Texte-1"><span class="toc-item-num">1 </span>Multimodale Versuche der Alignierung historischer Texte</a></div><div class="lev2 toc-item"><a href="#Introduction" data-toc-modified-id="Introduction-11"><span class="toc-item-num">1.1 </span>Introduction</a></div><div class="lev1 toc-item"><a href="#Preparations" data-toc-modified-id="Preparations-2"><span class="toc-item-num">2 </span>Preparations</a></div><div class="lev1 toc-item"><a href="#TF/IDF-" data-toc-modified-id="TF/IDF--3"><span class="toc-item-num">3 </span>TF/IDF </a></div><div class="lev1 toc-item"><a href="#Translations?" data-toc-modified-id="Translations?-4"><span class="toc-item-num">4 </span>Translations?</a></div><div class="lev2 toc-item"><a href="#New-Approach
Step3: Unlike in the previous case, where we had word files that we could export as plaintext, in this case Manuela has prepared a sample chapter with four editions transcribed in parallel in an office spreadsheet. So we first of all make sure that we have good UTF-8 comma-separated-value files, e.g. by uploading a csv export of our office program of choice to a CSV Linting service. (As a side remark, in my case, exporting with LibreOffice provided me with options to select UTF-8 encoding and choose the field delimiter and resulted in a valid csv file. MS Excel did neither of those.) Below, we expect the file at the following position
Step4: Then, we can go ahead and open the file in python's csv reader
Step5: And next, we read each line into new elements of four respective lists (since we're dealing with one sample chapter, we try to handle it all in memory first and see if we run into problems)
Step6: Actually, let's define two more list variables to hold information about the different editions - language and year of print
Step7: TF/IDF <a name="tfidf"></a>
In the previous (i.e. Solórzano) analyses, things like tokenization, lemmatization and stop-word lists filtering are explained step by step. Here, we rely on what we have found there and feed it all into functions that are ready-made and available in suitable libraries...
First, we build our lemmatization resource and "function"
Step8: Again, a quick test
Step9: And we are going to need the stopwords lists
Step10: (In contrast to simpler numbers that have been filtered out by the stopwords filter, I have left numbers representing years like "1610" in place.)
And, later on when we try sentence segmentation, we are going to need the list of abbreviations - words where a subsequent period does not necessarily mean a new sentence
Step11: Next, we should find some very characteristic words for each segment for each edition. (Let's say we are looking for the "Top 20".) We should build a vocabulary for each edition individually and only afterwards work towards a common vocabulary of several "Top n" sets.
Step12: Translations?
Maybe there is an approach to inter-lingual comparison after all. After a first unsuccessful try with conceptnet.io, I next want to try Babelnet in order to lookup synonyms, related terms and translations. I still have to study the API...
For example, let's take this single segment 19
Step13: And then first let's see how this segment compares in the different editions
Step14: Now we look up the "concepts" associated to those words in babelnet. Then we look up the concepts associated with the words of the present segment from another edition/language, and see if the concepts are the same.
But we have to decide on some particular editions to get things started. Let's take the Spanish and Latin ones
Step15: And then we can continue...
Step16: Actually I think this is somewhat promising - an overlap of four independent, highly meaning-bearing words, or of forty-something related concepts. At first glance, they should be capable of distinguishing this section from all the other ones. However, getting this result was made possible by quite a bit of manual tuning the stopwords and lemmatization dictionaries before, so this work is important and cannot be eliminated.
New Approach
Step17: ... lemmatize/stopwordize it---
Step18: With these preparations made, Hunaligning 1552 and 1556 reports "Quality 0.63417" for unlemmatized and "Quality 0.51392" for lemmatized versions of the texts for its findings which still contain many errors. Removing "
Step19: <div class="alert alertbox alert-success">Of course, in every set of documents, we will always find two that are similar in the sense of them being more similar to each other than to the other ones. Whether or not this actually *means* anything in terms of content is still up to scholarly interpretation. But at least it means that a scholar can look at the two documents and when she determines that they are not so similar after all, then perhaps there is something interesting to say about similar vocabulary used for different purposes. Or the other way round
Step23: In order to have a nicer overview over the many segments than is possible in this notebook, let's create a new html file listing some of the characteristics that we have found so far... | Python Code:
from typing import Dict
import lxml
from lxml import etree
document=etree.fromstring(
<TEI xmlns="http://www.tei-c.org/ns/1.0">
<text>
<body>
<div n="1">
<p>
... <milestone unit="number" n="9"/>aun que el amor de Dios ha de ser
grandissimo ..., como despues de. S. Tho.
<ref target="#nm-0406">b</ref><note xml:id="nm-0406"><p>1. Sec. quaestio
109. ar. 3.</p></note>, poco ha lo tratamos
<ref target="#nm-0407">c</ref><note xml:id="nm-0407"><p>in addit. ca.
Quoniam. de consec. disti. 1. nu. 10.</p></note>. Anadimos, (virtual)
<milestone unit="number" n="10"/>porque aquella basta, ...
<ref target="#nm-0408">d</ref><note xml:id="nm-0408"><p>in 4. dis. 14.
q. 1. art. 3.</p></note>, que pone exemplo ..., que Gabriel sigue
<ref target="#nm-0409">e</ref><note xml:id="nm-0409"><p>in 4. dis. 14.
q. 1. col. 12. & 13. & in. 3. di. 27. q. 1. co. 15.</p></note>.
<milestone unit="other" rendition="#asterisk"/> Y aun, aquel doctissimo,
... <ref target="#nm-040a">f</ref><note xml:id="nm-040a"><p>In Codice de
poeni. q. 2.</p></note>, y con razon, ..., el martyrio atribuya esto
<ref target="#nm-040b">g</ref><note xml:id="nm-040b"><p>Lib. 2. c. 16.
de natu. & gra.</p></note>, porque mas haze para esto el amor, ...
que lo que se padece <ref target="#nm-040c">h</ref><note xml:id="nm-040c">
<p>Arg. c. 13. 1. ad Corinth.</p></note>. Y puede ser que mas ame, ...,
como lo prueua bien Medina
<ref target="#nm-040d">i</ref><note xml:id="nm-040d"><p>in predi.
q. 2.</p></note>. Por lo qual largamente paresce quan lexos esta esto
dela opinion de Luthero<milestone unit="other" rendition="#asterisk"/>.
De lo dicho se collige la razon, ..., segun Syluestro
<ref target="#nm-040e">k</ref><note xml:id="nm-040e"><p>verb. Contritio.
q. 1.</p></note>. Diximos <milestone unit="number" n="11"/> (auer
pecado,) porque el arrepentimiento ...
</p>
</div>
</body>
</text>
</TEI>)
def segment(chapter: lxml.etree._Element) -> Dict[str, str]:
segments = {} # this will be returned
t = [] # this is a buffer
chap_label = str(chapter.get("n"))
sect_label = "0"
for element in chapter.iter():
if element.get("unit")=="number":
# milestone: fill and close the previous segment:
label = chap_label + "_" + sect_label
segments[label] = " ".join(t)
# reset buffer
t = []
# if there is text after the milestone,
# add it as first content to the buffer
if element.tail:
t.append(" ".join(str.replace(element.tail, "\n", " ").strip().split()))
# prepare for next labelmaking
sect_label = str(element.get("n"))
else:
if element.text:
t.append(" ".join(str.replace(element.text, "\n", " ").strip().split()))
if element.tail:
t.append(" ".join(str.replace(element.tail, "\n", " ").strip().split()))
# all elements are processed,
# add text remainder/current text buffer content
label = chap_label + "_" + sect_label
segments[label] = " ".join(t)
return segments
nsmap = {"tei": "http://www.tei-c.org/ns/1.0"}
xp_divs = etree.XPath("(//tei:body/tei:div)", namespaces = nsmap)
segmented = {}
divs = xp_divs(document)
segments = (segment(div) for div in divs)
for d in segments:
print(d)
document=etree.fromstring(
<TEI xmlns="http://www.tei-c.org/ns/1.0">
<text><body>
<div n="1">
<p>... <milestone unit="number" n="9"/>aa ab ac<ref target="#nm-0406">ad</ref><note xml:id="nm-0406"><p>ae af</p></note> ag
<ref target="#nm-0407">ah</ref><note xml:id="nm-0407"><p>ai aj</p></note> ak
<milestone unit="number" n="10"/>ba bb bc<ref target="#nm-0408">bd</ref><note xml:id="nm-0408"><p>be bf</p></note> bg
<ref target="#nm-0409">bh</ref><note xml:id="nm-0409"><p>bi bj</p></note><milestone unit="other" rendition="#asterisk"/> bk bl<ref target="#nm-040a">bm</ref><note xml:id="nm-040a"><p>bn bo</p></note> bp
<ref target="#nm-040b">bq</ref><note xml:id="nm-040b"><p>br bs</p></note> bt
<ref target="#nm-040c">bu</ref><note xml:id="nm-040c"><p>bv</p></note> bw<milestone unit="other" rendition="#asterisk"/>bx. by <milestone unit="number" n="11"/>ca cb ...</p>
</div>
</body></text>
</TEI>)
import lxml
from lxml import etree
def flatten(element: lxml.etree._Element):
t = ""
if element.text:
t += " ".join(str.replace(element.text, "\n", " ").strip().split())
if element.get("unit")=="number":
t += t + "+ms_" + str(element.get("n")) + "+"
if element.tail:
t += " ".join(str.replace(element.tail, "\n", " ").strip().split())
if element.getchildren():
t += " ".join((flatten(child)) for child in element.getchildren())
if element.tail and not(element.get("unit")=="number"):
t += " ".join(str.replace(element.tail, "\n", " ").strip().split())
# all elements are processed, add text remainder/current text buffer content
return t
nsmap = {"tei": "http://www.tei-c.org/ns/1.0"}
xp_divs = etree.XPath("(//tei:body/tei:div)", namespaces = nsmap)
divs = xp_divs(document)
segments = "".join(flatten(div) for div in divs)
print(segments)
Explanation: Multimodale Versuche der Alignierung historischer Texte
Andreas Wagner und Manuela Bragagnolo, Max-Planck-Institut für europäische Rechtsgeschichte, Frankfurt/M.
<wagner@rg.mpg.de> <bragagnolog@rg.mpg.de>
Table of Contents
<p><div class="lev1 toc-item"><a href="#Multimodale-Versuche-der-Alignierung-historischer-Texte" data-toc-modified-id="Multimodale-Versuche-der-Alignierung-historischer-Texte-1"><span class="toc-item-num">1 </span>Multimodale Versuche der Alignierung historischer Texte</a></div><div class="lev2 toc-item"><a href="#Introduction" data-toc-modified-id="Introduction-11"><span class="toc-item-num">1.1 </span>Introduction</a></div><div class="lev1 toc-item"><a href="#Preparations" data-toc-modified-id="Preparations-2"><span class="toc-item-num">2 </span>Preparations</a></div><div class="lev1 toc-item"><a href="#TF/IDF-" data-toc-modified-id="TF/IDF--3"><span class="toc-item-num">3 </span>TF/IDF </a></div><div class="lev1 toc-item"><a href="#Translations?" data-toc-modified-id="Translations?-4"><span class="toc-item-num">4 </span>Translations?</a></div><div class="lev2 toc-item"><a href="#New-Approach:-Use-Aligner-from-Machine-Translation-Studies-" data-toc-modified-id="New-Approach:-Use-Aligner-from-Machine-Translation-Studies--41"><span class="toc-item-num">4.1 </span>New Approach: Use Aligner from Machine Translation Studies </a></div><div class="lev1 toc-item"><a href="#Similarity-" data-toc-modified-id="Similarity--5"><span class="toc-item-num">5 </span>Similarity </a></div><div class="lev1 toc-item"><a href="#Word-Clouds-" data-toc-modified-id="Word-Clouds--6"><span class="toc-item-num">6 </span>Word Clouds </a></div>
## Introduction
This file is the continuation of preceding work. Previously, I have worked my way through a couple of text-analysing approaches - such as tf/idf frequencies, n-grams and the like - in the context of a project concerned with Juan de Solórzano Pereira's *Politica Indiana*. This can be seen [here](TextProcessing_Solorzano.ipynb).
In the former context, I got somewhat stuck when I was trying to automatically align corresponding passages of two editions of the same work ... where the one edition would be a **translation** of the other and thus we would have two different languages. In vector terminology, two languages means two almost orthogonal vectors and it makes little sense to search for similarities there.
The present file takes this up, tries to refine an approach taken there and to find alternative ways of analysing a text across several languages. This time, the work concerned is Martín de Azpilcueta's *Manual de confesores*, a work of the 16th century that has seen very many editions and translations, quite a few of them even by the work's original author and it is the subject of the research project ["Martín de Azpilcueta’s Manual for Confessors and the Phenomenon of Epitomisation"](http://www.rg.mpg.de/research/martin-de-azpilcuetas-manual-for-confessors) by Manuela Bragagnolo.
(There are a few DH-ey things about the project that are not directly of concern here, like a synoptic display of several editions or the presentation of the divergence of many actual translations of a given term. Such aspects are being treated with other software, like [HyperMachiavel](http://hyperprince.ens-lyon.fr/hypermachiavel) or [Lera](http://lera.uzi.uni-halle.de/).)
As in the previous case, the programming language used in the following examples is "python" and the tool used to get prose discussion and code samples together is called ["jupyter"](http://jupyter.org/). (A common way of installing both the language and the jupyter software, especially in windows, is by installing a python "distribution" like [Anaconda](https://www.anaconda.com/what-is-anaconda/).) In jupyter, you have a "notebook" that you can populate with text (if you want to use it, jupyter understands [markdown](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html) code formatting) or code, and a program that pipes a nice rendering of the notebook to a web browser as you are reading right now. In many places in such a notebook, the output that the code samples produce is printed right below the code itself. Sometimes this can be quite a lot of output and depending on your viewing environment you might have to scroll quite some way to get to the continuation of the discussion.
You can save your notebook online (the current one is [here at github](https://github.com/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb)) and there is an online service, nbviewer, able to render any notebook that it can access online. So chances are you are reading this present notebook at the web address [https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb](https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb).
A final word about the elements of this notebook:
<div class="alert alertbox alert-success">At some points I am mentioning things I consider to be important decisions or take-away messages for scholarly readers. E.g. whether or not to insert certain artefacts into the very transcription of your text, what the methodological ramifications of a certain approach or parameter are, what the implications of an example solution are, or what a possible interpretation of a certain result might be. I am highlighting these things in a block like this one here or at least in <font color="green">**green bold font**</font>.</div>
<div class="alert alertbox alert-danger">**NOTE:** As I am continually improving the notebook on the side of the source text, wordlists and other parameters, it is sometimes hard to keep the prose description in sync. So while the actual descriptions still apply, the numbers that are mentioned in the prose (as where we have e.g. a "table with 20 rows and 1.672 columns") might no longer reflect the latest state of the sources, auxiliary files and parameters and you should take these with a grain of salt. Best double check them by reading the actual code ;-)
I apologize for the inconsistency.</div>
# Preparations
End of explanation
sourcePath = 'DHd2019/cap6_align_-_2018-01.csv'
Explanation: Unlike in the previous case, where we had word files that we could export as plaintext, in this case Manuela has prepared a sample chapter with four editions transcribed in parallel in an office spreadsheet. So we first of all make sure that we have good UTF-8 comma-separated-value files, e.g. by uploading a csv export of our office program of choice to a CSV Linting service. (As a side remark, in my case, exporting with LibreOffice provided me with options to select UTF-8 encoding and choose the field delimiter and resulted in a valid csv file. MS Excel did neither of those.) Below, we expect the file at the following position:
End of explanation
import csv
sourceFile = open(sourcePath, newline='', encoding='utf-8')
sourceTable = csv.reader(sourceFile)
Explanation: Then, we can go ahead and open the file in python's csv reader:
End of explanation
import re
# Initialize a list of lists, or two-dimensional list ...
Editions = [[]]
# ...with four sub-lists 0 to 3
for i in range(3):
a = []
Editions.append(a)
# Now populate it from our sourceTable
sourceFile.seek(0) # in repeated runs, restart from the beginning of the file
for row in sourceTable:
for i, field in enumerate(row): # We normalize quite a bit here already:
p = field.replace('¶', ' ¶ ') # spaces around ¶
p = re.sub("&([^c])"," & \\1", p) # always spaces around &, except for &c
p = re.sub("([,.:?/])(\S)","\\1 \\2", p) # always a space after ',.:?/'
p = re.sub("([0-9])([a-zA-Z])", "\\1 \\2", p) # always a space between numbers and word characters
p = re.sub("([a-z]) ?\\(\\1\\b", " (\\1", p) # if a letter is repeated on its own in a bracketed
# expression it's a note and we eliminate the character
# from the preceding word
p = " ".join(p.split()) # always only one space
Editions[i].append(p)
print(str(len(Editions[0])) + " rows read.\n")
# As an example, see the first seven sections of the third edition (1556 SPA):
for field in range(len(Editions[2])):
print(Editions[2][field])
Explanation: And next, we read each line into new elements of four respective lists (since we're dealing with one sample chapter, we try to handle it all in memory first and see if we run into problems):
(Note here and in the following that in most cases, when the program is counting, it does so beginning with zero. Which means that if we end up with 20 segments, they are going to be called segment 0, segment 1, ..., segment 19. There is not going to be a segment bearing the number twenty, although we do have twenty segments. The first one has the number zero and the twentieth one has the number nineteen. Even for more experienced coders, this sometimes leads to mistakes, called "off-by-one errors".)
End of explanation
numOfEds = 4
language = ["PT", "PT", "ES", "LA"] # I am using language codes that later on can be used in babelnet
year = [1549, 1552, 1556, 1573]
Explanation: Actually, let's define two more list variables to hold information about the different editions - language and year of print:
End of explanation
lemma = [{} for i in range(numOfEds)]
# lemma = {} # we build a so-called dictionary for the lookups
for i in range(numOfEds):
wordfile_path = 'Azpilcueta/wordforms-' + language[i].lower() + '.txt'
# open the wordfile (defined above) for reading
wordfile = open(wordfile_path, encoding='utf-8')
tempdict = []
for line in wordfile.readlines():
tempdict.append(tuple(line.split('>'))) # we split each line by ">" and append
# a tuple to a temporary list.
lemma[i] = {k.strip(): v.strip() for k, v in tempdict} # for every tuple in the temp. list,
# we strip whitespace and make a key-value
# pair, appending it to our "lemma"
# dictionary
wordfile.close
print(str(len(lemma[i])) + ' ' + language[i] + ' wordforms known to the system.')
Explanation: TF/IDF <a name="tfidf"></a>
In the previous (i.e. Solórzano) analyses, things like tokenization, lemmatization and stop-word lists filtering are explained step by step. Here, we rely on what we have found there and feed it all into functions that are ready-made and available in suitable libraries...
First, we build our lemmatization resource and "function":
End of explanation
lemma[language.index("PT")]['diremos']
Explanation: Again, a quick test: Let's see with which "lemma"/basic word the particular wordform "diremos" is associated, or, in other words, what value our lemma variable returns when we query for the key "diremos":
End of explanation
stopwords = []
for i in range(numOfEds):
stopwords_path = 'DHd2019/stopwords-' + language[i].lower() + '.txt'
stopwords.append(open(stopwords_path, encoding='utf-8').read().splitlines())
print(str(len(stopwords[i])) + ' ' + language[i]
+ ' stopwords known to the system, e.g.: ' + str(stopwords[i][100:119]) + '\n')
Explanation: And we are going to need the stopwords lists:
End of explanation
abbreviations = [] # As of now, this is one for all languages :-(
abbrs_path = 'DHd2019/abbreviations.txt'
abbreviations = open(abbrs_path, encoding='utf-8').read().splitlines()
print(str(len(abbreviations)) + ' abbreviations known to the system, e.g.: ' + str(abbreviations[100:119]))
Explanation: (In contrast to simpler numbers that have been filtered out by the stopwords filter, I have left numbers representing years like "1610" in place.)
And, later on when we try sentence segmentation, we are going to need the list of abbreviations - words where a subsequent period does not necessarily mean a new sentence:
End of explanation
import re
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
numTopTerms = 20
# So first we build a tokenising and lemmatising function (per language) to work as
# an input filter to the CountVectorizer function
def ourLaLemmatiser(str_input):
wordforms = re.split('\W+', str_input)
return [lemma[language.index("LA")][wordform].lower().strip() if wordform in lemma[language.index("LA")] else wordform.lower().strip() for wordform in wordforms ]
def ourEsLemmatiser(str_input):
wordforms = re.split('\W+', str_input)
return [lemma[language.index("ES")][wordform].lower().strip() if wordform in lemma[language.index("ES")] else wordform.lower().strip() for wordform in wordforms ]
def ourPtLemmatiser(str_input):
wordforms = re.split('\W+', str_input)
return [lemma[language.index("PT")][wordform].lower().strip() if wordform in lemma[language.index("PT")] else wordform.lower().strip() for wordform in wordforms ]
def ourLemmatiser(lang):
if (lang == "LA"):
return ourLaLemmatiser
if (lang == "ES"):
return ourEsLemmatiser
if (lang == "PT"):
return ourPtLemmatiser
def ourStopwords(lang):
if (lang == "LA"):
return stopwords[language.index("LA")]
if (lang == "ES"):
return stopwords[language.index("ES")]
if (lang == "PT"):
return stopwords[language.index("PT")]
topTerms = []
for i in range(numOfEds):
topTermsEd = []
# Initialize the library's function, specifying our
# tokenizing function from above and our stopwords list.
tfidf_vectorizer = TfidfVectorizer(stop_words=ourStopwords(language[i]), use_idf=True, tokenizer=ourLemmatiser(language[i]), norm='l2')
# Finally, we feed our corpus to the function to build a new "tfidf_matrix" object
tfidf_matrix = tfidf_vectorizer.fit_transform(Editions[i])
# convert your matrix to an array to loop over it
mx_array = tfidf_matrix.toarray()
# get your feature names
fn = tfidf_vectorizer.get_feature_names()
# now loop through all segments and get the respective top n words.
pos = 0
for j in mx_array:
# We have empty segments, i.e. none of the words in our vocabulary has any tf/idf score > 0
if (j.max() == 0):
topTermsEd.append([("", 0)])
# otherwise append (present) lemmatised words until numTopTerms or the number of words (-stopwords) is reached
else:
topTermsEd.append(
[(fn[x], j[x]) for x in ((j*-1).argsort()) if j[x] > 0] \
[:min(numTopTerms, len(
[word for word in re.split('\W+', Editions[i][pos]) if ourLemmatiser(language[i])(word) not in stopwords]
))])
pos += 1
topTerms.append(topTermsEd)
Explanation: Next, we should find some very characteristic words for each segment for each edition. (Let's say we are looking for the "Top 20".) We should build a vocabulary for each edition individually and only afterwards work towards a common vocabulary of several "Top n" sets.
End of explanation
segment_no = 18
Explanation: Translations?
Maybe there is an approach to inter-lingual comparison after all. After a first unsuccessful try with conceptnet.io, I next want to try Babelnet in order to lookup synonyms, related terms and translations. I still have to study the API...
For example, let's take this single segment 19:
End of explanation
print("Comparing words from segments " + str(segment_no) + " ...")
print(" ")
print("Here is the segment in the four editions:")
print(" ")
for i in range(numOfEds):
print("Ed. " + str(i) + ":")
print("------")
print(Editions[i][segment_no])
print(" ")
print(" ")
print(" ")
# Build List of most significant words for a segment
print("Most significant words in the segment:")
print(" ")
for i in range(numOfEds):
print("Ed. " + str(i) + ":")
print("------")
print(topTerms[i][segment_no])
print(" ")
Explanation: And then first let's see how this segment compares in the different editions:
End of explanation
startEd = 1
secondEd = 2
Explanation: Now we look up the "concepts" associated to those words in babelnet. Then we look up the concepts associated with the words of the present segment from another edition/language, and see if the concepts are the same.
But we have to decide on some particular editions to get things started. Let's take the Spanish and Latin ones:
End of explanation
import urllib
import json
from collections import defaultdict
babelAPIKey = '18546fd3-8999-43db-ac31-dc113506f825'
babelGetSynsetIdsURL = "https://babelnet.io/v5/getSynsetIds?" + \
"targetLang=LA&targetLang=ES&targetLang=PT" + \
"&searchLang=" + language[startEd] + \
"&key=" + babelAPIKey + \
"&lemma="
# Build lists of possible concepts
top_possible_conceptIDs = defaultdict(list)
for (word, val) in topTerms[startEd][segment_no]:
concepts_uri = babelGetSynsetIdsURL + urllib.parse.quote(word)
response = urllib.request.urlopen(concepts_uri)
conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))
for rel in conceptIDs:
top_possible_conceptIDs[word].append(rel.get("id"))
print(" ")
print("For each of the '" + language[startEd] + "' words, here are possible synsets:")
print(" ")
for word in top_possible_conceptIDs:
print(word + ":" + " " + ', '.join(c for c in top_possible_conceptIDs[word]))
print(" ")
print(" ")
print(" ")
print(" ")
babelGetSynsetIdsURL2 = "https://babelnet.io/v5/getSynsetIds?" + \
"targetLang=LA&targetLang=ES&targetLang=PT" + \
"&searchLang=" + language[secondEd] + \
"&key=" + babelAPIKey + \
"&lemma="
# Build list of 10 most significant words in the second language
top_possible_conceptIDs_2 = defaultdict(list)
for (word, val) in topTerms[secondEd][segment_no]:
concepts_uri = babelGetSynsetIdsURL2 + urllib.parse.quote(word)
response = urllib.request.urlopen(concepts_uri)
conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))
for rel in conceptIDs:
top_possible_conceptIDs_2[word].append(rel.get("id"))
print(" ")
print("For each of the '" + language[secondEd] + "' words, here are possible synsets:")
print(" ")
for word in top_possible_conceptIDs_2:
print(word + ":" + " " + ', '.join(c for c in top_possible_conceptIDs_2[word]))
print(" ")
# calculate number of overlapping terms
values_a = set([item for sublist in top_possible_conceptIDs.values() for item in sublist])
values_b = set([item for sublist in top_possible_conceptIDs_2.values() for item in sublist])
overlaps = values_a & values_b
print("Overlaps: " + str(overlaps))
babelGetSynsetInfoURL = "https://babelnet.io/v5/getSynset?key=" + babelAPIKey + \
"&targetLang=LA&targetLang=ES&targetLang=PT" + \
"&id="
for c in overlaps:
info_uri = babelGetSynsetInfoURL + c
response = urllib.request.urlopen(info_uri)
words = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))
senses = words['senses']
for result in senses[:1]:
lemma = result['properties'].get('fullLemma')
resultlang = result['properties'].get('language')
print(c + ": " + lemma + " (" + resultlang.lower() + ")")
# what's left: do a nifty ranking
Explanation: And then we can continue...
End of explanation
from nltk import sent_tokenize
## First, train the sentence tokenizer:
from pprint import pprint
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktLanguageVars, PunktTrainer
class BulletPointLangVars(PunktLanguageVars):
sent_end_chars = ('.', '?', ':', '!', '¶')
trainer = PunktTrainer()
trainer.INCLUDE_ALL_COLLOCS = True
tokenizer = PunktSentenceTokenizer(trainer.get_params(), lang_vars = BulletPointLangVars())
for tok in abbreviations : tokenizer._params.abbrev_types.add(tok)
## Now we sentence-segmentize all our editions, printing results and saving them to files:
# folder for the several segment files:
outputBase = 'Azpilcueta/sentences'
dest = None
# Then, sentence-tokenize our segments:
for i in range(numOfEds):
dest = open(outputBase + '_' + str(year[i]) + '.txt',
encoding='utf-8',
mode='w')
print("Sentence-split of ed. " + str(i) + ":")
print("------")
for s in range(0, len(Editions[i])):
for a in tokenizer.tokenize(Editions[i][s]):
dest.write(a.strip() + '\n')
print(a)
dest.write('<p>\n')
print('<p>')
dest.close()
Explanation: Actually I think this is somewhat promising - an overlap of four independent, highly meaning-bearing words, or of forty-something related concepts. At first glance, they should be capable of distinguishing this section from all the other ones. However, getting this result was made possible by quite a bit of manual tuning the stopwords and lemmatization dictionaries before, so this work is important and cannot be eliminated.
New Approach: Use Aligner from Machine Translation Studies <a name="newApproach"/>
In contrast to what I thought previously, there is a couple of tools for automatically aligning parallel texts after all. After some investigation of the literature, the most promising candidate seems to be HunAlign. However, as this is a commandline tool written in C++ (there is LF Aligner, a GUI, available), it is not possible to run it from within this notebook.
First results were problematic, due to the different literary conventions that our editions follow: Punctuation was used inconsistently (but sentence length is one of the most relevant factors for aligning), as were abbreviations and notes.
My current idea is to use this notebook to preprocess the texts and to feed a cleaned up version of them to hunalign...
Coming back to this after a first couple of rounds with Hunalign, I have the feeling that the fact that literary conventions are so divergent probably means that aligning via sentence lengths is a bad idea in our case from the outset. Probably better to approach this with GMA or similar methods. Anyway, here are the first attempts with Hunalign:
End of explanation
# folder for the several segment files:
outputBase = 'Azpilcueta/sentences-lemmatized'
dest = None
# Then, sentence-tokenize our segments:
for i in range(numOfEds):
dest = open(outputBase + '_' + str(year[i]) + '.txt',
encoding='utf-8',
mode='w')
stp = set(stopwords[i])
print("Cleaned/lemmatized ed. " + str(i) + " [" + language[i] + "]:")
print("------")
for s in range(len(Editions[i])):
for a in tokenizer.tokenize(Editions[i][s]):
dest.write(" ".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]) + '\n')
print(" ".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]))
dest.write('<p>\n')
print('<p>')
dest.close()
Explanation: ... lemmatize/stopwordize it---
End of explanation
from sklearn.metrics.pairwise import cosine_similarity
similarities = pd.DataFrame(cosine_similarity(tfidf_matrix))
similarities[round(similarities, 0) == 1] = 0 # Suppress a document's similarity to itself
print("Pairwise similarities:")
print(similarities)
print("The two most similar segments in the corpus are")
print("segments", \
similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1), \
"and", \
similarities[similarities == similarities.values.max()].idxmax(axis=0)[ similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1) ].astype(int), \
".")
print("They have a similarity score of")
print(similarities.values.max())
Explanation: With these preparations made, Hunaligning 1552 and 1556 reports "Quality 0.63417" for unlemmatized and "Quality 0.51392" for lemmatized versions of the texts for its findings which still contain many errors. Removing ":" from the sentence end marks gives "Quality 0.517048/0.388377", but from a first impression with fewer errors. Results can be output in different formats, xls files are here and here.
Similarity <a name="DocumentSimilarity"/>
It seems we could now create another matrix replacing lemmata with concepts and retaining the tf/idf values (so as to keep a weight coefficient to the concepts). Then we should be able to calculate similarity measures across the same concepts...
The approach to choose would probably be the "cosine similarity" of concept vector spaces. Again, there is a library ready for us to use (but you can find some documentation here, here and here.)
However, this is where I have to take a break now. I will return to here soon...
End of explanation
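For readers who want the measure spelled out: cosine similarity is simply the dot product of two weight vectors divided by the product of their norms. A tiny self-contained illustration (plain NumPy, independent of the tfidf_matrix above; the function name and toy vectors are assumptions):
import numpy as np

def cos_sim(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cos_sim([1.0, 0.0, 2.0], [1.0, 1.0, 2.0])  # ~0.91, i.e. quite similar weight profiles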
from wordcloud import WordCloud
import matplotlib.pyplot as plt
# We make tuples of (lemma, tf/idf score) for one of our segments
# But we have to convert our tf/idf weights to pseudo-frequencies (i.e. integer numbers)
frq = [ int(round(x * 100000, 0)) for x in Editions[1][3]]
freq = dict(zip(fn, frq))
wc = WordCloud(background_color=None, mode="RGBA", max_font_size=40, relative_scaling=1).fit_words(freq)
# Now show/plot the wordcloud
plt.figure()
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
Explanation: <div class="alert alertbox alert-success">Of course, in every set of documents, we will always find two that are similar in the sense of them being more similar to each other than to the other ones. Whether or not this actually *means* anything in terms of content is still up to scholarly interpretation. But at least it means that a scholar can look at the two documents and when she determines that they are not so similar after all, then perhaps there is something interesting to say about similar vocabulary used for different purposes. Or the other way round: When the scholar knows that two passages are similar, but they have a low "similarity score", shouldn't that say something about the texts' rhetorics?</div>
Word Clouds <a name="WordClouds"/>
We can use a library that takes word frequencies like above, calculates corresponding relative sizes of words and creates nice wordcloud images for our sections (again, taking the fourth segment as an example) like this:
End of explanation
outputDir = "Azpilcueta"
htmlfile = open(outputDir + '/Overview.html', encoding='utf-8', mode='w')
# Write the html header and the opening of a layout table
htmlfile.write(<!DOCTYPE html>
<html>
<head>
<title>Section Characteristics</title>
<meta charset="utf-8"/>
</head>
<body>
<table>
)
a = [[]]
a.clear()
dicts = []
w = []
# For each segment, create a wordcloud and write it along with label and
# other information into a new row of the html table
for i in range(len(mx_array)):
# this is like above in the single-segment example...
a.append([ int(round(x * 100000, 0)) for x in mx_array[i]])
dicts.append(dict(zip(fn, a[i])))
w.append(WordCloud(background_color=None, mode="RGBA", \
max_font_size=40, min_font_size=10, \
max_words=60, relative_scaling=0.8).fit_words(dicts[i]))
# We write the wordcloud image to a file
w[i].to_file(outputDir + '/wc_' + str(i) + '.png')
# Finally we write the column row
htmlfile.write(
<tr>
<td>
<head>Section {a}: <b>{b}</b></head><br/>
<img src="./wc_{a}.png"/><br/>
<small><i>length: {c} words</i></small>
</td>
</tr>
<tr><td> </td></tr>
.format(a = str(i), b = label[i], c = len(tokenised[i])))
# And then we write the end of the html file.
htmlfile.write(
</table>
</body>
</html>
)
htmlfile.close()
Explanation: In order to have a nicer overview over the many segments than is possible in this notebook, let's create a new html file listing some of the characteristics that we have found so far...
End of explanation |
223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
Copyright 2018 Allen B. Downey
MIT License
Step3: The Geiger counter problem
I got the idea for the following problem from Tom Campbell-Ricketts, author of the Maximum Entropy blog. And he got the idea from E. T. Jaynes, author of the classic Probability Theory
Step4: MCMC
Implement this model using MCMC. As a starting place, you can use this example from the PyMC3 docs.
As a challenge, try writing the model more explicitly, rather than using the GLM module.
Step6: Grid algorithm, version 2
Step12: Hierarchical version, as in the book | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite, Joint
from thinkbayes2 import MakePoissonPmf, EvalBinomialPmf, MakeMixture
import thinkplot
Explanation: Think Bayes
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
class Logistic(Suite, Joint):
def Likelihood(self, data, hypo):
data: k, number of particles detected
hypo: r, emission rate in particles per second
return 1
r = 160
k = 15
f = 0.1
pmf = MakePoissonPmf(r, high=500)
thinkplot.Hist(pmf)
total = 0
for n, p in pmf.Items():
total += p * EvalBinomialPmf(k, n, f)
total
def compute_likelihood(k, r, f):
pmf = MakePoissonPmf(r, high=500)
total = 0
for n, p in pmf.Items():
total += p * EvalBinomialPmf(k, n, f)
return total
compute_likelihood(k, r, f)
likes = pd.Series([])
for kk in range(0, 40):
likes[kk] = compute_likelihood(kk, r, f)
likes.plot()
thinkplot.decorate(xlabel='Counter particles (n)',
ylabel='PMF')
# Solution
class Logistic(Suite, Joint):
f = 0.1
def Likelihood(self, data, hypo):
data: k, number of particles detected
hypo: r, emission rate in particles per second
k = data
r = hypo
return compute_likelihood(k, r, self.f)
rs = np.linspace(0, 300, 51);
suite = Logistic(rs);
suite.Update(15)
thinkplot.Pdf(suite)
thinkplot.decorate(xlabel='Emission rate (particles/second)',
ylabel='PMF',
title='Posterior marginal distribution')
Explanation: The Geiger counter problem
I got the idea for the following problem from Tom Campbell-Ricketts, author of the Maximum Entropy blog. And he got the idea from E. T. Jaynes, author of the classic Probability Theory: The Logic of Science:
Suppose that a radioactive source emits particles toward a Geiger counter at an average rate of r particles per second, but the counter only registers a fraction, f, of the particles that hit it. If f is 10% and the counter registers 15 particles in a one second interval, what is the posterior distribution of n, the actual number of particles that hit the counter, and r, the average rate particles are emitted?
Grid algorithm
End of explanation
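To spell out the likelihood the grid uses: P(k | r) is the sum over n of Poisson(n | r) times Binomial(k | n, f). A small cross-check, assuming scipy is available (this sketch is not part of the original notebook), that should roughly match compute_likelihood above:
from scipy.stats import binom, poisson

def likelihood_sketch(k, r, f, high=500):
    # marginalize over the unknown number of particles n that hit the counter
    return sum(poisson.pmf(n, r) * binom.pmf(k, n, f) for n in range(high + 1))

likelihood_sketch(15, 160, 0.1)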
import pymc3 as pm
# Solution
f = 0.1
model = pm.Model()
with model:
r = pm.Uniform('r', 0, 500)
n = pm.Poisson('n', r)
k = pm.Binomial('k', n, f, observed=15)
trace = pm.sample_prior_predictive(1000)
thinkplot.Cdf(Cdf(trace['r']));
thinkplot.Cdf(Cdf(trace['n']));
thinkplot.Cdf(Cdf(trace['k']));
with model:
trace = pm.sample(1000, tune=3000)
pm.traceplot(trace);
n_sample = trace['n']
thinkplot.Cdf(Cdf(n_sample))
r_sample = trace['r']
thinkplot.Cdf(Cdf(r_sample))
thinkplot.Cdf(suite.MakeCdf())
thinkplot.Cdf(Cdf(r_sample))
Explanation: MCMC
Implement this model using MCMC. As a starting place, you can use this example from the PyMC3 docs.
As a challenge, try writing the model more explicitly, rather than using the GLM module.
End of explanation
# Solution
class Logistic(Suite, Joint):
f = 0.1
def Likelihood(self, data, hypo):
data: k, number of particles detected
hypo: r, n
k = data
r, n = hypo
return EvalBinomialPmf(k, n, self.f)
rs = np.linspace(0, 300, 51);
suite = Logistic()
for r in rs:
pmf = MakePoissonPmf(r, high=500)
for n, p in pmf.Items():
suite[r, n] += p
suite.Normalize()
suite.Update(15)
pmf_r = suite.Marginal(0)
thinkplot.Pdf(pmf_r)
thinkplot.decorate(xlabel='Emission rate (particles/second)',
ylabel='PMF',
title='Posterior marginal distribution')
pmf_n = suite.Marginal(1)
thinkplot.Pdf(pmf_n)
thinkplot.decorate(xlabel='Number of particles (n)',
ylabel='PMF',
title='Posterior marginal distribution')
Explanation: Grid algorithm, version 2
End of explanation
class Detector(Suite):
Represents hypotheses about n.
def __init__(self, r, f, high=500):
Initializes the suite.
r: known emission rate, r
f: fraction of particles registered
high: maximum number of particles, n
pmf = MakePoissonPmf(r, high)
super().__init__(pmf)
self.r = r
self.f = f
def Likelihood(self, data, hypo):
Likelihood of the data given the hypothesis.
data: number of particles counted
hypo: number of particles hitting the counter, n
k = data
n = hypo
return EvalBinomialPmf(k, n, self.f)
r = 160
k = 15
f = 0.1
suite = Detector(r, f);
suite.Update(15)
class Emitter(Suite):
Represents hypotheses about r.
def Likelihood(self, data, hypo):
Likelihood of the data given the hypothesis.
data: number of counted per unit time
hypo: Detector object
return hypo.Update(data)
rs = np.linspace(0, 300, 51);
detectors = [Detector(r, f=0.1) for r in rs[1:]]
suite = Emitter(detectors);
suite.Update(15)
pmf_r = Pmf()
for detector, p in suite.Items():
pmf_r[detector.r] = p
thinkplot.Pdf(pmf_r)
mix = MakeMixture(suite);
thinkplot.Pdf(mix)
Explanation: Hierarchical version, as in the book
End of explanation |
224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Loading and exploring our data set
This is a database of customers of an insurance company. Each data point is one customer. The group represents the number of accidents the customer has been involved with in the past
0 - red
Step2: The embedding network
Network Architecture
Step3: Training
Step4: Evaluation
Step5: Looks quite nice, but how impressed are you by abstracting 4 variables into 32? Let us try a dimension of 2
Step6: relu makes it train
Step7: Not too bad, but all group values come out close to 1 and all are a bit off, as they do not add much to the loss
Step8: Let's try normalizing the data first (caution
Step9: This makes it very hard to estimate how well we did our job, but we can still plot the embedding
Could be interpreted as a fancy version of dimensionality reduction
Step10: Look, ma, clusters
Step11: Clusters have actual meaning | Python Code:
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/2019_tf/autoencoders_tabular.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Autoencoders on Tabular Data
Normalizaing data: https://www.tensorflow.org/api_docs/python/tf/keras/utils/normalize
Keras functional API for more flexible models: https://keras.io/getting-started/functional-api-guide/
Batch Size should not matter too much, but larger batch sizes might require more training epochs: https://ieeexplore.ieee.org/document/8323035
End of explanation
!curl -O https://raw.githubusercontent.com/DJCordhose/deep-learning-crash-course-notebooks/master/data/insurance-customers-1500.csv
import pandas as pd
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
df.describe()
df.head()
import seaborn as sns
sample_df = df.sample(n=100, random_state=42)
sns.pairplot(sample_df,
hue="group", palette={0: '#AA4444', 1: '#006000', 2: '#EEEE44'},
# kind='reg',
size=5,
diag_kind='kde',
vars=['age', 'speed', 'miles'])
Explanation: Loading and exploring our data set
This is a database of customers of an insurance company. Each data point is one customer. The group represents the number of accidents the customer has been involved with in the past
0 - red: many accidents
1 - green: few or no accidents
2 - yellow: in the middle
End of explanation
from tensorflow import keras
from tensorflow.keras.layers import Input, Flatten, GlobalAveragePooling1D, Dense
from tensorflow.keras.models import Sequential, Model
encoding_dim = 32
# https://keras.io/getting-started/functional-api-guide/
input_data = Input(shape=(4,))
encoded = Dense(units=encoding_dim, activation='relu', name="encoder")(input_data)
decoded = Dense(units=4, activation='linear', name="decoder")(encoded)
autoencoder = Model(inputs=input_data, outputs=decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary();
Explanation: The embedding network
Network Architecture
End of explanation
X = df
BATCH_SIZE = 1 # larger batch size might force more epochs
%time history = autoencoder.fit(X, X, epochs=10, batch_size=BATCH_SIZE, verbose=1)
Explanation: Training
End of explanation
import matplotlib.pyplot as plt
plt.yscale('log')
plt.plot(history.history['loss'])
loss = autoencoder.evaluate(X, X, batch_size=BATCH_SIZE)
loss
samples = df.sample(10).reset_index(drop=True)
samples
predictions = pd.DataFrame(autoencoder.predict(samples), columns=["speed", "age", "miles", "group"])
predictions
samples.subtract(predictions)
Explanation: Evaluation
End of explanation
from tensorflow.keras.initializers import glorot_normal
encoding_dim = 2
seed = 13 # make training results more deterministic
input_data = Input(shape=(4,))
# notice tanh
encoded = Dense(units=encoding_dim, activation='tanh', name="encoder", kernel_initializer=glorot_normal(seed=seed))(input_data)
decoded = Dense(units=4, activation='linear', name="decoder", kernel_initializer=glorot_normal(seed=seed))(encoded)
autoencoder = Model(inputs=input_data, outputs=decoded)
# this does not get much better than 210 as a loss (bad), so we can just as well get there a bit faster (10 instead of 50 epochs with standard lr)
adam = keras.optimizers.Adam(lr=0.01)
# adam = keras.optimizers.Adam()
autoencoder.compile(optimizer=adam, loss='mse')
X = df
BATCH_SIZE = 1
%time history = autoencoder.fit(X, X, epochs=10, batch_size=BATCH_SIZE, verbose=1)
plt.yscale('log')
plt.plot(history.history['loss'])
autoencoder.evaluate(X, X, batch_size=BATCH_SIZE)
# just average over all of them :D
predictions = pd.DataFrame(autoencoder.predict(samples), columns=["speed", "age", "miles", "group"])
predictions
samples.describe()
Explanation: Looks quite nice, but how impressed are you by abstracting 4 variables into 32? Let us try an encoding dimension of 2 instead.
End of explanation
from tensorflow.keras.initializers import glorot_normal
encoding_dim = 2
seed = 13 # whether it trains still depends on initialization
input_data = Input(shape=(4,))
# notice relu
encoded = Dense(units=encoding_dim, activation='relu', name="encoder", kernel_initializer=glorot_normal(seed=seed))(input_data)
decoded = Dense(units=4, activation='linear', name="decoder", kernel_initializer=glorot_normal(seed=seed))(encoded)
autoencoder = Model(inputs=input_data, outputs=decoded)
# adam = keras.optimizers.Adam(lr=0.01)
adam = keras.optimizers.Adam()
autoencoder.compile(optimizer=adam, loss='mse')
X = df
BATCH_SIZE = 1
%time history = autoencoder.fit(X, X, epochs=10, batch_size=BATCH_SIZE, verbose=1)
plt.yscale('log')
plt.plot(history.history['loss'])
samples
Explanation: relu makes it train
End of explanation
predictions = pd.DataFrame(autoencoder.predict(samples), columns=["speed", "age", "miles", "group"])
predictions
samples.subtract(predictions)
Explanation: Not too bad, but all predicted groups are close to 1 and all of them are off; the group column is small compared to the other features, so getting it wrong adds little to the loss.
End of explanation
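The claim above is easy to check numerically. The following is a small illustrative sketch (not part of the original notebook) that assumes the samples and predictions DataFrames from the cells above; it shows the mean squared reconstruction error per column, which makes visible how little the group column contributes to the total loss.
per_feature_mse = ((samples - predictions) ** 2).mean()
per_feature_mse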
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
normalized_df = (df - df.mean()) / df.std()
normalized_df.head()
X = normalized_df
encoding_dim = 2
seed = 13 # make results deterministic
input_data = Input(shape=(4,))
encoded = Dense(units=encoding_dim, activation='relu', name="encoder", kernel_initializer=glorot_normal(seed=seed))(input_data)
decoded = Dense(units=4, activation='linear', name="decoder", kernel_initializer=glorot_normal(seed=seed))(encoded)
autoencoder = Model(inputs=input_data, outputs=decoded)
# adam = keras.optimizers.Adam(lr=0.01)
adam = keras.optimizers.Adam()
autoencoder.compile(optimizer=adam, loss='mse')
BATCH_SIZE = 1
%time history = autoencoder.fit(X, X, epochs=10, batch_size=BATCH_SIZE, verbose=1)
plt.yscale('log')
plt.plot(history.history['loss'])
Explanation: Let's try normalizing the data first (caution: losses do not compare any more)
End of explanation
encoder = Model(inputs=input_data, outputs=encoded)
latent_representation = encoder.predict(X)
latent_representation.shape
Explanation: This makes it very hard to estimate how well we did our job, but we can still plot the embedding
Could be interpreted as a fancy version of dimensionality reduction
End of explanation
latent_x = latent_representation[:, 0]
latent_y = latent_representation[:, 1]
plt.scatter(latent_x, latent_y, alpha=0.5)
Explanation: Look, ma, clusters
End of explanation
from matplotlib.colors import ListedColormap
# * 0 - red: many accidents
# * 1 - green: few or no accidents
# * 2 - yellow: in the middle
colors = X['group']
color_map = ListedColormap(['#AA4444', '#006000', '#EEEE44'])
plt.scatter(latent_x, latent_y, alpha=0.5, s=100, marker='o', edgecolors='w', cmap=color_map, c=colors)
Explanation: Clusters have actual meaning
End of explanation |
225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reinforcement Learning
This Jupyter notebook acts as supporting material for Chapter 21 Reinforcement Learning of the book Artificial Intelligence
Step1: CONTENTS
Overview
Passive Reinforcement Learning
Direct Utility Estimation
Adaptive Dynamic Programming
Temporal-Difference Agent
Active Reinforcement Learning
Q learning
OVERVIEW
Before we start playing with the actual implementations let us review a couple of things about RL.
Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
-- Source
Step2: The sequential_decision_environment is a GridMDP object as shown below. The rewards are +1 and -1 in the terminal states, and -0.04 in the rest. <img src="files/images/mdp.png"> Now we define actions and a policy similar to Fig 21.1 in the book.
Step3: Direct Utility Estimation Agent
The PassiveDUEAgent class in the rl module implements the Agent Program described in Fig 21.2 of the AIMA Book. PassiveDUEAgent sums over rewards to find the estimated utility for each state. It thus requires the running of a number of iterations.
Step4: The calculated utilities are
Step5: Adaptive Dynamic Programming Agent
The PassiveADPAgent class in the rl module implements the Agent Program described in Fig 21.2 of the AIMA Book. PassiveADPAgent uses state transition and occurrence counts to estimate $P$, and then $U$. Go through the source below to understand the agent.
Step6: We instantiate a PassiveADPAgent below with the GridMDP shown and train it over 200 iterations. The rl module has a simple implementation to simulate iterations. The function is called run_single_trial.
Step7: The calculated utilities are
Step8: Passive Temporal Difference Agent
PassiveTDAgent uses temporal differences to learn utility estimates. We learn the difference between the states and backup the values to previous states. Let us look into the source before we see some usage examples.
Step9: In creating the TDAgent, we use the same learning rate $\alpha$ as given in the footnote of the book on page 837.
Step10: Now we run 200 trials for the agent to estimate Utilities.
Step11: The calculated utilities are
Step12: Comparison with value iteration method
We can also compare the utility estimates learned by our agent to those obtained via value iteration.
Note that value iteration has a priori knowledge of the transition table $P$, the rewards $R$, and all the states $s$.
Step13: The values calculated by value iteration
Step14: Evolution of utility estimates over iterations
We can explore how these estimates vary with time by using plots similar to Fig 21.5a. We will first enable matplotlib using the inline backend. We also define a function to collect the values of utilities at each iteration.
Step15: Here is a plot of state $(2,2)$.
Step16: It is also possible to plot multiple states on the same plot. As expected, the utility of the finite state $(3,2)$ stays constant and is equal to $R((3,2)) = 1$.
Step17: ACTIVE REINFORCEMENT LEARNING
Unlike Passive Reinforcement Learning, in Active Reinforcement Learning we are not bound by a policy $\pi$ and we need to select our actions. In other words, the agent needs to learn an optimal policy. The fundamental tradeoff the agent faces is that of exploration vs. exploitation.
QLearning Agent
The QLearningAgent class in the rl module implements the Agent Program described in Fig 21.8 of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not require a transition model and hence is a model-free method. Let us look into the source before we see some usage examples.
Step18: The Agent Program can be obtained by creating an instance of the class and passing the appropriate parameters. Because of the __call__ method the object that is created behaves like a callable and returns an appropriate action, as most Agent Programs do. To instantiate the object we need an MDP, similar to the PassiveTDAgent.
Let us use the same GridMDP object we used above. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting, as gamma = 0.9. The class also implements an exploration function f which returns a fixed Rplus until the agent has visited the (state, action) pair Ne number of times. This is the same as the one defined on page 842 of the book. The method actions_in_state returns the actions possible in a given state. It is useful when applying max and argmax operations.
Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837. We use Rplus = 2 and Ne = 5 as defined on page 843. Fig 21.7
Step19: Now to try out the q_agent we make use of the run_single_trial function in rl.py (which was also used above). Let us use 200 iterations.
Step20: Now let us see the Q Values. The keys are state-action pairs. Where different actions correspond according to
Step21: The Utility U of each state is related to Q by the following equation.
$U(s) = \max_{a} Q(s, a)$
Let us convert the Q Values above into U estimates.
Step22: Let us finally compare these estimates to value_iteration results. | Python Code:
from rl import *
Explanation: Reinforcement Learning
This Jupyter notebook acts as supporting material for Chapter 21 Reinforcement Learning of the book Artificial Intelligence: A Modern Approach. This notebook makes use of the implementations in the rl.py module. We also make use of the implementation of MDPs in the mdp.py module to test our agents. It might be helpful if you have already gone through the Jupyter notebook dealing with Markov decision processes. Let us import everything from the rl module. It might be helpful to view the source of some of our implementations. Please refer to the Introductory Jupyter notebook for more details.
End of explanation
from mdp import sequential_decision_environment
Explanation: CONTENTS
Overview
Passive Reinforcement Learning
Direct Utility Estimation
Adaptive Dynamic Programming
Temporal-Difference Agent
Active Reinforcement Learning
Q learning
OVERVIEW
Before we start playing with the actual implementations let us review a couple of things about RL.
Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
-- Source: Wikipedia
In summary we have a sequence of state action transitions with rewards associated with some states. Our goal is to find the optimal policy $\pi$ which tells us what action to take in each state.
PASSIVE REINFORCEMENT LEARNING
In passive Reinforcement Learning the agent follows a fixed policy $\pi$. Passive learning attempts to evaluate the given policy $\pi$ - without any knowledge of the Reward function $R(s)$ and the Transition model $P(s'\ |\ s, a)$.
This is usually done by some method of utility estimation. The agent attempts to directly learn the utility of each state that would result from following the policy. Note that at each step, it has to perceive the reward and the state - it has no global knowledge of these. Thus, if the entire set of actions offers only a very low probability of attaining some state $s_+$, the agent may never perceive the reward $R(s_+)$.
Consider a situation where an agent is given a policy to follow. Thus, at any point it knows only its current state and current reward, and the action it must take next. This action may lead it to more than one state, with different probabilities.
For a series of actions given by $\pi$, the estimated utility $U$:
$$U^{\pi}(s) = E\left[\sum_{t=0}^{\infty} \gamma^t R(s_t)\right]$$
Or the expected value of summed discounted rewards until termination.
Based on this concept, we discuss three methods of estimating utility:
Direct Utility Estimation (DUE)
The first, most naive method of estimating utility comes from the simplest interpretation of the above definition. We construct an agent that follows the policy until it reaches the terminal state. At each step, it logs its current state, reward. Once it reaches the terminal state, it can estimate the utility for each state for that iteration, by simply summing the discounted rewards from that state to the terminal one.
It can now run this 'simulation' $n$ times, and calculate the average utility of each state. If a state occurs more than once in a simulation, both its utility values are counted separately.
Note that this method may be prohibitively slow for very large statespaces. Besides, it pays no attention to the transition probability $P(s'\ |\ s, a)$. It misses out on information that it is capable of collecting (say, by recording the number of times an action from one state led to another state). The next method addresses this issue.
Adaptive Dynamic Programming (ADP)
This method makes use of knowledge of the past state $s$, the action $a$, and the new perceived state $s'$ to estimate the transition probability $P(s'\ |\ s,a)$. It does this by the simple counting of new states resulting from previous states and actions.<br>
The program runs through the policy a number of times, keeping track of:
- each occurrence of state $s$ and the policy-recommended action $a$ in $N_{sa}$
- each occurrence of $s'$ resulting from $a$ on $s$ in $N_{s'|sa}$.
It can thus estimate $P(s'\ |\ s,a)$ as $N_{s'|sa}/N_{sa}$, which in the limit of infinite trials, will converge to the true value.<br>
Using the transition probabilities thus estimated, it can apply POLICY-EVALUATION to estimate the utilities $U(s)$ using properties of convergence of the Bellman functions.
Temporal-difference learning (TD)
Instead of explicitly building the transition model $P$, the temporal-difference model makes use of the expected closeness between the utilities of two consecutive states $s$ and $s'$.
For the transition $s$ to $s'$, the update is written as:
$$U^{\pi}(s) \leftarrow U^{\pi}(s) + \alpha \left( R(s) + \gamma U^{\pi}(s') - U^{\pi}(s) \right)$$
This model implicitly incorporates the transition probabilities by being weighed for each state by the number of times it is achieved from the current state. Thus, over a number of iterations, it converges similarly to the Bellman equations.
The advantage of the TD learning model is its relatively simple computation at each step, rather than having to keep track of various counts.
For $n_s$ states and $n_a$ actions the ADP model would have $n_s \times n_a$ numbers $N_{sa}$ and $n_s^2 \times n_a$ numbers $N_{s'|sa}$ to keep track of. The TD model must only keep track of a utility $U(s)$ for each state.
Demonstrating Passive agents
Passive agents are implemented in rl.py as various Agent-Classes.
To demonstrate these agents, we make use of the GridMDP object from the MDP module. sequential_decision_environment is similar to that used for the MDP notebook but has discounting with $\gamma = 0.9$.
The Agent-Program can be obtained by creating an instance of the relevant Agent-Class. The __call__ method allows the Agent-Class to be called as a function. The class needs to be instantiated with a policy ($\pi$) and an MDP whose utility of states will be estimated.
End of explanation
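To make the TD update formula above concrete, here is a minimal standalone sketch of a single TD(0) utility update for one observed transition. It is illustrative only (it is not the PassiveTDAgent implementation from rl.py), and the fixed learning rate is an arbitrary choice.
def td_utility_update(U, s, reward, s_next, alpha=0.1, gamma=0.9):
    # U is a dict mapping states to current utility estimates
    U.setdefault(s, 0.0)
    U.setdefault(s_next, 0.0)
    # U(s) <- U(s) + alpha * (R(s) + gamma * U(s') - U(s))
    U[s] = U[s] + alpha * (reward + gamma * U[s_next] - U[s])
    return U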
# Action Directions
north = (0, 1)
south = (0,-1)
west = (-1, 0)
east = (1, 0)
policy = {
(0, 2): east, (1, 2): east, (2, 2): east, (3, 2): None,
(0, 1): north, (2, 1): north, (3, 1): None,
(0, 0): north, (1, 0): west, (2, 0): west, (3, 0): west,
}
Explanation: The sequential_decision_environment is a GridMDP object as shown below. The rewards are +1 and -1 in the terminal states, and -0.04 in the rest. <img src="files/images/mdp.png"> Now we define actions and a policy similar to Fig 21.1 in the book.
End of explanation
%psource PassiveDUEAgent
DUEagent = PassiveDUEAgent(policy, sequential_decision_environment)
for i in range(200):
run_single_trial(DUEagent, sequential_decision_environment)
DUEagent.estimate_U()
Explanation: Direct Utility Estimation Agent
The PassiveDUEAgent class in the rl module implements the Agent Program described in Fig 21.2 of the AIMA Book. PassiveDUEAgent sums over rewards to find the estimated utility for each state. It thus requires the running of a number of iterations.
End of explanation
print('\n'.join([str(k)+':'+str(v) for k, v in DUEagent.U.items()]))
Explanation: The calculated utilities are:
End of explanation
%psource PassiveADPAgent
Explanation: Adaptive Dynamic Programming Agent
The PassiveADPAgent class in the rl module implements the Agent Program described in Fig 21.2 of the AIMA Book. PassiveADPAgent uses state transition and occurrence counts to estimate $P$, and then $U$. Go through the source below to understand the agent.
End of explanation
ADPagent = PassiveADPAgent(policy, sequential_decision_environment)
for i in range(200):
run_single_trial(ADPagent, sequential_decision_environment)
Explanation: We instantiate a PassiveADPAgent below with the GridMDP shown and train it over 200 iterations. The rl module has a simple implementation to simulate iterations. The function is called run_single_trial.
End of explanation
print('\n'.join([str(k)+':'+str(v) for k, v in ADPagent.U.items()]))
Explanation: The calculated utilities are:
End of explanation
%psource PassiveTDAgent
Explanation: Passive Temporal Difference Agent
PassiveTDAgent uses temporal differences to learn utility estimates. We learn the difference between the states and backup the values to previous states. Let us look into the source before we see some usage examples.
End of explanation
TDagent = PassiveTDAgent(policy, sequential_decision_environment, alpha = lambda n: 60./(59+n))
Explanation: In creating the TDAgent, we use the same learning rate $\alpha$ as given in the footnote of the book on page 837.
End of explanation
for i in range(200):
run_single_trial(TDagent,sequential_decision_environment)
Explanation: Now we run 200 trials for the agent to estimate Utilities.
End of explanation
print('\n'.join([str(k)+':'+str(v) for k, v in TDagent.U.items()]))
Explanation: The calculated utilities are:
End of explanation
from mdp import value_iteration
Explanation: Comparison with value iteration method
We can also compare the utility estimates learned by our agent to those obtained via value iteration.
Note that value iteration has a priori knowledge of the transition table $P$, the rewards $R$, and all the states $s$.
End of explanation
U_values = value_iteration(sequential_decision_environment)
print('\n'.join([str(k)+':'+str(v) for k, v in U_values.items()]))
Explanation: The values calculated by value iteration:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def graph_utility_estimates(agent_program, mdp, no_of_iterations, states_to_graph):
graphs = {state:[] for state in states_to_graph}
for iteration in range(1,no_of_iterations+1):
run_single_trial(agent_program, mdp)
for state in states_to_graph:
graphs[state].append((iteration, agent_program.U[state]))
for state, value in graphs.items():
state_x, state_y = zip(*value)
plt.plot(state_x, state_y, label=str(state))
plt.ylim([0,1.2])
plt.legend(loc='lower right')
plt.xlabel('Iterations')
plt.ylabel('U')
Explanation: Evolution of utility estimates over iterations
We can explore how these estimates vary with time by using plots similar to Fig 21.5a. We will first enable matplotlib using the inline backend. We also define a function to collect the values of utilities at each iteration.
End of explanation
agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2)])
Explanation: Here is a plot of state $(2,2)$.
End of explanation
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2), (3,2)])
Explanation: It is also possible to plot multiple states on the same plot. As expected, the utility of the finite state $(3,2)$ stays constant and is equal to $R((3,2)) = 1$.
End of explanation
%psource QLearningAgent
Explanation: ACTIVE REINFORCEMENT LEARNING
Unlike Passive Reinforcement Learning, in Active Reinforcement Learning we are not bound by a policy $\pi$ and we need to select our actions. In other words, the agent needs to learn an optimal policy. The fundamental tradeoff the agent faces is that of exploration vs. exploitation.
QLearning Agent
The QLearningAgent class in the rl module implements the Agent Program described in Fig 21.8 of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not require a transition model and hence is a model-free method. Let us look into the source before we see some usage examples.
End of explanation
q_agent = QLearningAgent(sequential_decision_environment, Ne=5, Rplus=2,
alpha=lambda n: 60./(59+n))
Explanation: The Agent Program can be obtained by creating an instance of the class and passing the appropriate parameters. Because of the __call__ method the object that is created behaves like a callable and returns an appropriate action, as most Agent Programs do. To instantiate the object we need an MDP, similar to the PassiveTDAgent.
Let us use the same GridMDP object we used above. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting, as gamma = 0.9. The class also implements an exploration function f which returns a fixed Rplus until the agent has visited the (state, action) pair Ne number of times. This is the same as the one defined on page 842 of the book. The method actions_in_state returns the actions possible in a given state. It is useful when applying max and argmax operations.
Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837. We use Rplus = 2 and Ne = 5 as defined on page 843. Fig 21.7
End of explanation
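The exploration function described above can also be written on its own as a short sketch. This is only an illustration of the idea (the real one lives inside QLearningAgent in rl.py): unexplored (state, action) pairs are treated optimistically as being worth Rplus until they have been tried Ne times.
def exploration_function(utility_estimate, visit_count, Rplus=2, Ne=5):
    # optimistic under uncertainty: pretend rarely tried actions are highly rewarding
    if visit_count < Ne:
        return Rplus
    return utility_estimate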
for i in range(200):
run_single_trial(q_agent,sequential_decision_environment)
Explanation: Now to try out the q_agent we make use of the run_single_trial function in rl.py (which was also used above). Let us use 200 iterations.
End of explanation
q_agent.Q
Explanation: Now let us see the Q Values. The keys are state-action pairs. Where different actions correspond according to:
north = (0, 1)
south = (0,-1)
west = (-1, 0)
east = (1, 0)
End of explanation
from collections import defaultdict  # explicit import; relying on `from rl import *` to provide it is fragile
U = defaultdict(lambda: -1000.) # Very Large Negative Value for Comparison see below.
for state_action, value in q_agent.Q.items():
state, action = state_action
if U[state] < value:
U[state] = value
U
Explanation: The Utility U of each state is related to Q by the following equation.
$U(s) = \max_{a} Q(s, a)$
Let us convert the Q Values above into U estimates.
End of explanation
print(value_iteration(sequential_decision_environment))
Explanation: Let us finally compare these estimates to value_iteration results.
End of explanation |
226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chinmai Raman
Homework 3
A.4 Solving a system of difference equations
Computes the development of a loan over time.
The function below calculates the amount paid per month (the first array) and the amount left to be paid (the second array) at each month of the year, for a principal of $10,000 to be paid over 1 year at an annual interest rate of 6%
Step1: A.11 Testing different methods of root finding
$f(x) = Sin(x)$
Step2: $f(x) = x - sin(x)$
Step3: $f(x) = x^5 - sin x$
Step4: $f(x) = x^4sinx$
Step5: $f(x) = x^4 - 16$
Step6: $f(x) = x^{10} - 1$
Step7: $tanh(x) - x^{10}$
Step8: A.13 Computing the arc length of a curve
Step9: The arclength of the function f(x) from -2 to 2 is 4.18
Step10: A.14 Finding difference equations for computing sin(x)
The accuracy of a Taylor polynomial improves as x decreases (moves closer to zero).
Step11: The accuracy of a Taylor polynomial also improves as n increases. | Python Code:
p1.loan(6, 10000, 12)
Explanation: Chinmai Raman
Homework 3
A.4 Solving a system of difference equations
Computes the development of a loan over time.
The function below calculates the amount paid per month (the first array) and the amount left to be paid (the second array) at each month of the year, for a principal of $10,000 to be paid over 1 year at an annual interest rate of 6%
End of explanation
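The p1 module itself is not shown in this homework, so as a rough sketch (my own, and possibly different from the actual p1.loan) the monthly payments x[n] and the remaining debt P[n] can be computed with two coupled difference equations, assuming a fixed repayment of the principal plus interest on the remaining debt:
import numpy as np

def loan(p, P0, N):
    # p: yearly interest rate in percent, P0: principal, N: number of months
    x = np.zeros(N + 1)   # amount paid in month n
    P = np.zeros(N + 1)   # remaining debt after month n
    P[0] = P0
    r = p / (12.0 * 100)  # monthly interest factor
    for n in range(1, N + 1):
        x[n] = r * P[n - 1] + P0 / float(N)      # interest plus a fixed installment
        P[n] = P[n - 1] + r * P[n - 1] - x[n]    # difference equation for the debt
    return x, P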
p2.graph(p2.f1, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f1, p2.f1prime, -4)
p2.bisect(p2.f1, -4, -2)
p2.secant(p2.f1, -4.5, -3.5)
Explanation: A.11 Testing different methods of root finding
$f(x) = Sin(x)$
End of explanation
p2.graph(p2.f2, 100, -np.pi, np.pi)
p2.Newton(p2.f2, p2.f2prime, 1)
p2.bisect(p2.f2, -1, 1)
p2.secant(p2.f2, -2, -1)
Explanation: $f(x) = x - sin(x)$
End of explanation
p2.graph(p2.f3, 100, -np.pi / 2, np.pi / 2)
p2.Newton(p2.f3, p2.f3prime, -1)
p2.bisect(p2.f3, -1, 1)
p2.secant(p2.f3, -1, -0.5)
Explanation: $f(x) = x^5 - sin x$
End of explanation
p2.graph(p2.f4, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f4, p2.f4prime, -4)
p2.bisect(p2.f4, -4, -2)
p2.secant(p2.f4, -5, -4)
Explanation: $f(x) = x^4sinx$
End of explanation
p2.graph(p2.f5, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f5, p2.f5prime, -3)
p2.bisect(p2.f5, -3, -1)
p2.secant(p2.f5, -4, -3)
Explanation: $f(x) = x^4 - 16$
End of explanation
p2.graph(p2.f6, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f6, p2.f6prime, 2)
p2.bisect(p2.f6, 0, 2)
p2.secant(p2.f6, 3, 2)
Explanation: $f(x) = x^{10} - 1$
End of explanation
p2.graph(p2.f7, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f7, p2.f7prime, 1)
p2.bisect(p2.f7, 0.5, 2)
p2.secant(p2.f7, 3, 2)
Explanation: $tanh(x) - x^{10}$
End of explanation
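The root-finding routines used above live in the p2 module, which is not shown here. For reference, a minimal sketch of what a Newton iteration of this kind typically looks like (the actual p2.Newton may differ in interface and stopping criteria):
def newton(f, fprime, x0, tol=1e-10, max_iter=100):
    # basic Newton-Raphson iteration
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)
    return x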
x = sp.Symbol('x')  # define the symbolic variable before using it
h1 = -4 * x**2
h2 = sp.exp(h1)
h3 = 1 / np.sqrt(2 * np.pi) * h2
length = p3.arclength(h3, -2, 2, 10)
print length
Explanation: A.13 Computing the arc length of a curve
End of explanation
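The arc length of a curve y = f(x) over [a, b] is the integral of sqrt(1 + f'(x)^2). The p3.arclength used above is not shown, so here is an independent numerical sketch (not the homework implementation) that approximates the same quantity:
import numpy as np

def arclength_numeric(f, a, b, n=1000):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    dydx = np.gradient(y, x[1] - x[0])          # finite-difference derivative
    return np.trapz(np.sqrt(1.0 + dydx**2), x)  # trapezoidal rule

# e.g. arclength_numeric(lambda t: 1/np.sqrt(2*np.pi) * np.exp(-4*t**2), -2, 2)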
fig = plt.figure(1)
x = np.linspace(-2, 2, 100)
y = 1 / np.sqrt(2 * np.pi) * np.exp(-4 * x**2)
x1 = length[0]
y1 = length[1]
plt.plot(x, y, 'r-', x1, y1, 'b-')
plt.xlabel('x')
plt.ylabel('y')
plt.title('1/sqrt(2pi) * e^(-4t^2)')
plt.show(fig)
Explanation: The arclength of the function f(x) from -2 to 2 is 4.18
End of explanation
x = [-3 * np.pi / 4.0, -np.pi / 4.0, np.pi / 4.0, 3 * np.pi / 4]
N = [5, 5, 5, 5]
n = 0
Sn = []
while n < 4:
Sn.append(p4.sin_Taylor(x[n], N[n])[0])
n += 1
print Sn
Explanation: A.14 Finding difference equations for computing sin(x)
The accuracy of a Taylor polynomial improves as x decreases (moves closer to zero).
End of explanation
x = [np.pi / 4, np.pi / 4, np.pi / 4, np.pi / 4]
N = [1, 3, 5, 10]
n = 0
Sn = []
while n < 4:
Sn.append(p4.sin_Taylor(x[n], N[n])[0])
n += 1
print Sn
Explanation: The accuracy of a Taylor polynomial also improves as n increases.
End of explanation |
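The p4.sin_Taylor function called above is not shown. A minimal sketch of the difference-equation form of the Taylor series for sin(x) could look like the following (my own illustration; the actual homework version may return different values in the tuple):
def sin_taylor(x, N):
    term = x      # a_0 = x
    s = term
    for j in range(N):
        # a_{j+1} = -a_j * x^2 / ((2j + 2) * (2j + 3))
        term = -term * x**2 / ((2 * j + 2) * (2 * j + 3))
        s += term
    return s, abs(term)   # the partial sum and the size of the last term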
227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problems
Define a function that computes the inverse kinematics of a double pendulum.
Step1: Find the joint-space positions $q_1$ and $q_2$ required for the end point of the double pendulum to reach the coordinates $p_1 = (0,1)$, $p_2 = (1,3)$ and $p_3 = (3,2)$.
Step2: Generate the trajectories needed for the double pendulum to move from point $p_1$ to point $p_2$ in $2s$, from point $p_2$ to point $p_3$ in $2s$, and from point $p_3$ to point $p_1$ in $2s$.
Use 100 points per second and make sure to store the generated trajectories in the right variables so that q1s and q2s hold the complete trajectories.
Step3: Create an animation with the generated trajectories and the functions provided below (some functions are marked with comments where code still needs to be added). | Python Code:
def ci_pendulo_doble(x, y):
    # keep in mind that the link lengths are 2 and 2
l1, l2 = 2, 2
from numpy import arccos, arctan2, sqrt
# YOUR CODE HERE
raise NotImplementedError()
return q1, q2
from numpy.testing import assert_allclose
assert_allclose(ci_pendulo_doble(4, 0), (0,0))
assert_allclose(ci_pendulo_doble(0, 4), (1.57079632,0))
Explanation: Problems
Define a function that computes the inverse kinematics of a double pendulum.
End of explanation
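The cell above leaves the body of ci_pendulo_doble as an exercise. For reference only, a minimal sketch of one standard geometric solution for a planar two-link arm with l1 = l2 = 2 (not necessarily the intended solution) is:
from numpy import arccos, arctan2, sin

def ik_two_link(x, y, l1=2.0, l2=2.0):
    # law of cosines for the elbow angle, positive-q2 branch
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    q2 = arccos(c2)
    # shoulder angle: direction to the target minus the offset of the second link
    q1 = arctan2(y, x) - arctan2(l2 * sin(q2), l1 + l2 * c2)
    return q1, q2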
# YOUR CODE HERE
raise NotImplementedError()
from numpy.testing import assert_allclose
assert_allclose((q11, q21),(0.25268 , 2.636232), rtol=1e-05, atol=1e-05)
from numpy.testing import assert_allclose
assert_allclose((q12, q22),(0.589988, 1.318116), rtol=1e-05, atol=1e-05)
from numpy.testing import assert_allclose
assert_allclose((q13, q23),(0.14017 , 0.895665), rtol=1e-05, atol=1e-05)
Explanation: Find the joint-space positions $q_1$ and $q_2$ required for the end point of the double pendulum to reach the coordinates $p_1 = (0,1)$, $p_2 = (1,3)$ and $p_3 = (3,2)$.
End of explanation
from generacion_trayectorias import grafica_trayectoria
# YOUR CODE HERE
raise NotImplementedError()
q1s = q1s1 + q1s2 + q1s3
q2s = q2s1 + q2s2 + q2s3
from numpy.testing import assert_allclose
assert_allclose((q1s[0], q1s[-1]),(0.25268, 0.25268), rtol=1e-05, atol=1e-05)
from numpy.testing import assert_allclose
assert_allclose((q2s[0], q2s[-1]),(2.636232, 2.636232), rtol=1e-05, atol=1e-05)
Explanation: Generate the trajectories needed for the double pendulum to move from point $p_1$ to point $p_2$ in $2s$, from point $p_2$ to point $p_3$ in $2s$, and from point $p_3$ to point $p_1$ in $2s$.
Use 100 points per second and make sure to store the generated trajectories in the right variables so that q1s and q2s hold the complete trajectories.
End of explanation
from matplotlib.pyplot import figure, style
from matplotlib import animation, rc
rc('animation', html='html5')
from numpy import sin, cos, arange
fig = figure(figsize=(8, 8))
axi = fig.add_subplot(111, autoscale_on=False, xlim=(-0.6, 3.1), ylim=(-0.6, 3.1))
linea, = axi.plot([], [], "-o", lw=2, color='gray')
def cd_pendulo_doble(q1, q2):
l1, l2 = 2, 2
# YOUR CODE HERE
raise NotImplementedError()
return xs, ys
def inicializacion():
    '''This function runs only once and is used to initialize the system'''
linea.set_data([], [])
return linea
def animacion(i):
    '''This function runs for every frame of the GIF'''
# YOUR CODE HERE
raise NotImplementedError()
linea.set_data(xs, ys)
return linea
ani = animation.FuncAnimation(fig, animacion, arange(1, len(q1s)), interval=10, init_func=inicializacion)
ani
from numpy.testing import assert_allclose
assert_allclose(cd_pendulo_doble(0, 0), ([0,2,4], [0,0,0]), rtol=1e-05, atol=1e-05)
assert_allclose(cd_pendulo_doble(1.57079632,0), ([0, 0, 0],[0, 2, 4]), rtol=1e-05, atol=1e-05)
Explanation: Create an animation with the generated trajectories and the functions provided below (some functions are marked with comments where code still needs to be added).
End of explanation |
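The forward kinematics cd_pendulo_doble is also left as an exercise above. A sketch of the usual solution, returning the x and y coordinates of the base, the elbow and the end point (again only an illustration, assuming l1 = l2 = 2):
from numpy import sin, cos

def fk_two_link(q1, q2, l1=2.0, l2=2.0):
    xs = [0, l1 * cos(q1), l1 * cos(q1) + l2 * cos(q1 + q2)]
    ys = [0, l1 * sin(q1), l1 * sin(q1) + l2 * sin(q1 + q2)]
    return xs, ys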
228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute spatial resolution metrics in source space
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference of
distributions. This example mimics some results from
Step1: MNE
Compute resolution matrices, peak localisation error (PLE) for point spread
functions (PSFs), spatial deviation (SD) for PSFs
Step2: dSPM
Do the same for dSPM
Step3: Visualize results
Visualise peak localisation error (PLE) across the whole cortex for MNE PSF
Step4: And dSPM
Step5: Subtract the two distributions and plot this difference
Step6: These plots show that dSPM has generally lower peak localization error (red
color) than MNE in deeper brain areas, but higher error (blue color) in more
superficial areas.
Next we'll visualise spatial deviation (SD) across the whole cortex for MNE
PSF
Step7: And dSPM
Step8: Subtract the two distributions and plot this difference | Python Code:
# Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_resolution_matrix
from mne.minimum_norm import resolution_metrics
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fname_fwd = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = meg_path / 'sample_audvis-cov.fif'
fname_evo = meg_path / 'sample_audvis-ave.fif'
# read forward solution
forward = mne.read_forward_solution(fname_fwd)
# forward operator with fixed source orientations
mne.convert_forward_solution(forward, surf_ori=True,
force_fixed=True, copy=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operator from forward solution
# free source orientation
inverse_operator = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
Explanation: Compute spatial resolution metrics in source space
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference of
distributions. This example mimics some results from :footcite:HaukEtAl2019,
namely Figure 3 (peak localisation error for PSFs, L2-MNE vs dSPM) and Figure 4
(spatial deviation for PSFs, L2-MNE vs dSPM).
End of explanation
rm_mne = make_inverse_resolution_matrix(forward, inverse_operator,
method='MNE', lambda2=lambda2)
ple_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'],
function='psf', metric='peak_err')
sd_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'],
function='psf', metric='sd_ext')
del rm_mne
Explanation: MNE
Compute resolution matrices, peak localisation error (PLE) for point spread
functions (PSFs), spatial deviation (SD) for PSFs:
End of explanation
rm_dspm = make_inverse_resolution_matrix(forward, inverse_operator,
method='dSPM', lambda2=lambda2)
ple_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'],
function='psf', metric='peak_err')
sd_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'],
function='psf', metric='sd_ext')
del rm_dspm, forward
Explanation: dSPM
Do the same for dSPM:
End of explanation
brain_ple_mne = ple_mne_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=1,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_mne.add_text(0.1, 0.9, 'PLE MNE', 'title', font_size=16)
Explanation: Visualize results
Visualise peak localisation error (PLE) across the whole cortex for MNE PSF:
End of explanation
brain_ple_dspm = ple_dspm_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=2,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_dspm.add_text(0.1, 0.9, 'PLE dSPM', 'title', font_size=16)
Explanation: And dSPM:
End of explanation
diff_ple = ple_mne_psf - ple_dspm_psf
brain_ple_diff = diff_ple.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=3,
clim=dict(kind='value', pos_lims=(0., 1., 2.)))
brain_ple_diff.add_text(0.1, 0.9, 'PLE MNE-dSPM', 'title', font_size=16)
Explanation: Subtract the two distributions and plot this difference
End of explanation
brain_sd_mne = sd_mne_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=4,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_mne.add_text(0.1, 0.9, 'SD MNE', 'title', font_size=16)
Explanation: These plots show that dSPM has generally lower peak localization error (red
color) than MNE in deeper brain areas, but higher error (blue color) in more
superficial areas.
Next we'll visualise spatial deviation (SD) across the whole cortex for MNE
PSF:
End of explanation
brain_sd_dspm = sd_dspm_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=5,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_dspm.add_text(0.1, 0.9, 'SD dSPM', 'title', font_size=16)
Explanation: And dSPM:
End of explanation
diff_sd = sd_mne_psf - sd_dspm_psf
brain_sd_diff = diff_sd.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=6,
clim=dict(kind='value', pos_lims=(0., 1., 2.)))
brain_sd_diff.add_text(0.1, 0.9, 'SD MNE-dSPM', 'title', font_size=16)
Explanation: Subtract the two distributions and plot this difference:
End of explanation |
229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create Better Graphs with Bokeh
This guide walks through the process of creating better graphs with bokeh. The coolest thing about bokeh is that it lets you create interactive plots. We will test out this functionality below.
Basic Plotting
As always, let's begin by importing the libraries that we will be using for this tutorial.
Step1: You'll notice that we didn't import the entire bokeh library, rather, we imported specific modules and methods. This is to prevent your python script from using up more computation time and memory than is necessary. Bokeh is a large library, and importing the entire package is rather reckless. <br>
<br>
The first thing that we are going to do is allow the bokeh library to produce graphs in a python notebook.
Step2: Now let's plot something pretty. To start with, we are going to plot a Lorenz system. The code below is taken from a bokeh demo
Step3: Interactive Plotting
In order to make these plots interactive in a jupyter notebook, we make one more import. | Python Code:
import numpy as np
from scipy.integrate import odeint
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
Explanation: Create Better Graphs with Bokeh
This guide walks through the process of creating better graphs with bokeh. The coolest thing about bokeh is that it lets you create interactive plots. We will test out this functionality below.
Basic Plotting
As always, let's begin by importing the libraries that we will be using for this tutorial.
End of explanation
output_notebook()
Explanation: You'll notice that we didn't import the entire bokeh library, rather, we imported specific modules and methods. This is to prevent your python script from using up more computation time and memory than is necessary. Bokeh is a large library, and importing the entire package is rather reckless. <br>
<br>
The first thing that we are going to do is allow the bokeh library to produce graphs in a python notebook.
End of explanation
sigma = 10
rho = 28
beta = 8.0/3
theta = 3 * np.pi / 4
def lorenz(xyz, t):
x, y, z = xyz
x_dot = sigma * (y - x)
y_dot = x * rho - x * z - y
z_dot = x * y - beta* z
return [x_dot, y_dot, z_dot]
initial = (-10, -7, 35)
t = np.arange(0, 100, 0.006)
solution = odeint(lorenz, initial, t)
x = solution[:, 0]
y = solution[:, 1]
z = solution[:, 2]
xprime = np.cos(theta) * x - np.sin(theta) * y
colors = ["#C6DBEF", "#9ECAE1", "#6BAED6", "#4292C6", "#2171B5", "#08519C", "#08306B",]
p = figure(title="lorenz example")
p.multi_line(np.array_split(xprime, 7), np.array_split(z, 7),
line_color=colors, line_alpha=0.8, line_width=1.5)
show(p) # open a browser
Explanation: Now let's plot something pretty. To start with, we are going to plot a Lorenz system. The code below is taken from a bokeh demo
End of explanation
from ipywidgets import interact
from bokeh.io import push_notebook
Explanation: Interactive Plotting
In order to make these plots interactive in a jupyter notebook, we make one more import.
End of explanation |
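The notebook stops after this import, so as a hedged illustration of how interact and push_notebook are typically combined (exact details vary between Bokeh versions; the names below are my own):
import numpy as np
from bokeh.plotting import figure, show

x = np.linspace(0, 2 * np.pi, 200)
p = figure(title="interactive sine")
r = p.line(x, np.sin(x))
handle = show(p, notebook_handle=True)   # keep a handle so the plot can be updated in place

def update(freq=1.0):
    r.data_source.data['y'] = np.sin(freq * x)
    push_notebook(handle=handle)         # push the new data to the displayed plot

interact(update, freq=(0.5, 5.0))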
230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Automatically rewrite TF 1.x and compat.v1 API symbols
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Clone the tensorflow/models git repository so you have some code to test on
Step3: Read the help
The script should be installed with TensorFlow. Here is the builtin help
Step4: Example TF1 code
Here is a simple TensorFlow 1.0 script
Step5: With TensorFlow 2.x installed it does not run
Step6: Single file
The script can be run on a single Python file
Step7: The script will print errors if it can not find a fix for the code.
Directory tree
Typical projects, including this simple example, will use much more than one file. Typically want to update an entire package, so the script can also be run on a directory tree
Step8: Note the one warning about the dataset.make_one_shot_iterator function.
Now the script works in with TensorFlow 2.x
Step9: Detailed report
The script also reports a list of detailed changes. In this example it found one possibly unsafe transformation and included a warning at the top of the file
Step10: Note again the one warning about the Dataset.make_one_shot_iterator function.
In other cases the output will explain the reasoning for non-trivial changes
Step11: Here is the modified file contents, note how the script adds argument names to deal with moved and renamed arguments
Step12: A larger project might contain a few errors. For example convert the deeplab model
Step13: It produced the output files
Step14: But there were errors. The report will help you pin-point what you need to fix before this will run. Here are the first three errors
Step15: "Safety" mode
The conversion script also has a less invasive SAFETY mode that simply changes the imports to use the tensorflow.compat.v1 module | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
print(tf.__version__)
Explanation: Automatically rewrite TF 1.x and compat.v1 API symbols
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/upgrade">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/upgrade.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/upgrade.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/upgrade.ipynb">
<img src="https://www.tensorflow.org/images/download_logo_32px.png" />
Download notebook</a>
</td>
</table>
TensorFlow 2.x includes many API changes from TF 1.x and the tf.compat.v1 APIs, such as reordering arguments, renaming symbols, and changing default values for parameters. Manually performing all of these modifications would be tedious and prone to error. To streamline the changes, and to make your transition to TF 2.x as seamless as possible, the TensorFlow team has created the tf_upgrade_v2 utility to help transition legacy code to the new API.
Note: tf_upgrade_v2 is installed automatically for TensorFlow 1.13 and later (including all TF 2.x builds).
Typical usage is like this:
<pre class="devsite-terminal devsite-click-to-copy prettyprint lang-bsh">
tf_upgrade_v2 \
--intree my_project/ \
--outtree my_project_v2/ \
--reportfile report.txt
</pre>
It will accelerate your upgrade process by converting existing TensorFlow 1.x Python scripts to TensorFlow 2.x.
The conversion script automates many mechanical API transformations, though many APIs cannot be automatically migrated. It is also not able to fully make your code compatible with TF2 behaviors and APIs. So, it is only a part of your migration journey.
Compatibility modules
Certain API symbols can not be upgraded simply by using a string replacement. Those that cannot be automatically upgraded will be mapped to their locations in the compat.v1 module. This module replaces TF 1.x symbols like tf.foo with the equivalent tf.compat.v1.foo reference. If you are already using compat.v1 APIs by importing TF via import tensorflow.compat.v1 as tf, the tf_upgrade_v2 script will attempt to convert these usages to the non-compat APIs where possible. Note that while some compat.v1 APIs are compatible with TF2.x behaviors, many are not. Therefore, it's recommended to manually proofread replacements and migrate them to new APIs in the tf.* namespace instead of tf.compat.v1 namespace as quickly as possible.
Because of TensorFlow 2.x module deprecations (for example, tf.flags and tf.contrib), some changes can not be worked around by switching to compat.v1. Upgrading this code may require using an additional library (for example, absl.flags) or switching to a package in tensorflow/addons.
Recommended upgrade process
The rest of this guide demonstrates how to use the symbol-rewriting script. While the script is easy to use, it is strongly recommended that you use the script as part of the following process:
Unit Test: Ensure that the code you’re upgrading has a unit test suite with reasonable coverage. This is Python code, so the language won’t protect you from many classes of mistakes. Also ensure that any dependency you have has already been upgraded to be compatible with TensorFlow 2.x.
Install TensorFlow 1.15: Upgrade your TensorFlow to the latest TensorFlow 1.x version, at least 1.15. This includes the final TensorFlow 2.0 API in tf.compat.v2.
Test With 1.15: Ensure your unit tests pass at this point. You’ll be running them repeatedly as you upgrade so starting from green is important.
Run the upgrade script: Run tf_upgrade_v2 on your entire source tree, tests included. This will upgrade your code to a format where it only uses symbols available in TensorFlow 2.0. Deprecated symbols will be accessed with tf.compat.v1. These will eventually require manual attention, but not immediately.
Run the converted tests with TensorFlow 1.15: Your code should still run fine in TensorFlow 1.15. Run your unit tests again. Any error in your tests here means there’s a bug in the upgrade script. Please let us know.
Check the upgrade report for warnings and errors: The script writes a report file that explains any conversions you should double-check, or any manual action you need to take. For example: Any remaining instances of contrib will require manual action to remove. Please consult the RFC for more instructions.
Install TensorFlow 2.x: At this point it should be safe to switch to TensorFlow 2.x binaries, even if you are running with legacy behaviors
Test with v1.disable_v2_behavior: Re-running your tests with a v1.disable_v2_behavior() in the tests' main function should give the same results as running under 1.15.
Enable V2 Behavior: Now that your tests work using the TF2 binaries, you can now begin migrating your code to avoiding tf.estimators and only using supported TF2 behaviors (with no TF2 behavior disabling). See the Migration guides for details.
Using the symbol-rewriting tf_upgrade_v2 script
Setup
Before getting started ensure that TensorFlow 2.x is installed.
End of explanation
!git clone --branch r1.13.0 --depth 1 https://github.com/tensorflow/models
Explanation: Clone the tensorflow/models git repository so you have some code to test on:
End of explanation
!tf_upgrade_v2 -h
Explanation: Read the help
The script should be installed with TensorFlow. Here is the builtin help:
End of explanation
!head -n 65 models/samples/cookbook/regression/custom_regression.py | tail -n 10
Explanation: Example TF1 code
Here is a simple TensorFlow 1.0 script:
End of explanation
!(cd models/samples/cookbook/regression && python custom_regression.py)
Explanation: With TensorFlow 2.x installed it does not run:
End of explanation
!tf_upgrade_v2 \
--infile models/samples/cookbook/regression/custom_regression.py \
--outfile /tmp/custom_regression_v2.py
Explanation: Single file
The script can be run on a single Python file:
End of explanation
# update the .py files and copy all the other files to the outtree
!tf_upgrade_v2 \
--intree models/samples/cookbook/regression/ \
--outtree regression_v2/ \
--reportfile tree_report.txt
Explanation: The script will print errors if it can not find a fix for the code.
Directory tree
Typical projects, including this simple example, will use much more than one file. Typically want to update an entire package, so the script can also be run on a directory tree:
End of explanation
!(cd regression_v2 && python custom_regression.py 2>&1) | tail
Explanation: Note the one warning about the dataset.make_one_shot_iterator function.
Now the script works in with TensorFlow 2.x:
Note that because the tf.compat.v1 module is included in TF 1.15, the converted script will also run in TensorFlow 1.15.
End of explanation
!head -n 20 tree_report.txt
Explanation: Detailed report
The script also reports a list of detailed changes. In this example it found one possibly unsafe transformation and included a warning at the top of the file:
End of explanation
%%writefile dropout.py
import tensorflow as tf
d = tf.nn.dropout(tf.range(10), 0.2)
z = tf.zeros_like(d, optimize=False)
!tf_upgrade_v2 \
--infile dropout.py \
--outfile dropout_v2.py \
--reportfile dropout_report.txt > /dev/null
!cat dropout_report.txt
Explanation: Note again the one warning about the Dataset.make_one_shot_iterator function.
In other cases the output will explain the reasoning for non-trivial changes:
End of explanation
!cat dropout_v2.py
Explanation: Here is the modified file contents, note how the script adds argument names to deal with moved and renamed arguments:
End of explanation
!tf_upgrade_v2 \
--intree models/research/deeplab \
--outtree deeplab_v2 \
--reportfile deeplab_report.txt > /dev/null
Explanation: A larger project might contain a few errors. For example convert the deeplab model:
End of explanation
!ls deeplab_v2
Explanation: It produced the output files:
End of explanation
!cat deeplab_report.txt | grep -i models/research/deeplab | grep -i error | head -n 3
Explanation: But there were errors. The report will help you pin-point what you need to fix before this will run. Here are the first three errors:
End of explanation
!cat dropout.py
!tf_upgrade_v2 --mode SAFETY --infile dropout.py --outfile dropout_v2_safe.py > /dev/null
!cat dropout_v2_safe.py
Explanation: "Safety" mode
The conversion script also has a less invasive SAFETY mode that simply changes the imports to use the tensorflow.compat.v1 module:
End of explanation |
231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forecasting the average salary level in Russia
Monthly salary data are available for every month from January 1993 to August 2016. The task is to analyse the data, select the best forecasting model for them within the ARIMA class, and build a forecast for every month two years ahead of the end of the data.
Step1: 1. Visual analysis of the series
Load the data and plot the time series
Step2: The visual analysis of the series shows that the data have a clear increasing trend and seasonality.
The series is obviously non-stationary, but for completeness we check stationarity with the Dickey-Fuller test and also perform an STL decomposition of the series
Step3: 2. Stabilising the variance
The Dickey-Fuller test does not reject the hypothesis of non-stationarity.
The time series has a changing variance, so we apply the Box-Cox transformation to stabilise it.
Step4: 3. Choosing the order of differencing
To make the series stationary, we try seasonal differencing, run an STL decomposition on the differenced series and check stationarity
Step5: The hypothesis of non-stationarity is rejected, and visually the series looks better: there is no obvious trend any more, but the data still look rather unpredictable. Let us apply ordinary differencing as well
Step6: The hypothesis of non-stationarity is still rejected, and visually the series looks even better: the spread of values is smaller and there are no alternating rising and falling segments.
4. Choosing initial approximations for p,q,P,Q
Let us plot the ACF and PACF of the resulting series
Step7: The positions of the lags in the correlogram suggest the initial approximations
Step8: Fit the model for every combination of parameters in order to find the best one according to the AIC criterion
Step9: The best model
Step10: 6. Analysing the residuals of the fitted model
The residuals
Step11: The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually) and not autocorrelated (confirmed by the Ljung-Box test and the correlogram). Let us see how well the model describes the data
Step12: 7. Forecasting
Let us build a forecast for every month two years ahead of the end of the data | Python Code:
%pylab inline
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import warnings
from itertools import product
def invboxcox(y,lmbda):
if lmbda == 0:
return(np.exp(y))
else:
return(np.exp(np.log(lmbda*y+1)/lmbda))
Explanation: Forecasting the average salary level in Russia
Monthly salary data are available for every month from January 1993 to August 2016. The task is to analyse the data, select the best forecasting model for them within the ARIMA class, and build a forecast for every month two years ahead of the end of the data.
End of explanation
salary = pd.read_csv('WAG_C_M.csv', ';', index_col=['month'], parse_dates=['month'], dayfirst=True)
plt.figure(figsize(15,7))
salary.WAG_C_M.plot()
plt.ylabel('Month average salary')
pylab.show()
Explanation: 1. Visual analysis of the series
Load the data and plot the time series
End of explanation
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(salary.WAG_C_M).plot()
print "Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(salary.WAG_C_M)[1]
Explanation: The visual analysis of the series shows that the data have a clear increasing trend and seasonality.
The series is obviously non-stationary, but for completeness we check stationarity with the Dickey-Fuller test and also perform an STL decomposition of the series:
End of explanation
salary['salary_box'], lmbda = stats.boxcox(salary.WAG_C_M)
plt.figure(figsize(15,7))
salary.salary_box.plot()
plt.ylabel(u'Transformed average salary')
print "Оптимальный параметр преобразования Бокса-кокса: %f" % lmbda
print "Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box)[1]
Explanation: 2. Stabilising the variance
The Dickey-Fuller test does not reject the hypothesis of non-stationarity.
The time series has a changing variance, so we apply the Box-Cox transformation to stabilise it.
End of explanation
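For reference, the Box-Cox transformation used here is $y^{(\lambda)} = \frac{y^{\lambda} - 1}{\lambda}$ for $\lambda \neq 0$ and $y^{(\lambda)} = \ln y$ for $\lambda = 0$; the invboxcox helper defined at the top of the notebook is exactly its inverse, which is later used to map the fitted values and forecasts back to the original scale.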
salary['salary_box_diff'] = salary.salary_box - salary.salary_box.shift(12)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(salary.salary_box_diff[12:]).plot()
print "Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box_diff[12:])[1]
Explanation: 3. Choosing the order of differencing
To make the series stationary, we try seasonal differencing, run an STL decomposition on the differenced series and check stationarity:
End of explanation
salary['salary_box_diff2'] = salary.salary_box_diff - salary.salary_box_diff.shift(1)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(salary.salary_box_diff2[13:]).plot()
print "Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box_diff2[13:])[1]
Explanation: The hypothesis of non-stationarity is rejected, and visually the series looks better: there is no obvious trend any more, but the data still look rather unpredictable. Let us apply ordinary differencing as well:
End of explanation
plt.figure(figsize(15,8))
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(salary.salary_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(salary.salary_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
Explanation: The hypothesis of non-stationarity is still rejected, and visually the series looks even better: the spread of values is smaller and there are no alternating rising and falling segments.
4. Choosing initial approximations for p,q,P,Q
Let us plot the ACF and PACF of the resulting series:
End of explanation
ps = range(0, 6)
d=1
qs = range(0, 6)
Ps = range(0, 2)
D=1
Qs = range(0, 1)
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
len(parameters_list)
Explanation: The positions of the lags in the correlogram suggest the initial approximations: Q=0, q=5, P=1, p=5
5. Training and comparing the candidate models, choosing the winner
Define the sequence of parameter values to search over
End of explanation
results = []
best_aic = float("inf")
warnings.filterwarnings('ignore')
for param in parameters_list:
    #try/except is needed because the model fails to fit for some parameter combinations
try:
model=sm.tsa.statespace.SARIMAX(salary.salary_box, order=(param[0], d, param[1]),
seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)
    #print the parameters for which the model does not fit and move on to the next combination
except ValueError:
print 'wrong parameters:', param
continue
aic = model.aic
    #keep the best model, its aic and its parameters
if aic < best_aic:
best_model = model
best_aic = aic
best_param = param
results.append([param, model.aic])
warnings.filterwarnings('default')
result_table = pd.DataFrame(results)
result_table.columns = ['parameters', 'aic']
print result_table.sort_values(by = 'aic', ascending=True).head()
Explanation: Fit the model for every parameter combination and select the best one according to the AIC criterion
End of explanation
print best_model.summary()
Explanation: The best model:
End of explanation
plt.figure(figsize(15,8))
plt.subplot(211)
best_model.resid[13:].plot()
plt.ylabel(u'Residuals')
ax = plt.subplot(212)
sm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)
print "Student's t-test: p=%f" % stats.ttest_1samp(best_model.resid[13:], 0)[1]
print "Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1]
Explanation: 6. Residual analysis of the fitted model
Residuals:
End of explanation
salary['model'] = invboxcox(best_model.fittedvalues, lmbda)
plt.figure(figsize(15,7))
salary.WAG_C_M.plot()
salary.model[13:].plot(color='r')
plt.ylabel('Average salary')
pylab.show()
Explanation: The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually) and not autocorrelated (confirmed by the Ljung-Box test and the correlogram). Let us see how well the model describes the data:
End of explanation
salary2 = salary[['WAG_C_M']]
date_list = [datetime.datetime.strptime("01.09.2016", "%d.%m.%Y") + relativedelta(months=x) for x in range(0,24)]
future = pd.DataFrame(index=date_list, columns= salary2.columns)
salary2 = pd.concat([salary2, future])
salary2['forecast'] = invboxcox(best_model.predict(start=284, end=307), lmbda)
plt.figure(figsize(15,7))
salary2.WAG_C_M.plot()
salary2.forecast.plot(color='r')
plt.ylabel('Average salary')
pylab.show()
Explanation: 7. Forecasting
Build a forecast for every month up to two years ahead of the end of the data
End of explanation |
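Optionally, the two-year forecast can be written out with pandas for later use (a small optional step, not in the original notebook; it assumes the forecast column holds values only for the 24 future months, as constructed above):
salary2[['forecast']].dropna().to_csv('salary_forecast.csv')  # keeps only the forecast months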
232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multidimensional data - Matrices and Images
Step1: Let us work with the matrix
Step2: numpy matrix multiply uses the dot() function
Step3: Caution: the * operator will just multiply the matrices on an element-by-element basis
Step4: Solving system of linear equations
$$
\begin{array}{c}
x + 2y = 4 \\
x + y = 3
\end{array}
\hspace{2cm}
\left[ \begin{array}{cc} 1 & 2 \\ 1 & 1 \end{array} \right]
\left[ \begin{array}{c} x \\ y \end{array} \right]
=
\left[ \begin{array}{c} 4 \\ 3 \end{array} \right]
\hspace{2cm}
{\bf A}x = {\bf b}
\hspace{2cm}
\left[ \begin{array}{c} x \\ y \end{array} \right]
=
\left[ \begin{array}{cc} 1 & 2 \\ 1 & 1 \end{array} \right]^{-1}
\left[ \begin{array}{c} 4 \\ 3 \end{array} \right]
=
\left[ \begin{array}{c} 2 \\ 1 \end{array} \right]
$$
Step5: System of 3 equations example (Numpy)
Step6: System of 3 equations example (SymPy) - Python's Symbolic Math Package
Step7: SymPy is slower than NumPy
Step8: Images are just 2-d arrays - imshow will display 2-d arrays as images
Step9: Read in some data
Step10: Math on images applies to every value (pixel)
Step11: Show the image representation of I with a colorbar
Step12: Colormap reference
Step13: WARNING! Common image formats DO NOT preserve dynamic range of original data!!
Common image formats
Step14: Creating images from math
Step15: Fancy Image Display
Step16: Reading in images (imread) - Common Formats
Step17: Images are just arrays that can be sliced.
For common image formats the origin is the upper left hand corner
Step18: Simple image manipulation
Step19: ndimage can do much more
Step20: You can use masks on images
Step21: You can add and subtract images
Step22: The two images above may look the same but they are not! Subtracting the two images reveals the truth.
Step23: FITS Tables - An astronomical example
Stellar spectra data from the ESO Library of Stellar Spectra
Step24: Pseudocolor - All color astronomy images are fake.
Color images are composed of three 2-d images
Step25: We just want to read in one of the three channels
Step26: Need to create a blank 3-d array to hold all of the images
Step27: Fill the array with the filtered images | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
plt.style.use('ggplot')
plt.rc('axes', grid=False) # turn off the background grid for images
Explanation: Multidimensional data - Matrices and Images
End of explanation
my_matrix = np.array([[1,2],[1,1]])
print(my_matrix.shape)
print(my_matrix)
my_matrix_transposed = np.transpose(my_matrix)
print(my_matrix_transposed)
my_matrix_inverse = linalg.inv(my_matrix)
print(my_matrix_inverse)
Explanation: Let us work with the matrix:
$
\left[
\begin{array}{cc}
1 & 2 \\
1 & 1
\end{array}
\right]
$
End of explanation
my_matrix_inverse.dot(my_matrix)
Explanation: numpy matrix multiply uses the dot() function:
End of explanation
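As a small aside (not part of the original notebook), on Python 3.5+ NumPy also supports the @ operator, which performs the same matrix multiplication as dot():
product = my_matrix_inverse @ my_matrix   # equivalent to my_matrix_inverse.dot(my_matrix)
print(product)                            # approximately the 2x2 identity matrix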
my_matrix_inverse * my_matrix_inverse
Explanation: Caution: the * operator will just multiply the matrices on an element-by-element basis:
End of explanation
A = np.array([[1,2],[1,1]])
print(A)
b = np.array([[4],[3]])
print(b)
# Solve by inverting A and then multiplying by b
linalg.inv(A).dot(b)
# Cleaner looking
linalg.solve(A,b)
Explanation: Solving system of linear equations
$$
\begin{array}{c}
x + 2y = 4 \\
x + y = 3
\end{array}
\hspace{2cm}
\left[ \begin{array}{cc} 1 & 2 \\ 1 & 1 \end{array} \right]
\left[ \begin{array}{c} x \\ y \end{array} \right]
=
\left[ \begin{array}{c} 4 \\ 3 \end{array} \right]
\hspace{2cm}
{\bf A}x = {\bf b}
\hspace{2cm}
\left[ \begin{array}{c} x \\ y \end{array} \right]
=
\left[ \begin{array}{cc} 1 & 2 \\ 1 & 1 \end{array} \right]^{-1}
\left[ \begin{array}{c} 4 \\ 3 \end{array} \right]
=
\left[ \begin{array}{c} 2 \\ 1 \end{array} \right]
$$
End of explanation
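A quick sanity check, added here as a sketch: the solution returned by linalg.solve should reproduce b when multiplied by A, up to floating-point error.
x = linalg.solve(A, b)             # solve A x = b for the 2x2 system above
print(np.allclose(A.dot(x), b))    # expect True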
A = np.array([[1,3,5],[2,5,1],[2,3,8]])
b = np.array([[10],[8],[3]])
print(linalg.inv(A))
print(linalg.solve(A,b))
Explanation: System of 3 equations example (Numpy):
$$
\begin{array}{c}
x + 3y + 5z = 10 \\
2x + 5y + z = 8 \\
2x + 3y + 8z = 3
\end{array}
\hspace{3cm}
\left[ \begin{array}{ccc} 1 & 3 & 5 \\ 2 & 5 & 1 \\ 2 & 3 & 8 \end{array} \right]
\left[ \begin{array}{c} x \\ y \\ z \end{array} \right]
=
\left[ \begin{array}{c} 10 \\ 8 \\ 3 \end{array} \right]
$$
End of explanation
import sympy as sym
AA = sym.Matrix([[1,3,5],[2,5,1],[2,3,8]])
bb = sym.Matrix([[10],[8],[3]])
print(AA**-1)
print(AA**-1 * bb)
Explanation: System of 3 equations example (SymPy) - Python's Symbolic Math Package
End of explanation
%timeit AA**-1 * bb
%timeit linalg.solve(A,b)
Explanation: SymPy is slower than NumPy
End of explanation
print(A)
plt.imshow(A, interpolation='nearest', cmap=plt.cm.Blues);
Explanation: Images are just 2-d arrays - imshow will display 2-d arrays as images
End of explanation
I = np.load("test_data.npy") # load in a saved numpy array
I.ndim, I.shape, I.dtype
print("The minimum value of the array I is {0:.2f}".format(I.min()))
print("The maximum value of the array I is {0:.2f}".format(I.max()))
print("The mean value of the array I is {0:.2f}".format(I.mean()))
print("The standard deviation of the array I is {0:.2f}".format(I.std()))
#flatten() collapses n-dimensional data into 1-d
plt.hist(I.flatten(),bins=30);
Explanation: Read in some data
End of explanation
II = I + 8
print("The minimum value of the array II is {0:.2f}".format(II.min()))
print("The maximum value of the array II is {0:.2f}".format(II.max()))
print("The mean value of the array II is {0:.2f}".format(II.mean()))
print("The standard deviation of the array II is {0:.2f}".format(II.std()))
Explanation: Math on images applies to every value (pixel)
End of explanation
plt.imshow(I, cmap=plt.cm.gray)
plt.colorbar();
Explanation: Show the image representation of I with a colorbar
End of explanation
fig, ax = plt.subplots(1,5,sharey=True)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(I, cmap=plt.cm.viridis)
ax[0].set_xlabel('viridis')
ax[1].imshow(I, cmap=plt.cm.hot)
ax[1].set_xlabel('hot')
ax[2].imshow(I, cmap=plt.cm.magma)
ax[2].set_xlabel('magma')
ax[3].imshow(I, cmap=plt.cm.spectral)
ax[3].set_xlabel('spectral')
ax[4].imshow(I, cmap=plt.cm.gray)
ax[4].set_xlabel('gray')
Explanation: Colormap reference: http://matplotlib.org/examples/color/colormaps_reference.html
End of explanation
plt.imsave('Splash.png', I, cmap=plt.cm.gray) # Write the array I to a PNG file
Ipng = plt.imread('Splash.png') # Read in the PNG file
print("The original data has a min = {0:.2f} and a max = {1:.2f}".format(I.min(), I.max()))
print("The PNG file has a min = {0:.2f} and a max = {1:.2f}".format(Ipng.min(), Ipng.max()))
Explanation: WARNING! Common image formats DO NOT preserve dynamic range of original data!!
Common image formats: jpg, gif, png, tiff
Common image formats will re-scale your data values to [0:1]
Common image formats are NOT suitable for scientific data!
End of explanation
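If the full dynamic range matters, one simple alternative (a small sketch, not in the original notebook) is to save the raw array with NumPy rather than a common image format:
np.save('Splash_raw.npy', I)        # lossless: preserves dtype, min and max
I_raw = np.load('Splash_raw.npy')
print(I_raw.min(), I_raw.max())     # matches the original array, unlike the PNG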
X = np.linspace(-5, 5, 500)
Y = np.linspace(-5, 5, 500)
X, Y = np.meshgrid(X, Y) # turns two 1-d arrays (X, Y) into two 2-d coordinate grids
Z = np.sqrt(X**2+Y**2)+np.sin(X**2+Y**2)
Z.min(), Z.max(), Z.mean()
Explanation: Creating images from math
End of explanation
from matplotlib.colors import LightSource
ls = LightSource(azdeg=0,altdeg=40)
shadedfig = ls.shade(Z,plt.cm.copper)
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(shadedfig)
contlevels = [1,2,Z.mean()]
ax[1].axis('equal')
ax[1].contour(Z,contlevels)
ax[2].imshow(shadedfig)
ax[2].contour(Z,contlevels);
Explanation: Fancy Image Display
End of explanation
I2 = plt.imread('doctor5.png')
print("The image I2 has a shape [height,width] of {0}".format(I2.shape))
print("The image I2 is made up of data of type {0}".format(I2.dtype))
print("The image I2 has a maximum value of {0}".format(I2.max()))
print("The image I2 has a minimum value of {0}".format(I2.min()))
plt.imshow(I2,cmap=plt.cm.gray);
Explanation: Reading in images (imread) - Common Formats
End of explanation
fig, ax = plt.subplots(1,4)
fig.set_size_inches(12,6)
fig.tight_layout()
# You can show just slices of the image - Remember: The origin is the upper left corner
ax[0].imshow(I2, cmap=plt.cm.gray)
ax[0].set_xlabel('Original')
ax[1].imshow(I2[0:300,0:100], cmap=plt.cm.gray)
ax[1].set_xlabel('[0:300,0:100]') # 300 rows, 100 columns
ax[2].imshow(I2[:,0:100], cmap=plt.cm.gray) # ":" = whole range
ax[2].set_xlabel('[:,0:100]') # all rows, 100 columns
ax[3].imshow(I2[:,::-1], cmap=plt.cm.gray);
ax[3].set_xlabel('[:,::-1]') # reverse the columns
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
CutLine = 300
ax[0].imshow(I2, cmap=plt.cm.gray)
ax[0].hlines(CutLine, 0, 194, color='b', linewidth=3)
ax[1].plot(I2[CutLine,:], color='b', linewidth=3)
ax[1].set_xlabel("X Value")
ax[1].set_ylabel("Pixel Value")
Explanation: Images are just arrays that can be sliced.
For common image formats the origin is the upper left hand corner
End of explanation
from scipy import ndimage
fig, ax = plt.subplots(1,5)
fig.set_size_inches(14,6)
fig.tight_layout()
ax[0].imshow(I2, cmap=plt.cm.gray)
I3 = ndimage.rotate(I2,45,cval=0.75) # cval is the value to set pixels outside of image
ax[1].imshow(I3, cmap=plt.cm.gray) # Rotate and reshape
I4 = ndimage.rotate(I2,45,reshape=False,cval=0.75) # Rotate and do not reshape
ax[2].imshow(I4, cmap=plt.cm.gray)
I5 = ndimage.shift(I2,(10,30),cval=0.75) # Shift image
ax[3].imshow(I5, cmap=plt.cm.gray)
I6 = ndimage.gaussian_filter(I2,5) # Blur image
ax[4].imshow(I6, cmap=plt.cm.gray);
Explanation: Simple image manipulation
End of explanation
import astropy.io.fits as fits
x = fits.open('bsg01.fits')
x.info()
x[0].header
xd = x[0].data
print("The image x has a shape [height,width] of {0}".format(xd.shape))
print("The image x is made up of data of type {0}".format(xd.dtype))
print("The image x has a maximum value of {0}".format(xd.max()))
print("The image x has a minimum value of {0}".format(xd.min()))
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(xd,cmap=plt.cm.gray)
ax[1].hist(xd.flatten(),bins=20);
Explanation: ndimage can do much more: http://scipy-lectures.github.io/advanced/image_processing/
FITS file (Flexible Image Transport System) - Standard Astro File Format
FITS format preserves dynamic range of data
FITS format can include lists, tables, images, and combinations of different types of data
End of explanation
CopyData = np.copy(xd)
CutOff = 40
mask = np.where(CopyData > CutOff)
CopyData[mask] = 50 # You can not just throw data away, you have to set it to something.
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(CopyData,cmap=plt.cm.gray)
ax[1].hist(CopyData.flatten(),bins=20);
Explanation: You can use masks on images
End of explanation
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(xd, cmap=plt.cm.gray)
# Open another file 'bsg02.fits'
y = fits.open('bsg02.fits')
yd = y[0].data
ax[1].imshow(yd, cmap=plt.cm.gray);
Explanation: You can add and subtract images
End of explanation
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(xd, cmap=plt.cm.gray)
ax[1].imshow(yd, cmap=plt.cm.gray)
z = xd - yd # Subtract the images pixel by pixel
ax[2].imshow(z, cmap=plt.cm.gray);
Explanation: The two images above may look the same but they are not! Subtracting the two images reveals the truth.
End of explanation
S = fits.open('SolarSpectra.fits')
S.info()
Data = S[0].data
Head = S[0].header
Head
# The FITS header has the information to make an array of wavelengths
Start = Head['CRVAL1']
Number = Head['NAXIS1']
Delta = Head['CDELT1']
End = Start + (Number * Delta)
Wavelength = np.arange(Start,End,Delta)
fig, ax = plt.subplots(2,1)
fig.set_size_inches(11,8.5)
fig.tight_layout()
# Full spectra
ax[0].plot(Wavelength, Data, color='b')
ax[0].set_ylabel("Flux")
ax[0].set_xlabel("Wavelength [angstroms]")
# Just the visible range with the hydrogen Balmer lines
ax[1].set_xlim(4000,7000)
ax[1].set_ylim(0.6,1.2)
ax[1].plot(Wavelength, Data, color='b')
ax[1].set_ylabel("Flux")
ax[1].set_xlabel("Wavelength [angstroms]")
H_Balmer = [6563,4861,4341,4102,3970,3889,3835,3646]
ax[1].vlines(H_Balmer,0,2, color='r', linewidth=3, alpha = 0.25)
Explanation: FITS Tables - An astronomical example
Stellar spectra data from the ESO Library of Stellar Spectra
End of explanation
redfilter = plt.imread('sphereR.jpg')
redfilter.shape,redfilter.dtype
Explanation: Pseudocolor - All color astronomy images are fake.
Color images are composed of three 2-d images: <img src="images/Layers.png" width="150">
JPG images are 3-d, even grayscale images
End of explanation
redfilter = plt.imread('sphereR.jpg')[:,:,0]
redfilter.shape,redfilter.dtype
plt.imshow(redfilter,cmap=plt.cm.gray);
greenfilter = plt.imread('sphereG.jpg')[:,:,0]
bluefilter = plt.imread('sphereB.jpg')[:,:,0]
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,3)
fig.tight_layout()
ax[0].set_title("Red Filter")
ax[1].set_title("Green Filter")
ax[2].set_title("Blue Filter")
ax[0].imshow(redfilter,cmap=plt.cm.gray)
ax[1].imshow(greenfilter,cmap=plt.cm.gray)
ax[2].imshow(bluefilter,cmap=plt.cm.gray);
Explanation: We just want to read in one of the three channels
End of explanation
rgb = np.zeros((480,640,3),dtype='uint8')
print(rgb.shape, rgb.dtype)
plt.imshow(rgb,cmap=plt.cm.gray);
Explanation: Need to create a blank 3-d array to hold all of the images
End of explanation
rgb[:,:,0] = redfilter
rgb[:,:,1] = greenfilter
rgb[:,:,2] = bluefilter
fig, ax = plt.subplots(1,4)
fig.set_size_inches(14,3)
fig.tight_layout()
ax[0].set_title("Red Filter")
ax[1].set_title("Green Filter")
ax[2].set_title("Blue Filter")
ax[3].set_title("All Filters Stacked")
ax[0].imshow(redfilter,cmap=plt.cm.gray)
ax[1].imshow(greenfilter,cmap=plt.cm.gray)
ax[2].imshow(bluefilter,cmap=plt.cm.gray)
ax[3].imshow(rgb,cmap=plt.cm.gray);
print("The image rgb has a shape [height,width] of {0}".format(rgb.shape))
print("The image rgb is made up of data of type {0}".format(rgb.dtype))
print("The image rgb has a maximum value of {0}".format(rgb.max()))
print("The image rgb has a minimum value of {0}".format(rgb.min()))
rgb[:,:,0] = redfilter * 1.5
plt.imshow(rgb)
Explanation: Fill the array with the filtered images
End of explanation |
233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Behaviour suite
This is the official results page for bsuite. You can use this to
Step1: Overall bsuite scores
Load your experiments below. We recommend a maximum of 5 result sets, for clarity of analysis.
The input to the load_bsuite function is a dict that maps from an experiment name of your choosing to the result path.
For an experiment that used CSV logging, this would map to the directory containing the results. For SQLite logging, this would map to the database file for that experiment.
Step2: Parsing the plot above
Step3: Parsing the plot above
Step4: Parsing the plot above
Step5: Parsing the plot above
Step6: MNIST
<img src="https
Step7: Parsing the plot above
Step8: Parsing the plot above
Step9: Parsing the plot above
Step10: Parsing the plot above
Step11: Parsing the plot above
Step12: Parsing the plot above
Step13: Parsing the plot above
Step14: Parsing the plot above
Step15: Parsing the plot above
Step16: Parsing the plot above
Step17: Parsing the plot above
Step18: Parsing the plot above
Step19: Parsing the plot above
Step20: Parsing the plot above
Step21: Parsing the plot above
Step22: Parsing the plot above
Step23: Parsing the plot above
Step24: Parsing the plot above
Step25: Parsing the plot above
Step26: Parsing the plot above
Step27: Parsing the plot above
Step28: Parsing the plot above
Step29: Parsing the plot above
Step30: Parsing the plot above
Step31: Parsing the plot above
Step32: Parsing the plot above
Step33: Parsing the plot above
Step34: Parsing the plot above
Step35: Parsing the plot above
Step36: Parsing the plot above
Step37: Parsing the plot above
Step38: Parsing the plot above
Step39: Parsing the plot above
Step40: Parsing the plot above
Step41: Parsing the plot above
Step42: Parsing the plot above
Step43: Parsing the plot above
Step44: Parsing the plot above
Step45: Parsing the plot above
Step46: Parsing the plot above
Step47: Parsing the plot above
Step48: Parsing the plot above
Step49: Parsing the plot above
Step50: Parsing the plot above
Step51: Parsing the plot above
Step52: Parsing the plot above
Step53: Parsing the plot above
Step54: Parsing the plot above
Step55: Parsing the plot above
Step56: Parsing the plot above
Step57: Parsing the plot above
Step58: Parsing the plot above
Step59: Parsing the plot above
Step60: Parsing the plot above
Step61: Parsing the plot above
Step62: Parsing the plot above
Step63: Parsing the plot above
Step64: Parsing the plot above
Step65: Parsing the plot above
Step66: Parsing the plot above
Step67: Parsing the plot above
Step68: Parsing the plot above
Step69: Parsing the plot above
Step70: Parsing the plot above | Python Code:
#@title Imports
! pip install --quiet git+git://github.com/deepmind/bsuite
import warnings
from bsuite.experiments import summary_analysis
from bsuite.logging import csv_load
from bsuite.logging import sqlite_load
import numpy as np
import pandas as pd
import plotnine as gg
pd.options.mode.chained_assignment = None
gg.theme_set(gg.theme_bw(base_size=16, base_family='serif'))
gg.theme_update(figure_size=(12, 8), panel_spacing_x=0.5, panel_spacing_y=0.5)
warnings.filterwarnings('ignore')
#@title Import experiment-specific analysis
from bsuite.experiments.bandit import analysis as bandit_analysis
from bsuite.experiments.bandit_noise import analysis as bandit_noise_analysis
from bsuite.experiments.bandit_scale import analysis as bandit_scale_analysis
from bsuite.experiments.cartpole import analysis as cartpole_analysis
from bsuite.experiments.cartpole_noise import analysis as cartpole_noise_analysis
from bsuite.experiments.cartpole_scale import analysis as cartpole_scale_analysis
from bsuite.experiments.cartpole_swingup import analysis as cartpole_swingup_analysis
from bsuite.experiments.catch import analysis as catch_analysis
from bsuite.experiments.catch_noise import analysis as catch_noise_analysis
from bsuite.experiments.catch_scale import analysis as catch_scale_analysis
from bsuite.experiments.deep_sea import analysis as deep_sea_analysis
from bsuite.experiments.deep_sea_stochastic import analysis as deep_sea_stochastic_analysis
from bsuite.experiments.discounting_chain import analysis as discounting_chain_analysis
from bsuite.experiments.memory_len import analysis as memory_len_analysis
from bsuite.experiments.memory_size import analysis as memory_size_analysis
from bsuite.experiments.mnist import analysis as mnist_analysis
from bsuite.experiments.mnist_noise import analysis as mnist_noise_analysis
from bsuite.experiments.mnist_scale import analysis as mnist_scale_analysis
from bsuite.experiments.mountain_car import analysis as mountain_car_analysis
from bsuite.experiments.mountain_car_noise import analysis as mountain_car_noise_analysis
from bsuite.experiments.mountain_car_scale import analysis as mountain_car_scale_analysis
from bsuite.experiments.umbrella_distract import analysis as umbrella_distract_analysis
from bsuite.experiments.umbrella_length import analysis as umbrella_length_analysis
Explanation: Behaviour suite
This is the official results page for bsuite. You can use this to:
- Get a snapshot of agent performance.
- Diagnose strengths/weaknesses of your agent.
- Leverage ready-made plots and analysis
End of explanation
#@title loading results from local data:
experiments = {} # Add results here
DF, SWEEP_VARS = sqlite_load.load_bsuite(experiments)
# Or
# DF, SWEEP_VARS = csv_load.load_bsuite(experiments)
#@title overall score as radar plot (double-click to show/hide code)
BSUITE_SCORE = summary_analysis.bsuite_score(DF, SWEEP_VARS)
BSUITE_SUMMARY = summary_analysis.ave_score_by_tag(BSUITE_SCORE, SWEEP_VARS)
__radar_fig__ = summary_analysis.bsuite_radar_plot(BSUITE_SUMMARY, SWEEP_VARS)
Explanation: Overall bsuite scores
Load your experiments below. We recommend a maximum of 5 result sets, for clarity of analysis.
The input to the load_bsuite function is a dict that maps from an experiment name of your choosing to the result path.
For an experiment that used CSV logging, this would map to the directory containing the results. For SQLite logging, this would map to the database file for that experiment.
End of explanation
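As an illustration only, a hypothetical mapping for two agents logged with CSV might look like the following (the agent names and paths are placeholders, not real results):
experiments = {
    'dqn': '/tmp/bsuite/dqn_results/',           # directory containing the CSV logs
    'boot_dqn': '/tmp/bsuite/boot_dqn_results/',
}
DF, SWEEP_VARS = csv_load.load_bsuite(experiments)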
#@title plotting overall score as bar (double-click to show/hide code)
summary_analysis.bsuite_bar_plot(BSUITE_SCORE, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Snapshot of agent behaviour across key metrics as measured by bsuite.
Length of each "spoke" represents score between 0 and 1.
For more detailed analysis, click into specific challenge domains.
Plotting scores per challenge in bar plot (click to show)
End of explanation
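If you want to keep these summary artefacts around, they can be persisted with plain matplotlib/pandas calls (a sketch; it assumes bsuite_radar_plot returned a standard matplotlib figure, as the assignment above suggests, and that BSUITE_SCORE is a pandas DataFrame):
__radar_fig__.savefig('bsuite_radar.png', bbox_inches='tight')  # save the radar plot
BSUITE_SCORE.to_csv('bsuite_scores.csv', index=False)           # export the per-experiment scores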
#@title compare agent performance on each challenge (double-click to show/hide code)
summary_analysis.bsuite_bar_plot_compare(BSUITE_SCORE, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Height of each bar is the score on each challenge domain.
Partially-finished runs are shown with transparent bars.
Parameter/agent sweeps are automatically faceted side by side.
For more detailed analysis, click into specific challenge domains.
End of explanation
#@title parsing data
bandit_df = DF[DF.bsuite_env == 'bandit'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'bandit', SWEEP_VARS).draw();
#@title plot average regret through learning (lower is better)
bandit_analysis.plot_learning(bandit_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Height of each bar is the score on each challenge domain.
Partially-finished runs are shown with transparent bars.
Each "facet" focuses on a separate environment.
This plot allows for easier comparison between agents.
For more detailed analysis, click into specific challenge domains.
Individual challenge domains
This section of the report contains specific analysis for each individual bsuite experiment.
Basic
We begin with a collection of very simple decision problems with standard analysis:
- Does the agent learn a reasonable rewarding policy?
- How quickly do they learn simple tasks?
We call these experiments "basic", since they are not particularly targeted at specific core issues.
Bandit
<img src="https://storage.cloud.google.com/bsuite-colab-images/bandit.png" alt="bandit diagram" height="300"/>
A simple independent-armed bandit problem.
The agent is faced with 11 actions with deterministic rewards [0.0, 0.1, .., 1.0] randomly assigned.
Run over 20 seeds for 10k episodes.
Score is 1 - 2 * average_regret at 10k episodes.
Must log episode, total_regret for standard analysis.
End of explanation
#@title plot performance by seed (higher is better)
bandit_analysis.plot_seeds(bandit_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of the agent averaged over 20 seeds.
Random policy has reward of 0 = regret of 0.5 = dashed line
Want to see a stable learning curve -> 0 and fast!
Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
mnist_df = DF[DF.bsuite_env == 'mnist'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'mnist', SWEEP_VARS).draw();
#@title plot average regret through learning (lower is better)
mnist_analysis.plot_learning(mnist_df, SWEEP_VARS).draw();
Explanation: MNIST
<img src="https://storage.cloud.google.com/bsuite-colab-images/mnist.png" alt="mnist diagram" height="300"/>
The "hello world" of deep learning, now as a contextual bandit.
Every timestep the agent must classify a random MNIST digit.
Reward +1 for correct, -1 for incorrect.
Run for 10k episodes, 20 seeds.
Score is percentage of successful classifications.
Must log episode, total_regret for standard analysis.
End of explanation
#@title plot performance by seed (higher is better)
mnist_analysis.plot_seeds(mnist_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of the agent averaged over 20 seeds.
Random policy has reward of 0 = regret of 1.8 = dashed line
Want to see a stable learning curve -> 0 and fast!
Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
catch_df = DF[DF.bsuite_env == 'catch'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'catch', SWEEP_VARS).draw();
#@title plot average regret through learning (lower is better)
catch_analysis.plot_learning(catch_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Catch
<img src="https://storage.cloud.google.com/bsuite-colab-images/catch.png" alt="catch diagram" height="300"/>
DeepMind's internal "hello world" for RL agents.
The environment is a 5x10 grid with a single falling block per episode (similar to Tetris).
The agent controls a single "paddle" pixel that it should use to "catch" the falling block.
If the agent catches the block reward +1, if the agent misses the block reward -1.
Run the agent for 10k episodes and 20 seeds.
Score is percentage of successful "catch" over first 10k episodes.
Must log episode, total_regret for standard analysis.
End of explanation
#@title plot performance by seed (higher is better)
catch_analysis.plot_seeds(catch_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of the agent averaged over 20 seeds.
Random policy has reward of 0 = regret of 1.6 = dashed line
Want to see a stable learning curve -> 0 and fast!
Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
mountain_car_df = DF[DF.bsuite_env == 'mountain_car'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'mountain_car', SWEEP_VARS).draw();
#@title plot average regret through learning (lower is better)
mountain_car_analysis.plot_learning(mountain_car_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Mountain car
<img src="https://storage.cloud.google.com/bsuite-colab-images/mountain_car.png" alt="mountaincar diagram" height="300"/>
A classic benchmark problem in RL.
The agent controls an underpowered car and must drive it out of a valley.
Reward of -1 each step until the car reaches the goal.
Maximum episode length of 1000 steps.
Run 1000 episodes for 20 seeds.
Score is based on regret against "good" policy that solves in 25 steps.
Must log episode, total_regret for standard analysis.
End of explanation
#@title plot performance by seed (higher is better)
mountain_car_analysis.plot_seeds(mountain_car_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of the agent averaged over 20 seeds.
Dashed line is at 415 = average regret of a random agent.
Want to see a stable learning curve -> 0 and fast!
Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
cartpole_df = DF[DF.bsuite_env == 'cartpole'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'cartpole', SWEEP_VARS).draw();
#@title plot average regret through learning (lower is better)
cartpole_analysis.plot_learning(cartpole_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Cartpole
<img src="https://storage.cloud.google.com/bsuite-colab-images/cartpole.png" alt="cartpole diagram" height="300"/>
A classic benchmark problem in RL.
The agent controls a cart on a frictionless plane.
The pole starts near to upright.
The observation is [x, x_dot, sin(theta), sin(theta)_dot, cos(theta), cos(theta)_dot, time_elapsed]
Episodes end once 1000 steps have occurred, or |x| is greater than 1.
Reward of +1 when pole > 0.8 height.
Run 1000 episodes for 20 seeds.
Score is percentage of timesteps balancing the pole.
Must log episode, total_regret for standard analysis.
End of explanation
#@title plot performance by seed (higher is better)
cartpole_analysis.plot_seeds(cartpole_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of the agent averaged over 20 seeds.
Maximum regret of 1000 per episode = dashed line
Want to see a stable learning curve -> 0 and fast!
Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
bandit_noise_df = DF[DF.bsuite_env == 'bandit_noise'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'bandit_noise', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
bandit_noise_analysis.plot_average(bandit_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Reward noise
To investigate the robustness of RL agents to noisy rewards, we repeat the "basic" experiments under differing levels of Gaussian noise.
This time we allocate the 20 different seeds across 5 levels of Gaussian noise $N(0, \sigma^2)$ for $\sigma$ = noise_scale = $[0.1, 0.3, 1, 3, 10]$ with 4 seeds each.
Bandit noise
<img src="https://storage.cloud.google.com/bsuite-colab-images/bandit.png" alt="bandit diagram" height="300"/>
A simple independent-armed bandit problem.
The agent is faced with 11 actions with deterministic rewards [0.0, 0.1, .., 1.0] randomly assigned.
Run noise_scale = [0.1, 0.3, 1., 3, 10] for 4 seeds for 10k episodes.
Score is 1 - 2 * average_regret at 10k episodes.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
bandit_noise_analysis.plot_learning(bandit_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for largest noise_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
bandit_noise_analysis.plot_seeds(bandit_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for largest noise_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
mnist_noise_df = DF[DF.bsuite_env == 'mnist_noise'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'mnist_noise', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
mnist_noise_analysis.plot_average(mnist_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
MNIST noise
<img src="https://storage.cloud.google.com/bsuite-colab-images/mnist.png" alt="mnist diagram" height="300"/>
The "hello world" of deep learning, now as a contextual bandit.
Every timestep the agent must classify a random MNIST digit.
Reward +1 for correct, -1 for incorrect.
Run noise_scale = [0.1, 0.3, 1., 3, 10] for 4 seeds for 10k episodes.
Score is percentage of successful classifications.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
mnist_noise_analysis.plot_learning(mnist_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for largest noise_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
mnist_noise_analysis.plot_seeds(mnist_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for largest noise_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
catch_noise_df = DF[DF.bsuite_env == 'catch_noise'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'catch_noise', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
catch_noise_analysis.plot_average(catch_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Catch noise
<img src="https://storage.cloud.google.com/bsuite-colab-images/catch.png" alt="catch diagram" height="300"/>
DeepMind's internal "hello world" for RL agents.
The environment is a 5x10 grid with a single falling block per episode (similar to Tetris).
The agent controls a single "paddle" pixel that it should use to "catch" the falling block.
If the agent catches the block reward +1, if the agent misses the block reward -1.
Run noise_scale = [0.1, 0.3, 1., 3, 10] for 4 seeds for 10k episodes.
Score is percentage of successful "catch" over first 10k episodes.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
catch_noise_analysis.plot_learning(catch_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for largest noise_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
catch_noise_analysis.plot_seeds(catch_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for largest noise_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
mountain_car_noise_df = DF[DF.bsuite_env == 'mountain_car_noise'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'mountain_car_noise', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
mountain_car_noise_analysis.plot_average(mountain_car_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Mountain car noise
<img src="https://storage.cloud.google.com/bsuite-colab-images/mountain_car.png" alt="mountaincar diagram" height="300"/>
A classic benchmark problem in RL.
The agent controls an underpowered car and must drive it out of a valley.
Reward of -1 each step until the car reaches the goal.
Maximum episode length of 1000 steps.
Run noise_scale = [0.1, 0.3, 1., 3, 10] for 4 seeds for 1k episodes.
Score is based on regret against "good" policy that solves in 25 steps.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
mountain_car_noise_analysis.plot_learning(mountain_car_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for largest noise_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
mountain_car_noise_analysis.plot_seeds(mountain_car_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for largest noise_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
cartpole_noise_df = DF[DF.bsuite_env == 'cartpole_noise'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'cartpole_noise', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
cartpole_noise_analysis.plot_average(cartpole_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Cartpole noise
<img src="https://storage.cloud.google.com/bsuite-colab-images/cartpole.png" alt="cartpole diagram" height="300"/>
A classic benchmark problem in RL.
The agent controls a cart on a frictionless plane.
The pole starts near to upright.
The observation is [x, x_dot, sin(theta), sin(theta)_dot, cos(theta), cos(theta)_dot, time_elapsed]
Episodes end once 1000 steps have occurred, or |x| is greater than 1.
Reward of +1 when pole > 0.8 height.
Run noise_scale = [0.1, 0.3, 1., 3, 10] for 4 seeds for 1k episodes.
Score is percentage of timesteps balancing the pole.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
cartpole_noise_analysis.plot_learning(cartpole_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for largest noise_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
cartpole_noise_analysis.plot_seeds(cartpole_noise_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by noise_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for largest noise_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
bandit_scale_df = DF[DF.bsuite_env == 'bandit_scale'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'bandit_scale', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
bandit_scale_analysis.plot_average(bandit_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Reward scale
To investigate the robustness of RL agents to reward scaling, we repeat the "basic" experiments under differing levels of problem rescaling.
This time we allocate the 20 different seeds across 5 levels of reward_scale = $[0.01, 0.1, 1, 10, 100]$ with 4 seeds each.
In order to keep comparable statistics/regret we report rescaled regret/reward_scale.
Bandit scale
<img src="https://storage.cloud.google.com/bsuite-colab-images/bandit.png" alt="bandit diagram" height="300"/>
A simple independent-armed bandit problem.
The agent is faced with 11 actions with deterministic rewards [0.0, 0.1, .., 1.0] randomly assigned.
Run reward_scale = [0.01, 0.1, 1., 10, 100] for 4 seeds for 10k episodes.
Score is 1 - 2 * average_regret at 10k episodes.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
bandit_scale_analysis.plot_learning(bandit_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for reward_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
bandit_scale_analysis.plot_seeds(bandit_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for reward_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
mnist_scale_df = DF[DF.bsuite_env == 'mnist_scale'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'mnist_scale', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
mnist_scale_analysis.plot_average(mnist_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
MNIST scale
<img src="https://storage.cloud.google.com/bsuite-colab-images/mnist.png" alt="mnist diagram" height="300"/>
The "hello world" of deep learning, now as a contextual bandit.
Every timestep the agent must classify a random MNIST digit.
Reward +1 for correct, -1 for incorrect.
Run reward_scale = [0.01, 0.1, 1., 10, 100] for 4 seeds for 10k episodes.
Score is percentage of successful classifications.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
mnist_scale_analysis.plot_learning(mnist_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for reward_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
mnist_scale_analysis.plot_seeds(mnist_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for reward_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
catch_scale_df = DF[DF.bsuite_env == 'catch_scale'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'catch_scale', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
catch_scale_analysis.plot_average(catch_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Catch scale
<img src="https://storage.cloud.google.com/bsuite-colab-images/catch.png" alt="catch diagram" height="300"/>
DeepMind's internal "hello world" for RL agents.
The environment is a 5x10 grid with a single falling block per episode (similar to Tetris).
The agent controls a single "paddle" pixel that it should use to "catch" the falling block.
If the agent catches the block reward +1, if the agent misses the block reward -1.
Run reward_scale = [0.01, 0.1, 1., 10, 100] for 4 seeds for 10k episodes.
Score is percentage of successful "catch" over first 10k episodes.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
catch_scale_analysis.plot_learning(catch_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for reward_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
catch_scale_analysis.plot_seeds(catch_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for reward_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
mountain_car_scale_df = DF[DF.bsuite_env == 'mountain_car_scale'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'mountain_car_scale', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
mountain_car_scale_analysis.plot_average(mountain_car_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Mountain car scale
<img src="https://storage.cloud.google.com/bsuite-colab-images/mountain_car.png" alt="mountaincar diagram" height="300"/>
A classic benchmark problem in RL.
The agent controls an underpowered car and must drive it out of a valley.
Reward of -1 each step until the car reaches the goal.
Maximum episode length of 1000 steps.
Run reward_scale = [0.01, 0.1, 1., 10, 100] for 4 seeds for 1k episodes.
Score is based on regret against "good" policy that solves in 25 steps.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
mountain_car_scale_analysis.plot_learning(mountain_car_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for reward_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
mountain_car_scale_analysis.plot_seeds(mountain_car_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for reward_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
cartpole_scale_df = DF[DF.bsuite_env == 'cartpole_scale'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'cartpole_scale', SWEEP_VARS).draw();
#@title average regret over learning (lower is better)
cartpole_scale_analysis.plot_average(cartpole_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Cartpole scale
<img src="https://storage.cloud.google.com/bsuite-colab-images/cartpole.png" alt="cartpole diagram" height="300"/>
A classic benchmark problem in RL.
The agent controls a cart on a frictionless plane.
The pole starts near to upright.
The observation is [x, x_dot, sin(theta), sin(theta)_dot, cos(theta), cos(theta)_dot, time_elapsed]
Episodes end once 1000 steps have occurred, or |x| is greater than 1.
Reward of +1 when pole > 0.8 height.
Run reward_scale = [0.01, 0.1, 1., 10, 100] for 4 seeds for 1k episodes.
Score is percentage of timesteps balancing the pole.
Must log episode, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
cartpole_scale_analysis.plot_learning(cartpole_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agents.
- Look for reward_scale with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
cartpole_scale_analysis.plot_seeds(cartpole_scale_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 10k episodes by reward_scale (lower is better)
- Dashed line shows the performance of a random agent baseline.
- Look for reward_scale with performance significantly better than baseline.
End of explanation
#@title parsing data
deep_sea_df = DF[DF.bsuite_env == 'deep_sea'].copy()
deep_sea_plt = deep_sea_analysis.find_solution(deep_sea_df, SWEEP_VARS)
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'deep_sea', SWEEP_VARS).draw();
#@title average regret by size through learning (lower is better)
deep_sea_analysis.plot_regret(deep_sea_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Exploration
Exploration is the problem of prioritizing useful information for learning.
Deep sea
<img src="https://storage.cloud.google.com/bsuite-colab-images/deep_sea.png" alt="deep sea diagram" height="300"/>
Scalable chain domains that test for
deep exploration.
The environment is an N x N grid with falling blocks similar to catch. However
the block always starts in the top left. In each timestep, the agent can move
the block "left" or "right". At each timestep, there is a small cost for moving
"right" and no cost for moving "left". However, the agent can receive a large
reward for choosing "right" N-times in a row and reaching the bottom right. This
is the single rewarding policy; all other policies receive zero or negative
return, making this a very difficult exploration problem.
Run deep_sea sizes N=5,6,7,..,50 for at least 10k episodes.
Score is the percentage of N for which average regret < 0.9 faster than 2^N.
Must log episode, total_return for standard analysis.
End of explanation
#@title scaling of learning time with deep_sea size (lower + more blue is better)
deep_sea_analysis.plot_scaling(deep_sea_plt, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average regret through time (lower is better).
- Dashed line shows the performance of suboptimal "greedy" algorithm
- Look for largest size with performance significantly better than greedy agent.
- Curves also show dynamics through time.
End of explanation
#@title scaling of learning time with deep_sea size on log scale (lower + more blue is better)
deep_sea_analysis.plot_scaling_log(deep_sea_plt, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Compute the number of episodes until the average regret < 0.9 for each problem size.
- Red dots have not solved the problem, but have simply performed only that many episodes.
- Dashed line shows curve 2^N, which is the scaling we expect for agents without deep exploration.
- Want to see consistent curve of blue dots signficantly below the dashed line -> deep exploration.
End of explanation
#@title plot performance by seed (higher is better)
deep_sea_analysis.plot_seeds(deep_sea_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Plots exactly the same data as above, but on a logarithmic scale.
- If we see polynomial scaling -> this should result in a linear relationship between log(learning time) and log(size).
- Want to see consistent line of blue dots significantly below the dashed line -> deep exploration.
End of explanation
#@title parsing data
deep_sea_stochastic_df = DF[DF.bsuite_env == 'deep_sea_stochastic'].copy()
deep_sea_stochastic_plt = deep_sea_stochastic_analysis.find_solution(deep_sea_stochastic_df, SWEEP_VARS)
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'deep_sea_stochastic', SWEEP_VARS).draw();
#@title average regret by size through learning (lower is better)
deep_sea_stochastic_analysis.plot_regret(deep_sea_stochastic_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Stochastic deep sea
<img src="https://storage.cloud.google.com/bsuite-colab-images/deep_sea.png" alt="deep sea stochastic diagram" height="300"/>
Scalable chain domains that test for
deep exploration.
The environment is an N x N grid with falling blocks similar to catch. However
the block always starts in the top left. In each timestep, the agent can move
the block "left" or "right". At each timestep, there is a small cost for moving
"right" and no cost for moving "left". However, the agent can receive a large
reward for choosing "right" N-times in a row and reaching the bottom right. This
is the single rewarding policy; all other policies receive zero or negative
return, making this a very difficult exploration problem.
The stochastic version of this domain only transitions to the right with
probability (1 - 1/N) and adds N(0,1) noise to the 'end' states of the chain.
Run deep_sea sizes N=5,6,7,..,50 for at least 10k episodes.
Score is the percentage of N for which average regret < 0.9 faster than 2^N.
Must log episode, total_return for standard analysis.
End of explanation
#@title scaling of learning time with deep_sea_stochastic size (lower + more blue is better)
deep_sea_stochastic_analysis.plot_scaling(deep_sea_stochastic_plt, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average regret through time (lower is better).
- Dashed line shows the performance of suboptimal "greedy" algorithm
- Look for largest size with performance significantly better than greedy agent.
- Curves also show dynamics through time.
End of explanation
#@title scaling of learning time with deep_sea size on log scale (lower + more blue is better)
deep_sea_stochastic_analysis.plot_scaling_log(deep_sea_stochastic_plt, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Compute the number of episodes until the average regret < 0.9 for each problem size.
- Red dots have not solved the problem, but have simply performed only that many episodes.
- Dashed line shows curve 2^N, which is the scaling we expect for agents without deep exploration.
- Want to see a consistent curve of blue dots significantly below the dashed line -> deep exploration.
End of explanation
#@title plot performance by seed (higher is better)
deep_sea_stochastic_analysis.plot_seeds(deep_sea_stochastic_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Plots exactly the same data as above, but on a logarithmic scale.
- If we see polynomial scaling -> this should result in a linear relationship between log(learning time) and log(size).
- Want to see consistent line of blue dots significantly below the dashed line -> deep exploration.
End of explanation
#@title parsing data
cartpole_swingup_df = DF[DF.bsuite_env == 'cartpole_swingup'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'cartpole_swingup', SWEEP_VARS).draw();
#@title scaling with difficulty scale (higher + more blue is better)
cartpole_swingup_analysis.plot_scale(cartpole_swingup_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Cartpole swingup
<img src="https://storage.cloud.google.com/bsuite-colab-images/cartpole.png" alt="cartpole diagram" height="300"/>
A difficult cartpole swingup task with sparse rewards and a cost for moving.
This domain is somewhat similar to "deep sea" but cannot be solved easily by tabular reinforcement learning algorithms.
The observation is [x, cos_theta, sin_theta, x_dot, theta_dot, x_central]
The dynamics are given by the classic cartpole from dm control suite
Each episode begins with the pole hanging downwards and ends after 1000 timesteps.
There is a small cost of -0.1 for any movement of the pole.
There is a reward of +1 only if:
x_dot, theta_dot < 1
pole_height > 1 - difficulty_scale
x < 1 - difficulty_scale
The parameter difficulty_scale acts as a scaling for the depth of exploration, similar to the "size" in deep sea.
To run this experiment:
Run the agent on difficulty_scale = 0, 0.05, 0.1, .. , 0.95 for 1k episodes
Score is proportion of runs that achieve an average_return > 0 at any point.
Must log episode, total_return for standard analysis
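As a rough illustration of that sparse reward (not the actual bsuite/dm_control implementation; the use of absolute values and the helper name are assumptions):
def swingup_reward(x, x_dot, theta_dot, pole_height, moved, difficulty_scale):
    reward = -0.1 if moved else 0.0  # small cost for any movement of the pole
    if (abs(x_dot) < 1 and abs(theta_dot) < 1
            and pole_height > 1 - difficulty_scale
            and abs(x) < 1 - difficulty_scale):
        reward += 1.0  # +1 only in the slow, upright, centred region
    return reward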
End of explanation
#@title average regret through learning (lower is better)
cartpole_swingup_analysis.plot_learning(cartpole_swingup_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- For each height threshold, look at the best observed return.
- If the observed return is greater than 500 ==> the pole was swung upright and balanced for at least 5 seconds.
- Look for higher scores and more blue.
End of explanation
#@title plot performance by seed (higher is better)
cartpole_swingup_analysis.plot_seeds(cartpole_swingup_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average return through time (higher is better).
- Dashed line shows the performance of an agent that does not move = 0.
- Look for largest difficulty_scale with performance significantly better than staying still.
End of explanation
#@title parsing data
umbrella_length_df = DF[DF.bsuite_env == 'umbrella_length'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'umbrella_length', SWEEP_VARS).draw();
#@title average regret after 10k episodes (lower is better)
umbrella_length_analysis.plot_scale(umbrella_length_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Credit assignment
This is a collection of domains for credit assignment.
Umbrella length
<img src="https://storage.cloud.google.com/bsuite-colab-images/umbrella.png" alt="umbrella diagram" height="300"/>
A stylized problem designed to highlight problems to do with temporal credit assignment and scaling with time horizon.
The state observation is [need_umbrella, have_umbrella, time_to_go] + n "distractor" features that are iid Bernoulli.
At the start of each episode the agent observes if it will need an umbrella.
It then has the chance to pick up an umbrella only in the first timestep.
At the end of the episode the agent receives a reward of +1 if it made the correct choice of umbrella, but -1 if it made the incorrect choice.
During chain_length intermediate steps rewards are random +1 or -1.
The experiment setup:
- Run umbrella_chain with n_distractor=20 and sweep chain_length=1..100 logarithmically spaced for 10k episodes.
- Score is percent of tasks with average reward per episode > 0.5.
- Must log episode, total_return, total_regret for standard analysis.
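To make the reward structure above concrete, here is a purely illustrative sketch (not the bsuite implementation):
import numpy as np
def umbrella_episode_rewards(need_umbrella, have_umbrella, chain_length):
    # Uninformative random +1/-1 rewards during the intermediate steps ...
    intermediate = np.random.choice([-1.0, 1.0], size=chain_length)
    # ... and the informative reward only at the very end of the episode.
    final = 1.0 if need_umbrella == have_umbrella else -1.0
    return intermediate, final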
End of explanation
#@title average regret through learning (lower is better)
umbrella_length_analysis.plot_learning(umbrella_length_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Compute the average regret after 10k episodes for each chain_length problem scale.
- Red dots have not solved the problem, blue dots made significant progress (average regret < 0.5)
- Dashed line shows regret of a random agent = 1.0.
- We want to see lots of blue dots with low regret for large chain_length.
End of explanation
#@title plot performance by seed (higher is better)
umbrella_length_analysis.plot_seeds(umbrella_length_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average regret through time (lower is better).
- Dashed line shows the performance of a random agent (regret = 1.0).
- Look for largest chain_length with performance significantly better than random agent.
- Curves also show dynamics through time.
- Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
umbrella_distract_df = DF[DF.bsuite_env == 'umbrella_distract'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'umbrella_distract', SWEEP_VARS).draw();
#@title average regret after 10k episodes (lower is better)
umbrella_distract_analysis.plot_scale(umbrella_distract_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Umbrella distract
<img src="https://storage.cloud.google.com/bsuite-colab-images/umbrella.png" alt="umbrella diagram" height="300"/>
A stylized problem designed to highlight problems to do with temporal credit assignment and scaling with time horizon.
The state observation is [need_umbrella, have_umbrella, time_to_go] + n "distractor" features that are iid Bernoulli.
At the start of each episode the agent observes if it will need an umbrella.
It then has the chance to pick up an umbrella only in the first timestep.
At the end of the episode the agent receives a reward of +1 if it made the correct choice of umbrella, but -1 if it made the incorrect choice.
During chain_length intermediate steps rewards are random +1 or -1.
The experiment setup:
- Run umbrella_chain with n_distractor=20 and sweep chain_length=1..100 logarithmically spaced for 10k episodes.
- Score is percent of tasks with average reward per episode > 0.5.
- Must log episode, total_return, total_regret for standard analysis.
End of explanation
#@title average regret through learning (lower is better)
umbrella_distract_analysis.plot_learning(umbrella_distract_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Compute the average regret after 10k episodes for each chain_length problem scale.
- Red dots have not solved the problem, blue dots made significant progress (average regret < 0.5)
- Dashed line shows regret of a random agent = 1.0.
- We want to see lots of blue dots with low regret for large chain_length.
End of explanation
#@title plot performance by seed (higher is better)
umbrella_distract_analysis.plot_seeds(umbrella_distract_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average regret through time (lower is better).
- Dashed line shows the performance of a random agent (regret = 1.0).
- Look for largest chain_length with performance significantly better than random agent.
- Curves also show dynamics through time.
- Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
discounting_chain_df = DF[DF.bsuite_env == 'discounting_chain'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'discounting_chain', SWEEP_VARS).draw();
#@title average regret after 1k episodes (lower is better)
discounting_chain_analysis.plot_average(discounting_chain_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Discounting chain
<img src="https://storage.cloud.google.com/bsuite-colab-images/discounting_chain.png" alt="discount diagram" height="300"/>
A stylized problem designed to highlight an agent's ability to correctly maximize cumulative rewards without discounting bias.
- The only decision that actually matters is the agent's first of the episode, after which the agent is locked into a "chain" irrespective of actions.
- Each chain gives a non-zero reward only at one step of the length-100 episode: [1, 3, 10, 30, 100] steps.
- Each chain gives a reward of +1, except for the optimal_horizon, which gives a reward of +1.1
- Many agents with discounting will struggle to maximize cumulative returns (a quick return comparison is sketched after the setup list below).
The experiment setup:
- Run each optimal_horizon [1, 3, 10, 30, 100], each with 5 seeds for 1k episodes.
- Score is average regret * 10.
- Must log episode, total_return for standard analysis
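To see why discounting biases this choice, here is a quick, illustrative comparison of discounted vs. undiscounted value for the five chains (reward steps and magnitudes follow the text above; the discount factor is just an example):
reward_steps = [1, 3, 10, 30, 100]
optimal_horizon = 100  # the chain paying +1.1
gamma = 0.99           # example discount factor
for step in reward_steps:
    r = 1.1 if step == optimal_horizon else 1.0
    print(step, 'undiscounted:', r, 'discounted:', round(r * gamma ** step, 3))
# With gamma=0.99 the step-1 chain (0.99) looks better than the step-100 chain
# (1.1 * 0.99**100 ~ 0.40), even though the latter maximizes cumulative reward.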
End of explanation
#@title average regret through learning (lower is better)
discounting_chain_analysis.plot_learning(discounting_chain_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Display the average regret after 1k episodes by optimal_horizon (lower is better).
- Dashed line shows the performance of a random agent (regret = 0.8).
- Look for largest horizon with performance significantly better than random agent.
End of explanation
#@title plot performance by seed (higher is better)
discounting_chain_analysis.plot_seeds(discounting_chain_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average regret through time (lower is better).
- Dashed line shows the performance of a random agent (regret = 0.8).
- Look for largest horizon with performance significantly better than random agent.
- Curves also show dynamics through time.
- Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
memory_len_df = DF[DF.bsuite_env == 'memory_len'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'memory_len', SWEEP_VARS).draw();
#@title memory scaling (lower + more blue is better)
memory_len_analysis.plot_scale(memory_len_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Memory
A collection of experiments designed to test memory capabilities.
Memory length
<img src="https://storage.cloud.google.com/bsuite-colab-images/memory_chain.png" alt="memory diagram" height="300"/>
A stylized T-maze problem designed to highlight an agent's ability to remember important information and use it to make good decisions.
- At the beginning of the episode the agent is provided a context of +1 or -1.
- At all future timesteps the context is equal to zero and the observation includes a countdown until the end of the episode.
- At the end of the episode the agent must select the action corresponding to the original context, receiving a reward of +1 if correct and -1 otherwise (a tiny sketch of these rules follows the setup list below).
The experiment setup:
- Run memory sizes 1..100 logarithmically spaced.
- Score is proportion of memory sizes with average regret < 0.5.
- Must log episode, total_return for standard analysis
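A minimal sketch of those T-maze rules (illustrative only, not bsuite's implementation; the final action is a stand-in for whatever the agent chooses):
import numpy as np
def memory_len_episode(memory_length, final_action):
    context = np.random.choice([-1.0, 1.0])
    # Context is visible only at t=0; afterwards the agent just sees zero plus a countdown.
    observations = [(context if t == 0 else 0.0, memory_length - t) for t in range(memory_length + 1)]
    reward = 1.0 if final_action == context else -1.0  # paid only at the final step
    return observations, reward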
End of explanation
#@title average regret through learning (lower is better)
memory_len_analysis.plot_learning(memory_len_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Compute the average regret after 10k episodes for each memory_length problem scale.
- Red dots have not solved the problem, blue dots made significant progress (average regret < 0.5)
- Dashed line shows regret of a random agent = 1.0.
- We want to see lots of blue dots with low regret for large memory_length.
End of explanation
#@title plot performance by seed (higher is better)
memory_len_analysis.plot_seeds(memory_len_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average regret through time (lower is better).
- Dashed line shows the performance of a random agent (regret = 1.0).
- Look for largest memory_length with performance significantly better than random agent.
- Curves also show dynamics through time.
- Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
#@title parsing data
memory_size_df = DF[DF.bsuite_env == 'memory_size'].copy()
summary_analysis.plot_single_experiment(BSUITE_SCORE, 'memory_size', SWEEP_VARS).draw();
#@title memory scaling (lower + more blue is better)
memory_size_analysis.plot_scale(memory_size_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Memory size
<img src="https://storage.cloud.google.com/bsuite-colab-images/memory_chain.png" alt="memory diagram" height="300"/>
A stylized T-maze problem designed to highlight an agent's ability to remember important information and use it to make good decisions.
- At the beginning of an episode the agent is provided an N bit context vector.
- After a couple of steps the agent is provided a query as an integer number between 0 and num_bits-1 and must select the correct action corresponding to context[query].
The experiment setup:
- Run memory sizes 1..100 logarithmically spaced.
- Score is proportion of memory sizes with average regret < 0.5.
- Must log episode, total_return for standard analysis
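The context/query rule above boils down to something like this (illustrative only, not bsuite's implementation):
import numpy as np
def memory_size_target(num_bits):
    context = np.random.randint(0, 2, size=num_bits)  # N-bit context shown at the start
    query = np.random.randint(0, num_bits)            # index revealed a couple of steps later
    return context[query]                             # the bit the agent's final action must match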
End of explanation
#@title average regret through learning (lower is better)
memory_size_analysis.plot_learning(memory_size_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Compute the average regret after 10k episodes for each memory_size problem scale.
- Red dots have not solved the problem, blue dots made significant progress (average regret < 0.5)
- Dashed line shows regret of a random agent = 1.0.
- We want to see lots of blue dots with low regret for large memory_size.
End of explanation
#@title plot performance by seed (higher is better)
memory_size_analysis.plot_seeds(memory_size_df, SWEEP_VARS).draw();
Explanation: Parsing the plot above:
- Learning curves of average regret through time (lower is better).
- Dashed line shows the performance of a random agent (regret = 1.0).
- Look for the largest memory_size with performance significantly better than the random agent.
- Curves also show dynamics through time.
- Smoothing is performed with rolling mean over 10% of data with confidence bar at 95% Gaussian standard error.
End of explanation
import os
from google.colab import files
# Save images required for the reports in an `images/` folder.
if not os.path.exists('images'):
os.makedirs('images')
__radar_fig__.savefig('images/radar_plot.png', bbox_inches="tight")
# Compress folder and download
!zip -r images.zip images > /dev/null
try:
files.download("images.zip")
except:
pass
Explanation: Parsing the plot above:
Here we can see the performance of each agent individually through time.
Higher scores are better, but individual runs may be noisy.
Use this plot to diagnose strange agent behaviour.
Exporting as PDF
Run all colab cells above in Colaboratory
Run the cell below to download a compressed images.zip
Copy images/ in bsuite/reports/images
Run bsuite/reports/bsuite_report.tex to generate a summary pdf report
End of explanation |
234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigQuery Essentials for Teradata Users
In this lab you will take an existing 2TB+ TPC-DS benchmark dataset and learn common day-to-day activities you'll perform in BigQuery.
What you'll do
In this lab, you will learn how to
Step1: Google Cloud resources are organized hierarchically. Starting from the bottom of the hierarchy, projects are the first level, and they contain other resources. All resources except for organizations have exactly one parent. The Organization is the top of the hierarchy and does not have a parent.
Folders are an additional grouping mechanism on top of projects.
<img src="img/cloud-folders-hierarchy.png">
For the purposes of a BigQuery user, this is helpful to know as access management policies (IAM) and Organizational policies are largely imposed at the project, folder, or organizational level. Also, BigQuery "Reservations", or chunks of allocated BigQuery compute (but not storage), are currently assigned at the project or folder level.
BigQuery Datasets
<p>A dataset is contained within a specific <a href="https
Step2: For this lab, you will be accessing data stored in another project, in this case a publically accessible sample project qwiklabs-resources. See how many datasets exist in this project
Step3: And let's look at the tables and views in one of these datasets
Step4: But how are we able to access other data? And won't querying that data create work in that user's cluster? Not at all! Because BigQuery has completely separated the compute and storage layers so they can scale independently, we can easily query data (so long as we have permissions) that are in public datasets or datasets from other teams, without incurring compute costs for them, and without slowing their queries down, even if we're accessing the same data.
To explain why, we dive a little deeper into the architecture of BigQuery.
BigQuery Architecture
BigQuery’s serverless architecture decouples storage and compute and allows them to scale independently on demand. This structure offers both immense flexibility and cost controls for customers because they don’t need to keep their expensive compute resources up and running all the time. This is very different from traditional node-based cloud data warehouse solutions or on-premise massively parallel processing (MPP) systems. This approach also allows customers of any size to bring their data into the data warehouse and start analyzing their data using Standard SQL without worrying about database operations and system engineering.
<img src="img/bq_explained_2.jpg">
Under the hood, BigQuery employs a vast set of multi-tenant services driven by low-level Google infrastructure technologies like Dremel, Colossus, Jupiter and Borg.
<img src="img/bq_explained_3.jpg">
Compute is Dremel, a large multi-tenant cluster that executes SQL queries.
Dremel turns SQL queries into distributed, scaled-out execution plans. The nodes of these execution plans are called slots and do the heavy lifting of reading data from storage and any necessary computation.
Dremel dynamically apportions slots to queries on an as-needed basis, maintaining fairness for concurrent queries from multiple users. A single user can get thousands of slots to run their queries. These slots are assigned just-in-time to your query, and the moment that unit of work is done it gets assigned new work, potentially for someone else's query. This is how BigQuery is able to execute so quickly at low cost. You don't have to over-provision resources like you would with statically sized clusters.
Storage is Colossus, Google’s global storage system.
BigQuery leverages the columnar storage format and compression algorithm to store data in Colossus, optimized for reading large amounts of structured data. This is the same technology powering Google Cloud's blob storage services - GCS.
Colossus also handles replication, recovery (when disks crash) and distributed management (so there is no single point of failure). Colossus allows BigQuery users to scale to dozens of petabytes of data stored seamlessly, without paying the penalty of attaching much more expensive compute resources as in traditional data warehouses.
Compute and storage talk to each other through the petabit Jupiter network.
In between storage and compute is ‘shuffle’, which takes advantage of Google’s Jupiter network to move data extremely rapidly from one place to another.
BigQuery is orchestrated via Borg, Google’s precursor to Kubernetes.
The mixers and slots are all run by Borg, which allocates hardware resources. Essentially, a single BigQuery 'cluster' is able to run thousands of physical machines at once and be securely shared between users, giving massive compute power just-in-time to those who need it.
What does this mean for you?
Working with BigQuery is different. Some concepts that are important
Step5: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables)
Step6: Question
Step7: Question
- How many columns of data does each table have (sorted by most to least?)
- Which table has the most columns of data?
Step8: Previewing sample rows of data values
Click on the catalog_sales table name for the tpcds_2t_baseline dataset under qwiklabs-resources
Question
- How many rows are in the table?
- How large is the table in TB?
Step9: Question
Step10: Create an example sales report
TODO(you)
Step11: A note on our data
Step12: Side note
Step13: Running the first benchmark test
Now let's run the first query against our dataset and note the execution time. Tip
Step14: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you run the query again. We can confirm this by analyzing the query job statistics.
Viewing BigQuery job statistics
Let's list our five most recent query jobs run on BigQuery using the bq command line interface. Then we will get even more detail on our most recent job with the bq show command.
Step15: Be sure to replace the job id with your own most recent job.
Step17: Looking at the job statistics we can see our most recent query hit cache
- cacheHit
Step19: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run.
Now let's run it and ensure we're not pulling from cache so we get an accurate time-to-completion benchmark.
Step20: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below
Step21: Here we will use the bq cp command to copy tables over. If you need to periodically refresh data, the BQ Transfer service or scheduled queries are good tools as well.
Step22: Inspect the tables now in your project.
Step23: Verify you now have the baseline data in your project
Run the below query and confirm you see data. Note that if you omit the project-id ahead of the dataset name in the FROM clause, BigQuery will assume your default project.
Step24: Setup an automated test
Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis.
Below is a shell script that
Step25: Viewing the benchmark results
As part of the benchmark test, we stored the processing time of each query into a new perf BigQuery table. We can query that table and get some performance stats for our test.
First are each of the tests we ran
Step26: And finally, the overall statistics for the entire test | Python Code:
%%bash
gcloud config list
Explanation: BigQuery Essentials for Teradata Users
In this lab you will take an existing 2TB+ TPC-DS benchmark dataset and learn common day-to-day activities you'll perform in BigQuery.
What you'll do
In this lab, you will learn how to:
Use BigQuery to access and query the TPC-DS benchmark dataset
Understand common differences between Teradata and BigQuery
Run pre-defined queries to establish baseline performance benchmarks
BigQuery
BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without managing infrastructure. BigQuery allows you to focus on analyzing data to find meaningful insights.
TPC-DS Background
In order to benchmark the performance of a data warehouse we first must get tables and data to run queries against. There is a public organization, TPC, that provides large benchmarking datasets to companies explicitly for this purpose. The purpose of TPC benchmarks is to provide relevant, objective performance data to industry users.
The TPC-DS Dataset we will be using comprises 25 tables and 99 queries that simulate common data analysis tasks. View the full documentation here.
Exploring TPC-DS in BigQuery
The TPC-DS tables have been loaded into BigQuery for you to explore. We have limited the size to 2TB for the timing of this lab but the dataset itself can be expanded as needed.
Note: The TPC Benchmark and TPC-DS are trademarks of the Transaction Processing Performance Council (http://www.tpc.org). The Cloud DW benchmark is derived from the TPC-DS Benchmark and as such is not comparable to published TPC-DS results.
Google Cloud and BigQuery organization
First, a note on resource hierarchy. At the lowest level, resources are the fundamental components that make up all Google Cloud services. Examples of resources include Compute Engine Virtual Machines (VMs), Pub/Sub topics, Cloud Storage buckets, App Engine instances, and BigQuery datasets. All these lower level resources can only be parented by projects, which represent the first grouping mechanism of the Google Cloud resource hierarchy.
You may have noticed you had a project name in the upper left of the console when you opened this notebook:
<img src="img/project.png">
You can also run a local gcloud command to detect what your project and id currently are set:
End of explanation
!bq ls
Explanation: Google Cloud resources are organized hierarchically. Starting from the bottom of the hierarchy, projects are the first level, and they contain other resources. All resources except for organizations have exactly one parent. The Organization is the top of the hierarchy and does not have a parent.
Folders are an additional grouping mechanism on top of projects.
<img src="img/cloud-folders-hierarchy.png">
For the purposes of a BigQuery user, this is helpful to know as access management policies (IAM) and Organizational policies are largely imposed at the project, folder, or organizational level. Also, BigQuery "Reservations", or chunks of allocated BigQuery compute (but not storage), are currently assigned at the project or folder level.
BigQuery Datasets
<p>A dataset is contained within a specific <a href="https://cloud.google.com/bigquery/docs/projects">project</a>. Datasets
are top-level containers that are used to organize and control access to your
<a href="https://cloud.google.com/bigquery/docs/tables">tables</a> and <a href="https://cloud.google.com/bigquery/docs/views">views</a>. A table
or view must belong to a dataset, so you need to create at least one dataset before
<a href="https://cloud.google.com/bigquery/docs/loading-data">loading data into BigQuery</a>.</p>
BigQuery datasets are subject to the following limitations:
You can set the geographic location at creation time only. After a dataset has
been created, the location becomes immutable and can't be changed by using the
Console, using the bq tool, or calling the patch or
update API methods.
All tables that are referenced in a query must be stored in datasets in the
same location
When you copy a table, the
datasets that contain the source table and destination table must reside in
the same location.
Dataset names must be unique for each project.
How many datasets are in your current project? Run the following to find out:
End of explanation
!bq ls --project_id qwiklabs-resources
Explanation: For this lab, you will be accessing data stored in another project, in this case a publically accessible sample project qwiklabs-resources. See how many datasets exist in this project:
End of explanation
!bq ls --project_id qwiklabs-resources tpcds_2t_baseline
Explanation: And let's look at the tables and views in one of these datasets:
End of explanation
%%bigquery
SELECT
dataset_id,
table_id,
-- Convert bytes to GB.
ROUND(size_bytes/pow(10,9),2) as size_gb,
-- Convert UNIX EPOCH to a timestamp.
TIMESTAMP_MILLIS(creation_time) AS creation_time,
TIMESTAMP_MILLIS(last_modified_time) as last_modified_time,
row_count,
CASE
WHEN type = 1 THEN 'table'
WHEN type = 2 THEN 'view'
ELSE NULL
END AS type
FROM
`qwiklabs-resources.tpcds_2t_baseline.__TABLES__`
ORDER BY size_gb DESC
Explanation: But how are we able to access other data? And won't querying that data create work in that user's cluster? Not at all! Because BigQuery has completely separated the compute and storage layers so they can scale independently, we can easily query data (so long as we have permissions) that are in public datasets or datasets from other teams, without incurring compute costs for them, and without slowing their queries down, even if we're accessing the same data.
To explain why, we dive a little deeper into the architecture of BigQuery.
BigQuery Architecture
BigQuery’s serverless architecture decouples storage and compute and allows them to scale independently on demand. This structure offers both immense flexibility and cost controls for customers because they don’t need to keep their expensive compute resources up and running all the time. This is very different from traditional node-based cloud data warehouse solutions or on-premise massively parallel processing (MPP) systems. This approach also allows customers of any size to bring their data into the data warehouse and start analyzing their data using Standard SQL without worrying about database operations and system engineering.
<img src="img/bq_explained_2.jpg">
Under the hood, BigQuery employs a vast set of multi-tenant services driven by low-level Google infrastructure technologies like Dremel, Colossus, Jupiter and Borg.
<img src="img/bq_explained_3.jpg">
Compute is Dremel, a large multi-tenant cluster that executes SQL queries.
Dremel turns SQL queries into distributed, scaled-out execution plans. The nodes of these execution plans are called slots and do the heavy lifting of reading data from storage and any necessary computation.
Dremel dynamically apportions slots to queries on an as-needed basis, maintaining fairness for concurrent queries from multiple users. A single user can get thousands of slots to run their queries. These slots are assigned just-in-time to your query, and the moment that unit of work is done it gets assigned new work, potentially for someone else's query. This is how BigQuery is able to execute so quickly at low cost. You don't have to over-provision resources like you would with statically sized clusters.
Storage is Colossus, Google’s global storage system.
BigQuery leverages the columnar storage format and compression algorithm to store data in Colossus, optimized for reading large amounts of structured data. This is the same technology powering Google Cloud's blob storage services - GCS.
Colossus also handles replication, recovery (when disks crash) and distributed management (so there is no single point of failure). Colossus allows BigQuery users to scale to dozens of petabytes of data stored seamlessly, without paying the penalty of attaching much more expensive compute resources as in traditional data warehouses.
Compute and storage talk to each other through the petabit Jupiter network.
In between storage and compute is ‘shuffle’, which takes advantage of Google’s Jupiter network to move data extremely rapidly from one place to another.
BigQuery is orchestrated via Borg, Google’s precursor to Kubernetes.
The mixers and slots are all run by Borg, which allocates hardware resources. Essentially, a single BigQuery 'cluster' is able to run thousands of physical machines at once and be securely shared between users, giving massive compute power just-in-time to those who need it.
What does this mean for you?
Working with BigQuery is different. Some concepts that are important:
* Compute and storage are separate and storage is CHEAP - making copies of data will not waste compute space on nodes like in previous systems. It is also easy to set a TTL on temporary datasets and tables so they are garbage-collected automatically (see the expiration example below).
* The 'workers' in BigQuery are called slots. These are scheduled fairly amongst all the users and queries within a project. Sometimes your query is bound by the amount of parallelism that BigQuery can achieve. Sometimes it is bound by the number of slots available to your organization - hence getting more slots will speed it up.
* While your organization may have a reservation for slots, meaning a guaranteed amount of compute power available to teams, your organization doesn't have its own BigQuery cluster, per se. It is running in a much larger installation of BigQuery, shared securely amongst other customers. This means you can easily increase and decrease the number of slots your organization has reserved at a moment's notice with Flex Slots.
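For example, one way to get that automatic clean-up from Python (the dataset name below is a placeholder, and this assumes the google-cloud-bigquery client library is installed):
from google.cloud import bigquery
client = bigquery.Client()
dataset = client.get_dataset('my-project.scratch_dataset')      # placeholder dataset
dataset.default_table_expiration_ms = 3 * 24 * 60 * 60 * 1000   # auto-delete tables after 3 days
client.update_dataset(dataset, ['default_table_expiration_ms'])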
Exploring the TPC-DS Schema with SQL
Question:
- How many tables are in the dataset?
- What is the name of the largest table (in GB)? How many rows does it have?
- Note the FROM clause - which identifier is the project, which is the datasets, and which is the table or view?
End of explanation
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
Explanation: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables):
These systems are driven by the core functions of our retail business. As you can see, our store accepts sales from online (web), mail-order (catalog), and in-store. The business must keep track of inventory and can offer promotional discounts on items sold.
Exploring all available columns of data
Question:
- How many columns of data are in the entire dataset (all tables)?
End of explanation
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
WHERE
is_partitioning_column = 'YES' OR clustering_ordinal_position IS NOT NULL
Explanation: Question:
- Are any of the columns of data in this baseline dataset partitioned or clustered? (This will be covered in another lab)
End of explanation
%%bigquery
SELECT
COUNT(column_name) AS column_count,
table_name
FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
GROUP BY table_name
ORDER BY column_count DESC, table_name
Explanation: Question
- How many columns of data does each table have (sorted by most to least?)
- Which table has the most columns of data?
End of explanation
!bq show qwiklabs-resources:tpcds_2t_baseline.catalog_sales
Explanation: Previewing sample rows of data values
Click on the catalog_sales table name for the tpcds_2t_baseline dataset under qwiklabs-resources
Question
- How many rows are in the table?
- How large is the table in TB?
End of explanation
!bq head -n 15 --selected_fields "cs_order_number,cs_quantity,cs_ext_sales_price,cs_ext_ship_cost" qwiklabs-resources:tpcds_2t_baseline.catalog_sales
Explanation: Question:
- Preview the data and find the Catalog Sales Extended Sales Price cs_ext_sales_price field (which is calculated based on product quantity * sales price)
- Are there any missing data values for Catalog Sales Quantity (cs_quantity)?
- Are there any missing values for cs_ext_ship_cost? For what type of product could this be expected? (Digital products)
We are using the bq head command line tool to avoid a full table scan with a SELECT * LIMIT 15
End of explanation
%%bigquery --verbose
--Query should fail
SELECT
FROM
`qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
LIMIT
10
%%bigquery --verbose
--Query should succeed
SELECT
cs_item_sk,
COUNT(cs_order_number) AS total_orders,
SUM(cs_quantity) AS total_quantity,
SUM(cs_ext_sales_price) AS total_revenue,
SUM(cs_net_profit) AS total_profit
FROM
`qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
GROUP BY
cs_item_sk
ORDER BY
total_orders DESC
LIMIT
10
Explanation: Create an example sales report
TODO(you): Write a query that shows key sales stats for each item sold from the Catalog and execute it here:
- total orders
- total unit quantity
- total revenue
- total profit
- sorted by total orders highest to lowest, limit 10
End of explanation
%%bigquery
SELECT
project_id,
job_id,
query,
cache_hit,
reservation_id,
EXTRACT(DATE FROM creation_time) AS creation_date,
creation_time,
end_time,
TIMESTAMP_DIFF(end_time, start_time, SECOND) AS job_duration_seconds,
job_type,
user_email,
state,
error_result,
total_bytes_processed,
total_slot_ms / 1000 / 60 AS slot_minutes,
-- Average slot utilization per job is calculated by dividing
-- total_slot_ms by the millisecond duration of the job
total_slot_ms / (TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_slots
FROM
`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
ORDER BY
creation_time DESC
LIMIT 15;
!bq ls -j -a -n 15
Explanation: A note on our data: The TPC-DS benchmark allows data warehouse practitioners to generate any volume of data programmatically. Since the rows of data are system generated, they may not make the most sense in a business context (like why are we selling our top product at such a huge profit loss!).
The good news is that to benchmark our performance we care most about the volume of rows and columns to run our benchmark against.
Analyzing query performance
You can use the INFORMATION_SCHEMA to inspect your query performance. A lot of this data is also presented in the UI under Execution Details.
Refer to the query below (which should be similar to your results) and answer the following questions.
Question
- How long did it take the query to run? 14s
- How much data in GB was processed? 150GB
- How much slot time was consumed? 1hr 7min
End of explanation
!head --lines=50 'sql/example_baseline_queries.sql'
Explanation: Side note: Slot Time
We know the query took 15 seconds to run so what does the 1hr 7 min slot time metric mean?
Inside of the BigQuery service are lots of virtual machines that massively process your data and query logic in parallel. These workers, or "slots", work together to process a single query job really quickly. For accounts with on-demand pricing, you can have up to 2,000 slots.
So say we had 30 minutes of slot time or 1800 seconds. If the query took 20 seconds in total to run,
but it was 1800 seconds worth of work, how many workers at minimum worked on it?
1800/20 = 90
And that's assuming each worker instantly had all the data it needed (no shuffling of data between workers) and was at full capacity for all 20 seconds!
In reality, workers have a variety of tasks (waiting for data, reading it, performing computations, and writing data)
and also need to compare notes with each other on what work was already done on the job. The good news for you is
that you don't need to worry about optimizing these workers or the underlying data to run perfectly in parallel. That's why BigQuery is a managed service -- there's an entire team dedicated to hardware and data storage optimization.
The "avg_slots" metric indicates the average number of slots being utilized by your query at any given time. Often, portions of the query plan will have different amounts of parallelism and thus can benefit (or not) from more slots. For example, if you're performing a basic READ+FILTER+AGGREGATE query, reading data from a large table may require 1,000 slots for the INPUT phase since each slot reads a file, but if a lot of the data is immediately filtered, there may be fewer slots or even one slot needed for the next stage to aggregate. Certain portions of your queries may become bottlenecks for parallelism, for example, JOINs, SORTs, etc. BigQuery can execute many of these in a parallel manner and optimizing this queries is a more advanced topic. At this point, it's important to know slot_time, and conceptually what a slot is.
In case you were wondering, the worker limit for your project is 2,000 slots at once. In a production setting, this will vary depending on whether your organization is using "flat-rate" pricing or "on-demand". If you're "flat-rate", the number of slots will depend on the organization's reservation, how that reservation is apportioned to different folders, projects, and teams, and how busy each slice of the reservation is at any given moment.
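Using the rough numbers quoted above (about 1 hr 7 min of slot time for a query that ran in roughly 14 seconds), the same back-of-the-envelope arithmetic looks like this:
slot_seconds = (1 * 60 + 7) * 60   # 1 hr 7 min of slot time, in seconds
elapsed_seconds = 14               # approximate wall-clock duration of the query
avg_slots = slot_seconds / elapsed_seconds
print(round(avg_slots))            # roughly 287 slots working on the query, on average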
Running a performance benchmark
To benchmark the performance of our data warehouse in BigQuery we need to create more than just a single SQL report. The good news is the TPC-DS dataset ships with 99 standard benchmark queries that we can run and log the performance outcomes.
In this lab, we are doing no adjustments to the existing data warehouse tables (no partitioning, no clustering, no nesting) so we can establish a performance benchmark to beat in future labs.
Viewing the 99 pre-made SQL queries
We have a long SQL file with 99 standard queries against this dataset stored in our /sql/ directory.
Let's view the first 50 lines of those baseline queries to get familiar with how we will be performance benchmarking our dataset.
End of explanation
%%bigquery --verbose
# start query 1 in stream 0 using template query96.tpl
select count(*)
from `qwiklabs-resources.tpcds_2t_baseline.store_sales` as store_sales
,`qwiklabs-resources.tpcds_2t_baseline.household_demographics` as household_demographics
,`qwiklabs-resources.tpcds_2t_baseline.time_dim` as time_dim,
`qwiklabs-resources.tpcds_2t_baseline.store` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
Explanation: Running the first benchmark test
Now let's run the first query against our dataset and note the execution time. Tip: You can use the --verbose flag in %%bigquery magics to return the job and completion time.
End of explanation
!bq ls -j -a -n 5
Explanation: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you run the query again. We can confirm this by analyzing the query job statistics.
Viewing BigQuery job statistics
Let's list our five most recent query jobs run on BigQuery using the bq command line interface. Then we will get even more detail on our most recent job with the bq show command.
End of explanation
!bq show --format=prettyjson -j fae46669-5e96-4744-9d2c-2b1b95fa21e7
Explanation: Be sure to replace the job id with your own most recent job.
End of explanation
%%bash
bq query \
--dry_run \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
# Convert bytes to GB
132086388641 / 1e+9
Explanation: Looking at the job statistics we can see our most recent query hit cache
- cacheHit: true and therefore
- totalBytesProcessed: 0.
While this is great in normal use of BigQuery (you aren't charged for queries that hit cache), it gets in the way of our performance test. Cache is super useful, but we want to disable it for testing purposes.
Disabling Cache and Dry Running Queries
As of the time this lab was created, you can't pass a flag to %%bigquery iPython notebook magics to disable cache or to quickly see the amount of data processed. So we will use the traditional bq command line interface in bash.
First we will do a dry run of the query without processing any data just to see how many bytes of data would be processed. Then we will remove that flag and ensure nouse_cache is set to avoid hitting cache as well.
End of explanation
%%bash
bq query \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
Explanation: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run.
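A quick sanity check of that price, using the byte count from the dry run and the \$5/TB on-demand rate stated above:
bytes_processed = 132086388641
cost_usd = bytes_processed / 1e12 * 5   # $5 per TB on-demand
print(round(cost_usd, 2))               # ~0.66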
Now let's run it and ensure we're not pulling from cache so we get an accurate time-to-completion benchmark.
End of explanation
%%bash
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create
## Create the benchmark BigQuery dataset ($BENCHMARK_DATASET_NAME) if it doesn't exist
datasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME"
bq --location=US mk --dataset \
--description 'Benchmark Dataset' \
$PROJECT_ID:$BENCHMARK_DATASET_NAME
fi
# Inspect your project and datasets
!bq ls
!bq ls tpcds_2t_baseline
Explanation: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below:
It's a matter of preference whether you do your work in the Web UI or the command line -- each has its advantages.
One major advantage of using the bq command line interface is the ability to create a script that will run the remaining 98 benchmark queries for us and log the results.
Copy the qwiklabs-resources dataset into your own GCP project
We will use the new BigQuery Transfer Service to quickly copy our large dataset from the qwiklabs-resources GCP project into your own so you can perform the benchmarking.
Create a new baseline dataset in your project
End of explanation
%%bash
# Should take about 30 seconds, starts a bunch of asynchronous copy jobs
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.call_center tpcds_2t_baseline.call_center
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_page tpcds_2t_baseline.catalog_page
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_returns tpcds_2t_baseline.catalog_returns
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_sales tpcds_2t_baseline.catalog_sales
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer tpcds_2t_baseline.customer
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer_address tpcds_2t_baseline.customer_address
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer_demographics tpcds_2t_baseline.customer_demographics
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.date_dim tpcds_2t_baseline.date_dim
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.dbgen_version tpcds_2t_baseline.dbgen_version
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.household_demographics tpcds_2t_baseline.household_demographics
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.income_band tpcds_2t_baseline.income_band
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.inventory tpcds_2t_baseline.inventory
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.item tpcds_2t_baseline.item
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.perf tpcds_2t_baseline.perf
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.promotion tpcds_2t_baseline.promotion
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.reason tpcds_2t_baseline.reason
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.ship_mode tpcds_2t_baseline.ship_mode
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store tpcds_2t_baseline.store
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store_returns tpcds_2t_baseline.store_returns
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store_sales tpcds_2t_baseline.store_sales
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.time_dim tpcds_2t_baseline.time_dim
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.warehouse tpcds_2t_baseline.warehouse
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_page tpcds_2t_baseline.web_page
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_returns tpcds_2t_baseline.web_returns
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_sales tpcds_2t_baseline.web_sales
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_site tpcds_2t_baseline.web_site
Explanation: Here we will use the bq cp command to copy tables over. If you need to periodically refresh data, the BQ Transfer service or scheduled queries are good tools as well.
End of explanation
!bq ls tpcds_2t_baseline
Explanation: Inspect the tables now in your project.
End of explanation
%%bigquery
SELECT COUNT(*) AS store_transaction_count
FROM tpcds_2t_baseline.store_sales
Explanation: Verify you now have the baseline data in your project
Run the below query and confirm you see data. Note that if you omit the project-id ahead of the dataset name in the FROM clause, BigQuery will assume your default project.
End of explanation
%%bash
# runs the SQL queries from the TPCDS benchmark
# Pull the current Google Cloud Platform project name
BQ_DATASET="tpcds_2t_baseline" # let's start by benchmarking our baseline dataset
QUERY_FILE_PATH="./sql/example_baseline_queries.sql" # the full test is on 99_baseline_queries but that will take 80+ mins to run
IFS=";"
# create perf table to keep track of run times for all 99 queries
printf "\033[32;1m Housekeeping tasks... \033[0m\n\n";
printf "Creating a reporting table perf to track how fast each query runs...";
perf_table_ddl="CREATE TABLE IF NOT EXISTS $BQ_DATASET.perf(performance_test_num int64, query_num int64, elapsed_time_sec int64, ran_on int64)"
bq rm -f $BQ_DATASET.perf
bq query --nouse_legacy_sql $perf_table_ddl
start=$(date +%s)
index=0
for select_stmt in $(<$QUERY_FILE_PATH)
do
# run the test until you hit a line with the string 'END OF BENCHMARK' in the file
if [[ "$select_stmt" == *'END OF BENCHMARK'* ]]; then
break
fi
printf "\n\033[32;1m Let's benchmark this query... \033[0m\n";
printf "$select_stmt";
SECONDS=0;
bq query --use_cache=false --nouse_legacy_sql $select_stmt # critical to turn cache off for this test
duration=$SECONDS
# get current timestamp in milliseconds
ran_on=$(date +%s)
index=$((index+1))
printf "\n\033[32;1m Here's how long it took... \033[0m\n\n";
echo "Query $index ran in $(($duration / 60)) minutes and $(($duration % 60)) seconds."
printf "\n\033[32;1m Writing to our benchmark table... \033[0m\n\n";
insert_stmt="insert into $BQ_DATASET.perf(performance_test_num, query_num, elapsed_time_sec, ran_on) values($start, $index, $duration, $ran_on)"
printf "$insert_stmt"
bq query --nouse_legacy_sql $insert_stmt
done
end=$(date +%s)
printf "Benchmark test complete"
Explanation: Setup an automated test
Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis.
Below is a shell script that:
1. Accepts a BigQuery dataset to benchmark
2. Accepts a list of semi-colon separated queries to run
3. Loops through each query and calls the bq query command
4. Records the execution time into a separate BigQuery performance table perf
Execute the below statement and follow along with the results as you benchmark a few example queries (don't worry, we've already ran the full 99 recently so you won't have to).
After executing, wait 1-2 minutes for the benchmark test to complete
End of explanation
%%bigquery
SELECT * FROM tpcds_2t_baseline.perf
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
ORDER BY ran_on
Explanation: Viewing the benchmark results
As part of the benchmark test, we stored the processing time of each query into a new perf BigQuery table. We can query that table and get some performance stats for our test.
First are each of the tests we ran:
End of explanation
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
MAX(performance_test_num) AS latest_performance_test_num,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec
FROM
tpcds_2t_baseline.perf
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
Explanation: And finally, the overall statistics for the entire test:
End of explanation |
235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial Gaussian Process inference in PyMC3
This is the first step in modelling Species occurrence.
The good news is that MCMC works;
the bad news is that it's computationally intensive.
Step1: Simulated gaussian data
Step2: Diagnostics
For one parameter it took around 1.3 minutes
For two parameters it took 4min 27 seconds
Step3: Simulated Binomial data with latent Gaussian Field
Step4: Examine actual posterior distribution
The posterior is analytically tractable so we can compute the posterior mean explicitly. Rather than computing the inverse of the covariance matrix K, we use the numerically stable calculation described in Algorithm 2.1 of the book “Gaussian Processes for Machine Learning” (2006) by Rasmussen and Williams, which is available online for free.
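For reference, a minimal NumPy/SciPy sketch of that Algorithm 2.1 computation (K, K_star, K_star_star, y and sigma_n are placeholders for the train/test covariance matrices, observed values and noise standard deviation):
import numpy as np
from scipy.linalg import cholesky, cho_solve
def gp_posterior(K, K_star, K_star_star, y, sigma_n):
    # Algorithm 2.1 (Rasmussen & Williams): Cholesky factorization instead of a direct inverse.
    L = cholesky(K + sigma_n**2 * np.eye(len(y)), lower=True)
    alpha = cho_solve((L, True), y)
    mean = K_star.T @ alpha              # posterior mean at the test points
    v = np.linalg.solve(L, K_star)
    cov = K_star_star - v.T @ v          # posterior covariance at the test points
    return mean, cov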
Step5: OK, it's good to have the analytical solution, but that's not always possible, so let's do some computing.
Model in PyMC3
Step6: Evaluate posterior fit
The posterior samples are consistent with the analytically derived posterior and behave how one would expect: narrower near areas with lots of observations and wider in areas with more uncertainty.
Step7: Classification
In Gaussian process classification, the likelihood is not normal and thus the posterior is not analytically tractable. The prior is again a multivariate normal with covariance matrix K, and the likelihood is the standard likelihood for logistic regression
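A minimal PyMC3 sketch of that classification setup (X and y below are placeholder arrays of 2-D locations and 0/1 labels; the priors are illustrative, not the ones used later in this notebook, and sampling then proceeds as in the next step):
import numpy as np
import pymc3 as pm
X = np.random.uniform(0, 50, size=(50, 2))   # placeholder coordinates
y = np.random.binomial(1, 0.5, size=50)      # placeholder 0/1 observations
with pm.Model() as clf_model:
    sigma2 = pm.HalfNormal('sigma2', sd=2)
    phi = pm.Uniform('phi', 1, 20)
    cov = sigma2 * pm.gp.cov.Matern32(2, phi, active_dims=[0, 1])
    gp = pm.gp.Latent(cov_func=cov)
    f = gp.prior('f', X=X)
    # Logistic-regression likelihood: p(y=1) = invlogit(f)
    y_lik = pm.Bernoulli('y_lik', p=pm.math.invlogit(f), observed=y)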
Step8: Sample from posterior distribution
Step9: Evaluate posterior fit
The posterior looks good, though the fit is, unsurprisingly, erratic outside the range of the observed data. | Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps/external_plugins/spystats/')
#import django
#django.setup()
import pandas as pd
import matplotlib.pyplot as plt
## Use the ggplot style
plt.style.use('ggplot')
import numpy as np
## Model Specification
import pymc3 as pm
from spystats import tools
Explanation: Spatial Gaussian Process inference in PyMC3
This is the first step in modelling Species occurrence.
The good news is that MCMC works.
The bad news is that it is computationally intensive.
End of explanation
sigma=3.5
range_a=10.13
kappa=3.0/2.0
#ls = 0.2
#tau = 2.0
cov = sigma * pm.gp.cov.Matern32(2, range_a,active_dims=[0,1])
n = 10
grid = tools.createGrid(grid_sizex=n,grid_sizey=n,minx=0,miny=0,maxx=50,maxy=50)
K = cov(grid[['Lon','Lat']].values).eval()
sample = pm.MvNormal.dist(mu=np.zeros(K.shape[0]), cov=K).random(size=1)
grid['Z'] = sample
plt.figure(figsize=(14,4))
plt.imshow(grid.Z.values.reshape(n,n),interpolation=None)
print("sigma: %s, phi: %s"%(sigma,range_a))
## Analysis, GP only one parameter to fit
# The variational method is much beter.
with pm.Model() as model:
#sigma = 1.0
sigma = pm.Uniform('sigma',0,4)
phi = pm.Normal('phi',mu=8,sd=3)
# phi = pm.Uniform('phi',5,10)
cov = sigma * pm.gp.cov.Matern32(2,phi,active_dims=[0,1])
K = cov(grid[['Lon','Lat']].values)
y_obs = pm.MvNormal('y_obs',mu=np.zeros(n*n),cov=K,observed=grid.Z)
#gp = pm.gp.Latent(cov_func=cov,observed=sample)
# Use elliptical slice sampling
#ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K)
#ess_Step = pm.HamiltonianMC()
#%time trace = pm.sample(5000)
## Variational
%time results = pm.fit()
Explanation: Simulated gaussian data
End of explanation
from pymc3 import find_MAP
map_estimate = find_MAP(model=model)
map_estimate
Explanation: Diagnostics
For one parameter it took around 1.3 minutes
For two parameters it took 4min 27 seconds
End of explanation
sigma=3.5
range_a=10.13
kappa=3.0/2.0
#ls = 0.2
#tau = 2.0
cov = sigma * pm.gp.cov.Matern32(2, range_a,active_dims=[0,1])
n = 30
grid = tools.createGrid(grid_sizex=n,grid_sizey=n,minx=0,miny=0,maxx=50,maxy=50)
K = cov(grid[['Lon','Lat']].values).eval()
pfield = pm.MvNormal.dist(mu=np.zeros(K.shape[0]), cov=K).random(size=1)
alpha = 0
poiss_data = np.exp(alpha + pfield)
grid['Z'] = poiss_data
plt.figure(figsize=(14,4))
plt.imshow(grid.Z.values.reshape(n,n),interpolation=None)
print("sigma: %s, phi: %s"%(sigma,range_a))
pm.sample?
## Analysis, GP only one parameter to fit
# The variational method is much better.
with pm.Model() as model:
#sigma=3.5
range_a=10.13
sigma = pm.Uniform('sigma',0,4)
#phi = pm.HalfNormal('phi',mu=8,sd=3)
phi = pm.Uniform('phi',6,20)
cov = sigma * pm.gp.cov.Matern32(2,phi,active_dims=[0,1])
#K = cov(grid[['Lon','Lat']].values)
#phiprint = tt.printing.Print('phi')(phi)
## The latent function
gp = pm.gp.Latent(cov_func=cov)
## I don't know why this
f = gp.prior("latent_field", X=grid[['Lon','Lat']].values,reparameterize=False)
#f_print = tt.printing.Print('latent_field')(f)
y_obs = pm.Poisson('y_obs',mu=f,observed=grid.Z)
#y_obs = pm.MvNormal('y_obs',mu=np.zeros(n*n),cov=K,observed=grid.Z)
#gp = pm.gp.Latent(cov_func=cov,observed=sample)
# Use elliptical slice sampling
#ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K)
#step = pm.HamiltonianMC()
#step = pm.Metropolis()
#%time trace = pm.sample(5000,step)#,tune=0,chains=1)
## Variational
%time results = pm.fit()
# Note: this is giving a lot of inf values in the average loss
pm.traceplot(trace)
for RV in model.basic_RVs:
print(RV.name, RV.logp(model.test_point))
from pymc3 import find_MAP
map_estimate = find_MAP(model=model)
map_estimate
np.log(0)
import theano.tensor as tt
Explanation: Simulated Binomial data with latent Gaussian Field
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
ax.scatter(X0, f, s=40, color='b', label='True points');
# Analytically compute posterior mean
## This is the cholesky decomposition of the Covariance Matrix with kernel nugget
L = np.linalg.cholesky(K_noise.eval())
## Faith step: this solves for x such that Lx = f, and then uses x to solve for y such that L.T y = x
alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
## Multiply the posterior (Algorithm 2.1 in Rasmussen)
## Using the "extended matrix" K_s
post_mean = np.dot(K_s.T.eval(), alpha)
ax.plot(X0, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
ax.legend();
Explanation: Examine actual posterior distribution
The posterior is analytically tractable so we can compute the posterior mean explicitly. Rather than computing the inverse of the covariance matrix K, we use the numerically stable calculation described in Algorithm 2.1 of the book “Gaussian Processes for Machine Learning” (2006) by Rasmussen and Williams, which is available online for free.
End of explanation
with pm.Model() as model:
# The actual distribution of f_sample doesn't matter as long as the shape is right since it's only used
# as a dummy variable for slice sampling with the given prior
### From doc:
###
f_sample = pm.Flat('f_sample', shape=(n, ))
## Actually, pm.Flat is a zero array of shape n
# Likelihood
## The covariance is only in the diagonal
y = pm.MvNormal('y', observed=sample, mu=f_sample, cov=noise * tt.eye(n), shape=n)
# Interpolate function values using noisy covariance matrix
## Deterministic allows to compose (do algebra) with RV in many different ways.
##While these transformations work seamlessly, its results are not stored automatically.
##Thus, if you want to keep track of a transformed variable, you have to use pm.Deterministic:
## from http://docs.pymc.io/notebooks/api_quickstart.html
## So in this case is transforming the rv into:
## the low triangular cholesky decomposition of the Covariance with nugget
L = tt.slinalg.cholesky(K_noise)
## So this is for calculating the "kernel" part of the MVN i.e. (mu -x).T * (LL.T)^-1 * (mu-x)
## but considering mu = 0 we have that x = linalg.solve(L,y) (because Lx = y)
## Then, L.T*x)
f_pred = pm.Deterministic('f_pred', tt.dot(K_s.T, tt.slinalg.solve(L.T, tt.slinalg.solve(L, f_sample))))
# Use elliptical slice sampling
ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K_stable)
trace = pm.sample(5000, start=model.test_point, step=[ess_step], progressbar=False, random_seed=1)
Explanation: Ok, it's good to have the analytical solution, but that's not always possible.
Let's do some computing.
Model in PyM3
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X0, trace['f_pred'][idx], alpha=0.02, color='navy')
ax.scatter(X0, f, s=40, color='k', label='True points');
ax.plot(X0, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.legend();
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
pm.traceplot(trace)
Explanation: Evaluate posterior fit
The posterior samples are consistent with the analytically derived posterior and behave as one would expect: narrower near areas with lots of observations and wider in areas with more uncertainty.
End of explanation
np.random.seed(5)
f = np.random.multivariate_normal(mean=np.zeros(n), cov=K_stable.eval())
# Separate data into positive and negative classes
f[f > 0] = 1
f[f <= 0] = 0
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X, trace['f_pred'][idx], alpha=0.02, color='navy')
ax.scatter(X0, f, s=40, color='k', label='True points');
ax.plot(X, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.legend();
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
Explanation: Classification
In Gaussian process classification, the likelihood is not normal and thus the posterior is not analytically tractable. The prior is again a multivariate normal with covariance matrix K, and the likelihood is the standard likelihood for logistic regression:
\begin{equation}
L(y | f) = \Pi_n \sigma(y_n, f_n)
\end{equation}
Generate some example data
We generate random samples from a Gaussian process, assign any points greater than zero to a “positive” class, and assign all other points to a “negative” class.
End of explanation
with pm.Model() as model:
# Again, f_sample is just a dummy variable
f_sample = pm.Flat('f_sample', shape=n)
f_transform = pm.invlogit(f_sample)
# Binomial likelihood
y = pm.Binomial('y', observed=f, n=np.ones(n), p=f_transform, shape=n)
# Interpolate function values using noiseless covariance matrix
L = tt.slinalg.cholesky(K_stable)
f_pred = pm.Deterministic('f_pred', tt.dot(K_s.T, tt.slinalg.solve(L.T, tt.slinalg.solve(L, f_transform))))
# Use elliptical slice sampling
ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K_stable)
trace = pm.sample(5000, start=model.test_point, step=[ess_step], progressbar=False, random_seed=1)
Explanation: Sample from posterior distribution
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X, trace['f_pred'][idx], alpha=0.04, color='navy')
ax.scatter(X0, f, s=40, color='k');
ax.set_xlim(0, 3);
ax.set_ylim(-0.1, 1.1);
Explanation: Evaluate posterior fit
The posterior looks good, though the fit is, unsurprisingly, erratic outside the range of the observed data.
End of explanation |
236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Input pipeline into Keras
In this notebook, we will look at how to read large datasets, datasets that may not fit into memory, using TensorFlow. We can use the tf.data pipeline to feed data to Keras models that use a TensorFlow backend.
Learning Objectives
Use tf.data to read CSV files
Load the training data into memory
Prune the data by removing columns
Use tf.data to map features and labels
Adjust the batch size of our dataset
Shuffle the dataset to optimize for deep learning
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Let's start off with the Python imports that we need.
Step1: Let's make sure we install the necessary version of tensorflow. After doing the pip install above, click Restart the kernel on the notebook so that the Python environment picks up the new packages.
Step2: Locating the CSV files
We will start with the CSV files that we wrote out in the first notebook of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data
Step3: Use tf.data to read the CSV files
See the documentation for make_csv_dataset.
If you have TFRecords (which is recommended), use make_batched_features_dataset instead.
Step4: Note that this is a prefetched dataset. If you loop over the dataset, you'll get the rows one-by-one. Let's convert each row into a Python dictionary
Step5: What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary. (1) remove the unwanted column "key" and (2) keep the label separate from the features.
Step6: Batching
Let's do both (loading, features_label) in our load_dataset function, and also add batching.
Step7: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely. | Python Code:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
!pip install tensorflow==2.1.0 --user
Explanation: Input pipeline into Keras
In this notebook, we will look at how to read large datasets, datasets that may not fit into memory, using TensorFlow. We can use the tf.data pipeline to feed data to Keras models that use a TensorFlow backend.
Learning Objectives
Use tf.data to read CSV files
Load the training data into memory
Prune the data by removing columns
Use tf.data to map features and labels
Adjust the batch size of our dataset
Shuffle the dataset to optimize for deep learning
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Let's start off with the Python imports that we need.
End of explanation
import os, json, math
import numpy as np
import shutil
import logging
# SET TF ERROR LOG VERBOSITY
logging.getLogger("tensorflow").setLevel(logging.ERROR)
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
# If you're not using TF 2.0+, let's enable eager execution
if tf.version.VERSION < '2.0':
print('Enabling v2 behavior and eager execution; if necessary restart kernel, and rerun notebook')
tf.enable_v2_behavior()
Explanation: Let's make sure we install the necessary version of tensorflow. After doing the pip install above, click Restart the kernel on the notebook so that the Python environment picks up the new packages.
End of explanation
!ls -l ../../data/*.csv
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the first notebook of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data
End of explanation
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
# load the training data
def load_dataset(pattern):
return tf.data.experimental.make_csv_dataset(pattern, 1, CSV_COLUMNS, DEFAULTS)
tempds = load_dataset('../../data/taxi-train*')
print(tempds)
Explanation: Use tf.data to read the CSV files
See the documentation for make_csv_dataset.
If you have TFRecords (which is recommended), use make_batched_features_dataset instead.
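For reference, a minimal sketch of the TFRecord variant is shown below. The file pattern and feature spec are assumptions for illustration, and keyword details may vary slightly between TensorFlow versions:
import tensorflow as tf

# Illustrative only: a batched input pipeline reading TFRecord files.
feature_spec = {
    'fare_amount': tf.io.FixedLenFeature([], tf.float32),
    'pickup_longitude': tf.io.FixedLenFeature([], tf.float32),
    'pickup_latitude': tf.io.FixedLenFeature([], tf.float32),
}
tfrecord_ds = tf.data.experimental.make_batched_features_dataset(
    file_pattern='../../data/taxi-train*.tfrecord',  # assumed file pattern
    batch_size=32,
    features=feature_spec,
    label_key='fare_amount',
    reader=tf.data.TFRecordDataset)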
End of explanation
# print a few of the rows
for n, data in enumerate(tempds):
row_data = {k: v.numpy() for k,v in data.items()}
print(n, row_data)
if n > 2:
break
Explanation: Note that this is a prefetched dataset. If you loop over the dataset, you'll get the rows one-by-one. Let's convert each row into a Python dictionary:
End of explanation
# get features, label
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# print a few rows to make it sure works
for n, data in enumerate(tempds):
row_data = {k: v.numpy() for k,v in data.items()}
features, label = features_and_labels(row_data)
print(n, label, features)
if n > 2:
break
Explanation: What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary. (1) remove the unwanted column "key" and (2) keep the label separate from the features.
End of explanation
def load_dataset(pattern, batch_size):
return (
tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
# try changing the batch size and watch what happens.
tempds = load_dataset('../../data/taxi-train*', batch_size=2)
print(list(tempds.take(3))) # truncate and print as a list
Explanation: Batching
Let's do both (loading, features_label) in our load_dataset function, and also add batching.
End of explanation
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
.cache())
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
tempds = load_dataset('../../data/taxi-train*', 2, tf.estimator.ModeKeys.TRAIN)
print(list(tempds.take(1)))
tempds = load_dataset('../../data/taxi-valid*', 2, tf.estimator.ModeKeys.EVAL)
print(list(tempds.take(1)))
Explanation: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
End of explanation |
237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Analysis with Pastas
Developed by Mark Bakker, TU Delft
Required files to run this notebook (all available from the data subdirectory)
Step1: Load the head observations
The first step in time series analysis is to load a time series of head observations. The time series needs to be stored as a pandas.Series object where the index is the date (and time, if desired). pandas provides many options to load time series data, depending on the format of the file that contains the time series. In this example, measured heads are stored in the csv file head_nb1.csv.
The heads are read from a csv file with the read_csv function of pandas and are then squeezed to create a pandas Series object. To check if you have the correct data type, use the type command as shown below.
Step2: The variable ho is now a pandas Series object. To see the first five lines, type ho.head().
Step3: The series can be plotted as follows
Step4: Load the stresses
The head variation shown above is believed to be caused by two stresses
Step5: Recharge
As a first simple model, the recharge is approximated as the measured rainfall minus the measured potential evaporation.
Step6: First time series model
Once the time series are read from the data files, a time series model can be constructed by going through the following three steps
Step7: The solve function has a number of default options that can be specified with keyword arguments. One of these options is that by default a fit report is printed to the screen. The fit report includes a summary of the fitting procedure, the optimal values obtained by the fitting routine, and some basic statistics. The model contains five parameters | Python Code:
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.set_log_level("ERROR")
ps.show_versions()
Explanation: Time Series Analysis with Pastas
Developed by Mark Bakker, TU Delft
Required files to run this notebook (all available from the data subdirectory):
Head files: head_nb1.csv, B58C0698001_1.csv, B50H0026001_1.csv, B22C0090001_1.csv, headwell.csv
Precipitation files: rain_nb1.csv, neerslaggeg_HEIBLOEM-L_967.txt, neerslaggeg_ESBEEK_831.txt, neerslaggeg_VILSTEREN_342.txt, rainwell.csv
Evaporation files: evap_nb1.csv, etmgeg_380.txt, etmgeg_260.txt, evapwell.csv
Well files: well1.csv, well2.csv
Figure: b58c0698_dino.png
Pastas
Pastas is a computer program for hydrological time series analysis and is available from the Pastas GitHub. Pastas makes heavy use of pandas timeseries. An introduction to pandas timeseries can be found, for example, here. The Pastas documentation is available here.
End of explanation
ho = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
print('The data type of the oseries is:', type(ho))
Explanation: Load the head observations
The first step in time series analysis is to load a time series of head observations. The time series needs to be stored as a pandas.Series object where the index is the date (and time, if desired). pandas provides many options to load time series data, depending on the format of the file that contains the time series. In this example, measured heads are stored in the csv file head_nb1.csv.
The heads are read from a csv file with the read_csv function of pandas and are then squeezed to create a pandas Series object. To check if you have the correct data type, use the type command as shown below.
End of explanation
ho.head()
Explanation: The variable ho is now a pandas Series object. To see the first five lines, type ho.head().
End of explanation
ho.plot(style='.', figsize=(12, 4))
plt.ylabel('Head [m]');
plt.xlabel('Time [years]');
Explanation: The series can be plotted as follows
End of explanation
rain = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
print('The data type of the rain series is:', type(rain))
evap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
print('The data type of the evap series is', type(evap))
plt.figure(figsize=(12, 4))
rain.plot(label='rain')
evap.plot(label='evap')
plt.xlabel('Time [years]')
plt.ylabel('Rainfall/Evaporation (m/d)')
plt.legend(loc='best');
Explanation: Load the stresses
The head variation shown above is believed to be caused by two stresses: rainfall and evaporation. Measured rainfall is stored in the file rain_nb1.csv and measured potential evaporation is stored in the file evap_nb1.csv.
The rainfall and potential evaporation are loaded and plotted.
End of explanation
recharge = rain - evap
plt.figure(figsize=(12, 4))
recharge.plot()
plt.xlabel('Time [years]')
plt.ylabel('Recharge (m/d)');
Explanation: Recharge
As a first simple model, the recharge is approximated as the measured rainfall minus the measured potential evaporation.
End of explanation
ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.solve(tmin='1985', tmax='2010')
Explanation: First time series model
Once the time series are read from the data files, a time series model can be constructed by going through the following three steps:
Create a Model object by passing it the observed head series. Store your model in a variable so that you can use it later on.
Add the stresses that are expected to cause the observed head variation to the model. In this example, this is only the recharge series. For each stress, a StressModel object needs to be created. Each StressModel object needs three input arguments: the time series of the stress, the response function that is used to simulate the effect of the stress, and a name. In addition, it is recommended to specify the kind of series, which is used to perform a number of checks on the series and fix problems when needed. This checking and fixing of problems (for example, what to substitute for a missing value) depends on the kind of series. In this case, the time series of the stress is stored in the variable recharge, the Gamma function is used to simulate the response, the series will be called 'recharge', and the kind is prec which stands for precipitation. One of the other keyword arguments of the StressModel class is up, which means that a positive stress results in an increase (up) of the head. The default value is True, which we use in this case as a positive recharge will result in the heads going up. Each StressModel object needs to be stored in a variable, after which it can be added to the model.
When everything is added, the model can be solved. The default option is to minimize the sum of the squares of the errors between the observed and modeled heads.
End of explanation
ml.plot(figsize=(12, 4));
ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.solve(tmin='1985', tmax='2010', solver=ps.LeastSquares)
ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.set_parameter('recharge_n', vary=False)
ml.solve(tmin='1985', tmax='2010', solver=ps.LeastSquares)
ml.plot(figsize=(10, 4));
Explanation: The solve function has a number of default options that can be specified with keyword arguments. One of these options is that by default a fit report is printed to the screen. The fit report includes a summary of the fitting procedure, the optimal values obtained by the fitting routine, and some basic statistics. The model contains five parameters: the parameters $A$, $n$, and $a$ of the Gamma function used as the response function for the recharge, the parameter $d$, which is a constant base level, and the parameter $\alpha$ of the noise model, which will be explained a little later on in this notebook.
The results of the model are plotted below.
End of explanation |
238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an ARIMA Model for a Financial Dataset
In this notebook, you will build an ARIMA model for AAPL stock closing prices. The lab objectives are
Step1: Import data from Google Cloud Storage
In this section we'll read some ten years' worth of AAPL stock data into a Pandas dataframe. We want to modify the dataframe such that it represents a time series. This is achieved by setting the date as the index.
Step2: Prepare data for ARIMA
The first step in our preparation is to resample the data such that stock closing prices are aggregated on a weekly basis.
Step3: Let's create a column for weekly returns. Take the log of the returns to normalize large fluctuations.
Step4: Test for stationarity of the udiff series
Time series are stationary if they do not contain trends or seasonal swings. The Dickey-Fuller test can be used to test for stationarity.
Step5: With a p-value < 0.05, we can reject the null hypothesis. This data set is stationary.
ACF and PACF Charts
Making autocorrelation and partial autocorrelation charts help us choose hyperparameters for the ARIMA model.
The ACF gives us a measure of how much each "y" value is correlated to the previous n "y" values.
The PACF, or partial autocorrelation function, gives us (a sample of) the amount of correlation between two "y" values separated by n lags, excluding the impact of all the "y" values in between them.
Step6: The table below summarizes the patterns of the ACF and PACF.
<img src="../imgs/How_to_Read_PACF_ACF.jpg" alt="drawing" width="300" height="300"/>
The above chart shows that reading PACF gives us a lag "p" = 3 and reading ACF gives us a lag "q" of 1. Let's use statsmodels' ARMA with those parameters to build a model. The way to evaluate the model is to look at AIC - see if it reduces or increases. The lower the AIC (i.e. the more negative it is), the better the model.
Build ARIMA Model
Since we differenced the weekly closing prices, we technically only need to build an ARMA model. The data has already been integrated and is stationary.
Step7: Our model doesn't do a good job predicting variance in the original data (peaks and valleys).
Step8: Let's make a forecast 2 weeks ahead | Python Code:
!pip install --user statsmodels
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime
%config InlineBackend.figure_format = 'retina'
Explanation: Building an ARIMA Model for a Financial Dataset
In this notebook, you will build an ARIMA model for AAPL stock closing prices. The lab objectives are:
Pull data from Google Cloud Storage into a Pandas dataframe
Learn how to prepare raw stock closing data for an ARIMA model
Apply the Dickey-Fuller test
Build an ARIMA model using the statsmodels library
Make sure you restart the Python kernel after executing the pip install command below! After you restart the kernel you don't have to execute the command again.
End of explanation
df = pd.read_csv('gs://cloud-training/ai4f/AAPL10Y.csv')
df['date'] = pd.to_datetime(df['date'])
df.sort_values('date', inplace=True)
df.set_index('date', inplace=True)
print(df.shape)
df.head()
Explanation: Import data from Google Cloud Storage
In this section we'll read some ten years' worth of AAPL stock data into a Pandas dataframe. We want to modify the dataframe such that it represents a time series. This is achieved by setting the date as the index.
End of explanation
df_week = df.resample('w').mean()
df_week = df_week[['close']]
df_week.head()
Explanation: Prepare data for ARIMA
The first step in our preparation is to resample the data such that stock closing prices are aggregated on a weekly basis.
End of explanation
df_week['weekly_ret'] = np.log(df_week['close']).diff()
df_week.head()
# drop null rows
df_week.dropna(inplace=True)
df_week.weekly_ret.plot(kind='line', figsize=(12, 6));
udiff = df_week.drop(['close'], axis=1)
udiff.head()
Explanation: Let's create a column for weekly returns. Take the log of the returns to normalize large fluctuations.
End of explanation
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
rolmean = udiff.rolling(20).mean()
rolstd = udiff.rolling(20).std()
plt.figure(figsize=(12, 6))
orig = plt.plot(udiff, color='blue', label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std Deviation')
plt.title('Rolling Mean & Standard Deviation')
plt.legend(loc='best')
plt.show(block=False)
# Perform Dickey-Fuller test
dftest = sm.tsa.adfuller(udiff.weekly_ret, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
for key, value in dftest[4].items():
dfoutput['Critical Value ({0})'.format(key)] = value
dfoutput
Explanation: Test for stationarity of the udiff series
Time series are stationary if they do not contain trends or seasonal swings. The Dickey-Fuller test can be used to test for stationarity.
End of explanation
from statsmodels.graphics.tsaplots import plot_acf
# the autocorrelation chart provides just the correlation at increasing lags
fig, ax = plt.subplots(figsize=(12,5))
plot_acf(udiff.values, lags=10, ax=ax)
plt.show()
from statsmodels.graphics.tsaplots import plot_pacf
fig, ax = plt.subplots(figsize=(12,5))
plot_pacf(udiff.values, lags=10, ax=ax)
plt.show()
Explanation: With a p-value < 0.05, we can reject the null hypothesis. This data set is stationary.
ACF and PACF Charts
Making autocorrelation and partial autocorrelation charts help us choose hyperparameters for the ARIMA model.
The ACF gives us a measure of how much each "y" value is correlated to the previous n "y" values.
The PACF, or partial autocorrelation function, gives us (a sample of) the amount of correlation between two "y" values separated by n lags, excluding the impact of all the "y" values in between them.
End of explanation
from statsmodels.tsa.arima_model import ARMA
# Notice that you have to use udiff - the differenced data rather than the original data.
ar1 = ARMA(tuple(udiff.values), (3, 1)).fit()
ar1.summary()
Explanation: The table below summarizes the patterns of the ACF and PACF.
<img src="../imgs/How_to_Read_PACF_ACF.jpg" alt="drawing" width="300" height="300"/>
The above chart shows that reading PACF gives us a lag "p" = 3 and reading ACF gives us a lag "q" of 1. Let's use statsmodels' ARMA with those parameters to build a model. The way to evaluate the model is to look at AIC - see if it reduces or increases. The lower the AIC (i.e. the more negative it is), the better the model.
Build ARIMA Model
Since we differenced the weekly closing prices, we technically only need to build an ARMA model. The data has already been integrated and is stationary.
End of explanation
plt.figure(figsize=(12, 8))
plt.plot(udiff.values, color='blue')
preds = ar1.fittedvalues
plt.plot(preds, color='red')
plt.show()
Explanation: Our model doesn't do a good job predicting variance in the original data (peaks and valleys).
End of explanation
steps = 2
forecast = ar1.forecast(steps=steps)[0]
plt.figure(figsize=(12, 8))
plt.plot(udiff.values, color='blue')
preds = ar1.fittedvalues
plt.plot(preds, color='red')
plt.plot(pd.DataFrame(np.array([preds[-1],forecast[0]]).T,index=range(len(udiff.values)+1, len(udiff.values)+3)), color='green')
plt.plot(pd.DataFrame(forecast,index=range(len(udiff.values)+1, len(udiff.values)+1+steps)), color='green')
plt.title('Display the predictions with the ARIMA model')
plt.show()
Explanation: Let's make a forecast 2 weeks ahead:
End of explanation |
239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson18 Individual Assignment
Individual means that you do it yourself. You won't learn to code if you don't struggle for yourself and write your own code. Remember that while you can discuss the general (algorithmic) way to solve a problem, you should not even be looking at anyone else's code or showing anyone else your code for an individual assignment.
Review the Group Work guidelines on Canvas and/or ask an instructor if you have any questions.
Programming Practice
Be sure to spell all function names correctly - misspelled functions will lose points (and often break anyway since no one is sure what to type to call it). If you prefer showing your earlier, scratch work as you figure out what you are doing, please be sure that you make a final, complete, correct last function in its own cell that you then call several times to test. In other words, separate your thought process/working versions from the final one (a comment that tells us which is the final version would be lovely).
Every function should have at least a docstring at the start that states what it does (see Lesson3 Team Notebook if you need a reminder). Make other comments as necessary.
Make sure that you are running test cases (plural) for everything and commenting on the results in markdown. Your comments should discuss how you know that the test case results are correct.
part 1
Step1: notice that the population is an array which is a numpy data type that has elements of list-like and tuple-like behavior. If you want more info, here is some more info from the NumPy docs. We will do more with arrays later. For now the best part is that we can easily do mathematical operations on an array without looping through the elements (thank you NumPy!). Also, a hint for later, if you want
Step2: 1. What type of distribution do we have in the population?
B. Define a sample_means function that | Python Code:
## import libraries we need
import numpy as np
import matplotlib.pyplot as plt
## makes our population of 10,000 numbers from 1-100
## you might get a warning that this function is deprecated, but it still works
population = np.random.random_integers(1, high=100, size=10000)
## look at 1st 25 (don't print out all 10,000!)
population[:25]
Explanation: Lesson18 Individual Assignment
Individual means that you do it yourself. You won't learn to code if you don't struggle for yourself and write your own code. Remember that while you can discuss the general (algorithmic) way to solve a problem, you should not even be looking at anyone else's code or showing anyone else your code for an individual assignment.
Review the Group Work guidelines on Canvas and/or ask an instructor if you have any questions.
Programming Practice
Be sure to spell all function names correctly - misspelled functions will lose points (and often break anyway since no one is sure what to type to call it). If you prefer showing your earlier, scratch work as you figure out what you are doing, please be sure that you make a final, complete, correct last function in its own cell that you then call several times to test. In other words, separate your thought process/working versions from the final one (a comment that tells us which is the final version would be lovely).
Every function should have at least a docstring at the start that states what it does (see Lesson3 Team Notebook if you need a reminder). Make other comments as necessary.
Make sure that you are running test cases (plural) for everything and commenting on the results in markdown. Your comments should discuss how you know that the test case results are correct.
part 1: Sampling
A. Before you code, we need to set up the model:
In this activity, we're going to look at the effect of sampling on summary statistics.
Sampling is when we select a number of individuals out of a larger population.
There are many sampling strategies, and sampling is used daily in most science experiments and data analysis. We will be looking at a "simple random sample" where we select our sample, well, randomly. This fits in nicely with what we've been doing with random numbers.
Let's go
run the code in the cells below and look at the comments to see what it does
End of explanation
## confirm the data type
type(population)
## for example, if we want the population mean
population.mean()
## or, if we want the population standard deviation
population.std()
## you try with .min() and .max()
## make a histogram of the population
%matplotlib inline
Explanation: notice that the population is an array which is a numpy data type that has elements of list-like and tuple-like behavior. If you want more info, here is some more info from the NumPy docs. We will do more with arrays later. For now the best part is that we can easily do mathematical operations on an array without looping through the elements (thank you NumPy!). Also, a hint for later, if you want
End of explanation
## import the data from csv into a pandas DataFrame
import pandas as pd
wings = pd.read_csv('fly_wings_v2.csv')
Explanation: 1. What type of distribution do we have in the population?
B. Define a sample_means function that (a rough reference sketch appears after this list):
+ takes 2 parameters - n = the number of samples, and p = the number of individuals in each sample
+ for example, n=100, p=10 would be 100 samples with 10 numbers in each sample
+(hint look at numpy.random.choice)
+ returns nothing
+ your sample_means function should:
+ use the population that we have defined above
+ print the mean, standard deviation, minimum, and maximum of the collection of samples (n) in a user friendly way
+ put another way, the mean of the means, the std of the means, etc.
+ (hint using a np.array will make your life easier for these calcs)
+ plot a histogram of the means of the samples
then test your sample_means function with:
+ n=100, p=10
+ n=10, p=100
+ n=100, p=100
+ at least 3 more test cases of your choice
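For reference, a rough sketch of the kind of function being asked for is shown below. This is only one possible shape (it assumes the population array defined earlier in this notebook); your own version should also follow the Programming Practice guidelines above.
def sample_means(n, p):
    '''Draw n random samples of size p from population and summarize the sample means.'''
    means = np.array([np.random.choice(population, size=p).mean() for _ in range(n)])
    print('mean of the sample means:', means.mean())
    print('std of the sample means:', means.std())
    print('min of the sample means:', means.min())
    print('max of the sample means:', means.max())
    plt.hist(means)
    plt.show()

sample_means(100, 10)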
2. Once you are done testing, look at your results and comment on what happens to the shape of the distribution and the 4 summary statistics that you print for each simulation.
3. Compare the values of the summary statistics and the shape of the distributions that you got from your tests to the values from the whole population. Explain what you observe.
4. Provide some general conclusions about the simple random sampling that you simulated.
part 2: Histogram Challenge!
Your mission, should you choose to accept it, is to get as far as you can in replicating the histogram below:
A note to my fellow graphing obsessives and the overwhelmed:
Do the best you can, but this question is worth 15 pts and if you get the data graphed and the thing labeled reasonably, you'll get a lot of partial credit. We just wanted to throw out a challenge that pushed your graphing and formatting skills.
First run the code below to get the data, then you're on your own! (make sure the data file fly_wings_v2.csv from canvas is in the same directory as this notebook)
End of explanation |
240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goals for today's pre-class assignment
Write a Python program to make simple calculations
Work with number and string data types
Work with the list data type
Assignment instructions
Watch the videos below, read through Section 3.1 of the Python Tutorial, and complete the assigned programming problems. Please get started early, and come to office hours if you have any questions!
Recall that to make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' key (you'll have to do this to get the YouTube videos to run). To edit a cell (to add answers, for example) you double-click on the cell, add your text, and then enter it by holding down 'shift' and pressing 'enter'
This assignment is due by 11
Step1: Question 1
Step2: Question 2
Step3: Question 3 | Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# Display a specific YouTube video, with a given width and height.
# WE STRONGLY RECOMMEND that you can watch the video in full-screen mode
# (much higher resolution) by clicking the little box in the bottom-right
# corner of the video.
YouTubeVideo("cCLB1sNpNYo",width=640,height=360)
Explanation: Goals for today's pre-class assignment
Write a Python program to make simple calculations
Work with number and string data types
Work with the list data type
Assignment instructions
Watch the videos below, read through Section 3.1 of the Python Tutorial, and complete the assigned programming problems. Please get started early, and come to office hours if you have any questions!
Recall that to make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' key (you'll have to do this to get the YouTube videos to run). To edit a cell (to add answers, for example) you double-click on the cell, add your text, and then enter it by holding down 'shift' and pressing 'enter'
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 3. Submission instructions can be found at the end of the notebook.
End of explanation
# write your program here. Don't forget that you execute your program by holding
# down 'shift' and pressing 'enter'
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("yv7klK57Ezc",width=640,height=360)
Explanation: Question 1: In the cell below, write a simple program to calculate the area of a rectangle where you give it the length and width of the rectangle as variables, store it in a third variable, and print out the resulting area. Add comments to each line to explain what you're doing!
End of explanation
# write your program here, using multiple cells if necessary (adding extra cells using
# the 'Cell' menu at the top of this notebook). Don't forget that you can execute
# your program by holding down 'shift' and pressing 'enter' in each cell!
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("TJ_bGrigAMg",width=640,height=360)
Explanation: Question 2: In the cells below, create a variable containing a floating-point number and a second variable containing an integer. Turn both into strings and concatenate them, and store it in a new variable. Finally, print out the last value in your newly-concatenated variable. You can use more than one cell if you need to print out multiple quantities!
End of explanation
# write your program here, using multiple cells if necessary (adding extra cells using
# the 'Cell' menu at the top of this notebook). Don't forget that you can execute
# your program by holding down 'shift' and pressing 'enter' in each cell!
Explanation: Question 3: In the cells below, create a list that contains, in this order:
your first name as a string
your age as a floating-point number
your room or apartment number as an integer
Print this list out. Then, after you print it out, replace your first name in the list with your last name, and replace your age with the current year as an integer. Then, append one or more new variables (of whatever type), print out the length of the list using the len() function, and then print out the entire list again!
End of explanation |
241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Errors, or bugs, in your software
Today we'll cover dealing with errors in your Python code, an important aspect of writing software.
What is a software bug?
According to Wikipedia (accessed 16 Oct 2018), a software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or behave in unintended ways.
Where did the terminology come from?
Engineers have used the term well before electronic computers and software. Sometimes Thomas Edison is credited with the first recorded use of bug in that fashion. [Wikipedia]
If incorrect code is never executed, is it a bug?
This is the software equivalent to "If a tree falls and no one hears it, does it make a sound?".
Three classes of bugs
Let's discuss three major types of bugs in your code, from easiest to most difficult to diagnose
Step1: Syntax errors
Step2: INSTRUCTOR NOTE
Step3: INSTRUCTOR NOTE
Step4: Runtime errors
Step5: Semantic errors
Say we're trying to confirm that a trigonometric identity holds. Let's use the basic relationship between sine and cosine, given by the Pythagorean identity:
$$
\sin^2 \theta + \cos^2 \theta = 1
$$
We can write a function to check this
Step6: How to find and resolve bugs?
Debugging has the following steps
Step7: If we can't easily see the bug here, let's add print statements to see the variables change over time.
INSTRUCTOR NOTE
Step8: Note that the print statements significantly reduce legibility of the code. We would like to remove them when we're done debugging.
Step9: Now it works fine for the first set of inputs. Let's try other inputs.
We should have documented the inputs to the function!
Step10: We get nan, which stands for "Not a Number". What's going on here?
Let's add our print statements again, but it only fails later in the range of numbers. We may choose to print only if we find a nan.
Step11: By printing some of the intermediate items, we see the problem
Step12: Using Python's debugger, pdb
Python comes with a built-in debugger called pdb. It allows you to step line-by-line through a computation and examine what's happening at each step. Note that this should probably be your last resort in tracing down a bug. I've probably used it a dozen times or so in five years of coding. But it can be a useful tool to have in your toolbelt.
You can use the debugger by inserting the line
python
import pdb; pdb.set_trace()
within your script. To leave the debugger, type "exit()". To see the commands you can use, type "help".
Let's try this out
Step13: This can be a more convenient way to debug programs and step through the actual execution. | Python Code:
import numpy as np
Explanation: Errors, or bugs, in your software
Today we'll cover dealing with errors in your Python code, an important aspect of writing software.
What is a software bug?
According to Wikipedia (accessed 16 Oct 2018), a software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or behave in unintended ways.
Where did the terminology come from?
Engineers have used the term well before electronic computers and software. Sometimes Thomas Edison is credited with the first recorded use of bug in that fashion. [Wikipedia]
If incorrect code is never executed, is it a bug?
This is the software equivalent to "If a tree falls and no one hears it, does it make a sound?".
Three classes of bugs
Let's discuss three major types of bugs in your code, from easiest to most difficult to diagnose:
Syntax errors: Errors where the code is not written in a valid way. (Generally easiest to fix.)
Runtime errors: Errors where code is syntactically valid, but fails to execute. Often throwing exceptions here. (Sometimes easy to fix, harder when in other's code.)
Semantic errors: Errors where code is syntactically valid, but contains errors in logic. (Can be difficult to fix.)
End of explanation
print "This should only work in Python 2.x, not 3.x used in this class.
Explanation: Syntax errors
End of explanation
x = 1; y = 2
b = x == y # Boolean variable that is true when x & y have the same value
b = 1 = 2
Explanation: INSTRUCTOR NOTE:
Run as-is. Run. Error. Returns SyntaxError: Missing parentheses in call to print.
Add parentheses. Run. Still an error. Returns SyntaxError: EOL while scanning string literal.
Add closing quotation mark. Run. Should be successful.
End of explanation
b
Explanation: INSTRUCTOR NOTE:
Emphasize the difference between the single and double equal operator.
End of explanation
# invalid operation
a = 0
5/a # Division by zero
# invalid operation
input = '40'
input/11 # Incompatible types for the operation
Explanation: Runtime errors
End of explanation
import math
def check_pythagorean_identity(theta):
    '''Checks that Pythagorean identity holds for one input, theta'''
    return math.sin(theta)**2 + math.cos(theta)*2 == 1
check_pythagorean_identity(12)
Explanation: Semantic errors
Say we're trying to confirm that a trigonometric identity holds. Let's use the basic relationship between sine and cosine, given by the Pythagorean identity:
$$
\sin^2 \theta + \cos^2 \theta = 1
$$
We can write a function to check this:
End of explanation
def entropy(p):
items = p * np.log(p)
return -np.add(items)
Explanation: How to find and resolve bugs?
Debugging has the following steps:
Detection of an exception or invalid results.
Isolation of where the program causes the error. This is often the most difficult step.
Resolution of how to change the code to eliminate the error. Mostly, it's not too bad, but sometimes this can cause major revisions in codes.
Detection of Bugs
The detection of bugs is too often done by chance. While running your Python code, you encounter unexpected functionality, exceptions, or syntax errors. While we'll focus on this in today's lecture, you should never leave this up to chance in the future.
Software testing practices allow for thoughtful detection of bugs in software. We'll discuss more in the lecture on testing.
Isolation of Bugs
There are three main methods commonly used for bug isolation:
1. The "thought" method. Think about how your code is structured and which part of it would most likely lead to the exception or invalid result.
2. Inserting print statements (or other logging techniques)
3. Using a line-by-line debugger like pdb.
Typically, all three are used in combination, often repeatedly.
Using print statements
Say we're trying to compute the entropy of a set of probabilities. The form of the equation is
$$
H = -\sum_i p_i \log(p_i)
$$
We can write the function like this:
End of explanation
np.add?
Explanation: If we can't easily see the bug here, let's add print statements to see the variables change over time.
INSTRUCTOR NOTE:
Add print statements in tiered way, starting with simple print statements.
Point out that may need slight refactor on result.
python
def entropy(p):
print(p)
items = p * np.log(p)
print(items)
result = -np.sum(items)
print(result)
return result
Show complication of reading multiple print statements without labels.
Add labels so code looks like below.
python
def entropy(p):
print("p=%s" % p)
items = p * np.log(p)
print("items=%s" % items)
result = -np.sum(items)
print("result=%s" % result)
return result
End of explanation
def entropy(p):
items = p * np.log(p)
return -np.sum(items)
p = [0.1, 0.3, 0.5, 0.7, 0.9]
entropy(p)
Explanation: Note that the print statements significantly reduce legibility of the code. We would like to remove them when we're done debugging.
End of explanation
# Create a vector of probabilities.
p = np.arange(start=5., stop=-1., step=-0.5)
p /= np.sum(p)
p
entropy(p)
Explanation: Now it works fine for the first set of inputs. Let's try other inputs.
We should have documented the inputs to the function!
End of explanation
def entropy1(p):
print("p=%s" % str(p))
items = p * np.log(p)
    if np.any(np.isnan(items)):
print(items)
return -np.sum(items)
entropy1([.1, .2])
entropy1(p)
Explanation: We get nan, which stands for "Not a Number". What's going on here?
Let's add our print statements again, but it only fails later in the range of numbers. We may choose to print only if we find a nan.
End of explanation
def entropy2(p):
p = np.asarray(p) # convert p to array if necessary
print(p)
items = []
for val in p:
item = val * np.log(val)
if np.isnan(item):
print("%f makes a nan" % val)
items.append(item)
#items = p * np.log(ps)
return -np.sum(items)
entropy2(p)
Explanation: By printing some of the intermediate items, we see the problem: 0 * np.log(0) is resulting in a NaN. Though mathematically it's true that $\lim_{x \to 0} [x \log(x)] = 0$, the fact that we're performing the computation numerically means that we don't obtain this result.
Often, inserting a few print statements can be enough to figure out what's going on.
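One possible resolution (not part of the original notebook) is to handle the zero-probability entries explicitly, for example:
import numpy as np

def entropy_fixed(p):
    '''Entropy with the convention that 0 * log(0) contributes 0.'''
    p = np.asarray(p, dtype=float)
    safe_p = np.where(p > 0, p, 1.0)   # log(1) = 0, so zero entries add nothing
    return -np.sum(p * np.log(safe_p))

entropy_fixed([0.0, 0.5, 0.5])   # returns log(2), with no nan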
End of explanation
def entropy(p):
import pdb; pdb.set_trace()
items = p * np.log(p)
return -np.sum(items)
Explanation: Using Python's debugger, pdb
Python comes with a built-in debugger called pdb. It allows you to step line-by-line through a computation and examine what's happening at each step. Note that this should probably be your last resort in tracing down a bug. I've probably used it a dozen times or so in five years of coding. But it can be a useful tool to have in your toolbelt.
You can use the debugger by inserting the line
python
import pdb; pdb.set_trace()
within your script. To leave the debugger, type "exit()". To see the commands you can use, type "help".
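For quick reference, a few of the standard pdb commands are listed below (as comments, since they are typed at the (Pdb) prompt rather than in a cell):
# l          list the source code around the current line
# n          execute the next line (step over)
# s          step into a function call
# c          continue running until the next breakpoint
# p expr     print the value of an expression, e.g. p items
# q          quit the debugger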
Let's try this out:
End of explanation
p = [.1, -.2, .3]
entropy(p)
p = "[0.1, 0.3, 0.5, 0.7, 0.9]"
entropy(p)
Explanation: This can be a more convenient way to debug programs and step through the actual execution.
End of explanation |
242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Study of Exploration Algorithms in developmental robotics with Explauto
This tutorial explains and implements some exploration algorithms used in developmental robotics.
The goal is to get used to the concepts of Goal Babbling [Oudeyer and Kaplan, 2007], [Moulin-Frier and Oudeyer, 2013], Active Learning based on learning progress [Baranes and Oudeyer, 2013], and Model Babbling [Forestier and Oudeyer, 2016a], [Forestier and Oudeyer, 2016b], by experimenting with those strategies in simple environments composed of a robotic arm interacting with objects.
We use the Explauto Python library as a toolkit to help implement exploration algorithms.
The progression is the following
Step1: I. Exploring by hand the movements of a robotic arm
This first part is designed to have a first experience of the complexity of the task given to the robot, by the exploration of the motor parameters by hand.
The 3 joints of the robotic arm are controlled through DMPs with 3 basis functions, which results in 9 motor parameters in total that produce a movement of the arm. You can control those parameters by hand through the 9 following sliders (move the sliders and then hit "Run" button).
Try to catch the ball !
Step2: II. Random Motor Babbling
Now, let's use a higher dimensional arm (e.g. 7 joints and 3 DMP basis per joint so 21 motor parameters in total).
You can play with those parameters in the following block.
Step3: We first implement the Random Motor Babbling strategy, where the robot explores the dynamics of its arm by just trying random motor configurations.
In the following, we do 1000 iterations of Random Motor Babbling and for each iteration, if the ball was caught by the arm, we plot the trajectory of the ball (with a curve) and its last position (ball).
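As a schematic illustration of this strategy (not the notebook's actual code, which relies on the Explauto environment classes), the loop looks roughly like the sketch below; it assumes an environment object whose update(m) method executes a movement m and returns the resulting sensory outcome s, and motor bounds of [-1, 1]:
import numpy as np

def random_motor_babbling(environment, n_iterations=1000, n_motor=21):
    # Sample random motor parameters, execute them, and store the outcomes.
    outcomes = []
    for _ in range(n_iterations):
        m = np.random.uniform(-1., 1., n_motor)
        s = environment.update(m)
        outcomes.append(s)
    return np.array(outcomes)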
Step4: Exploration measure
We compute the number of cells in the above grid where the ball has been placed at the end of the different movements.
Step5: Using 7 joints and 3 DMP basis per joint, we can see that the ball has been caught only a few times and that a small number of cells have been reached with the ball.
If we increase the number of joints, it becomes even harder to randomly grab the ball.
If we increase the number of DMP basis, the movement lasts longer and has more chances to grab the ball.
III. Random Goal Babbling
Goal babbling consists in choosing goals in the sensory space and using the current state of the model of the environment (the sensorimotor model) to infer a motor action supposed to reach the goals (inverse prediction). The robot then executes the command through the environment and observes the actual sensory effect. The sensorimotor model is finally updated according to this experience.
In the Explauto library, there are different algorithms available to perform inverse prediction from the database of sensorimotor experience.
Here we will use the Nearest Neighbor algorithm. This sensorimotor model simply stores sensorimotor experience, ie. (m,s)
pairs where m are motor commands (here arm joint positions) and s are the corresponding sensory effects (here ball end positions).
When asked for an inverse prediction to reach a sensory goal s, it returns the motor command m associated to the nearest neighbor of s in the stored sensory experience, and adds some exploration noise (gaussian with standard deviation 'sigma_explo_ratio').
In Random Goal Babbling, the goals are chosen randomly in the sensory space, thus the bounds of the space are important.
Here the arm can reach the circle of radius 1, and we define the sensory space as the square of size 10, meaning that a lot of this space is unreachable. When the robot chooses an unreachable goal, it will explore around the closest reached points.
Step6: The previous figure is zoomed in to better see the reachable part of the sensory space.
Random Goal Babbling allows a more efficient covering of the reachable sensory space than with the motor babbling strategy (when the agent samples directly in the motor space).
In the next plot, we zoom out to plot the entire sensory space (of size goal_size).
Step7: IV. Active Goal Babbling
In the previous experiment, we have seen that the goals are randomly sampled in the goal space, even if only a small part of this space is reachable.
The agent could monitor how well it is achieving its goals and try to give itself goals that are reachable, but also not too easy.
We call this strategy Active Learning.
In order to monitor its performance on different goals, the goal space is discretized into small cells.
For each goal in a cell, the agent computes the error (distance between goal and reached point) to reach the goal.
If the reached point is not in the same cell as the goal, then the error is considered maximal.
Then, the derivative of these errors gives a measure of progress in each cell.
Here is the computation of the interest of a cell given the 20 (window size ws=20) last distance errors in this cell (from $cell.error_1$ to $cell.error_{ws}$)
Step8: The previous figure is zoomed in and shows only 16 of the 400 interest cells.
We can see here that the goals are not sampled uniformly in the bounds but in small cells, depending on their progress.
Let's plot a snapshot of the current progress (at iteration 2000) in each cell of the discretization.
The 2D space of the next plot is of size 'goal_size=10.', so the region of radius 1 that the arm can cover is small, in the middle.
Step9: The progress is 0 in unreachable cells, and can be high in reached cells, in the middle.
V. More complex environment
In the previous environments, the exploration of the sensory variables led to interesting learning, but in more realistic environments that might not be the case. We define here a similar environment with objects that are not interesting to interact with, either because they are too easy to learn, or because they are not learnable.
The environment is composed of the same arm with 21 parameters plus a stick that can be used as a tool to move a blue ball (the ball can't be grasped by the hand anymore).
We also add a static magenta ball that can't be moved in any way, and 2 random balls that do a random walk.
The motor space is the same as before (21D), and the sensory space is composed of the trajectories of each item
Step10: In the following, we display 1 random movement of the arm (but you can run it again and again), and 1 movement tuned to reach the tool and move the blue ball with the tool (as a video).
The probability of getting the tool and moving the blue ball with a random movement (in 21D) is around 1 out of 10000.
You can also change the parameters of the environment and try motor babbling again.
Step11: In the next block, we plot random motor parameters and the associated sensory feedback in a bar plot to visualize all the dimensions.
Warning
Step12: We remind the reader that the sensory space is composed of the trajectories of each item
Step13: Let's try the Random Goal Babbling exploration strategy in this environment
Step14: The red points are the points reached with the tool at the end of each movement that grabbed the tool, and the blue points are reached with the ball when it was caught.
The following image shows an example of exploration for 100000 iterations (10min) of Random Goal Babbling.
The exploration measure for the ball was 11 reached cells.
Step15: Here there are 36 sensory dimensions, and Goal Babbling chooses random goals in this space, where most of the space is unreachable (or uncontrollable
Step16: Active Model Babbling has better explored the tool and object spaces than Random Goal Babbling.
In the following figure we plot the interest of each module along time steps in the previous run.
Step17: The interest of the static ball (s_ball2) is always 0.
The interest of random balls (s_ball3 and 4) is small.
The interest of the hand is high at the beginning and allows the agent to explore movements of the hand and to discover the tool (no other motor babbling).
In the following images, the same results are shown for 100000 iterations / 20000 steps (10min).
The number of cells reached with the ball was 35.
Step18: We finally show exploration results with more trials of each algorithm.
We run 30 trials for each, and plot the mean exploration against iterations, with standard errors. | Python Code:
import matplotlib
matplotlib.use('TkAgg')
from utils import *
Explanation: Study of Exploration Algorithms in developmental robotics with Explauto
This tutorial explains and implements some exploration algorithms used in developmental robotics.
The goal is to get used to the concepts of Goal Babbling [Oudeyer and Kaplan, 2007], [Moulin-Frier and Oudeyer, 2013], Active Learning based on learning progress [Baranes and Oudeyer, 2013], and Model Babbling [Forestier and Oudeyer, 2016a], [Forestier and Oudeyer, 2016b], by experimenting with those strategies in simple environments composed of a robotic arm interacting with objects.
We use the Explauto Python library as a toolkit to help implement exploration algorithms.
The progression is the following:
I. Exploring by hand the movements of a robotic arm
II. Random Motor Babbling
III. Random Goal Babbling
IV. Active Goal Babbling
V. More complex environment
VI. Active Model Babbling
Requirements:
Python 2.7, Numpy, Scipy >= 0.16, Scikit-learn, Jupyter. For users not familiar with Python, installing Anaconda for Python 2.7 includes all of them.
Explauto, the Python Library for Autonomous Exploration. See installation section: the simplest is to install explauto via pip: 'pip install explauto'.
You might need to run 'jupyter nbextension enable --py --sys-prefix widgetsnbextension' to enable the interaction with sliders in part I.
To run this notebook, run the following command in the ExplorationAlgorithm folder:
jupyter notebook main.ipynb
Let's begin with the import of all the classes and functions we will need in this notebook.
End of explanation
# Had to run 'jupyter nbextension enable --py --sys-prefix widgetsnbextension'
fig, ax = plt.subplots()
environment1 = ArmBall()
def movement(m1=0., m2=0., m3=0., m4=0., m5=0., m6=0., m7=0., m8=0., m9=0.):
environment1.update(array([m1, m2, m3, m4, m5, m6, m7, m8, m9]))
display_movement(fig, ax, environment1)
interact_manual(movement, m1=(-1., 1., 0.01), m2=(-1., 1., 0.01), m3=(-1., 1., 0.01),
m4=(-1., 1., 0.01), m5=(-1., 1., 0.01), m6=(-1., 1., 0.01),
m7=(-1., 1., 0.01), m8=(-1., 1., 0.01), m9=(-1., 1., 0.01))
Explanation: I. Exploring by hand the movements of a robotic arm
This first part is designed to have a first experience of the complexity of the task given to the robot, by the exploration of the motor parameters by hand.
The 3 joints of the robotic arm are controlled through DMPs with 3 basis functions, which results in 9 motor parameters in total that produce a movement of the arm. You can control those parameters by hand through the 9 following sliders (move the sliders and then hit the "Run" button).
Try to catch the ball !
End of explanation
# Parameters to change:
n_joints = 7 # Number of joints
n_dmp_basis = 3 # Number of basis per joint
# Definition of the environment with number of joints and DMP basis per joint
environment2 = ArmBall(n_joints, n_dmp_basis)
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-1., 1.))
ax.set_ylim((-1., 1.))
# Plot a random position of the arm
environment2.env.update(environment2.env.random_motors()[0])
environment2.env.plot(ax, 0)
Explanation: II. Random Motor Babbling
Now, let's use a higher dimensional arm (e.g. 7 joints and 3 DMP basis per joint so 21 motor parameters in total).
You can play with those parameters in the following block.
End of explanation
# Parameters to change:
iterations = 2000 # Number of iterations
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-1., 1.))
ax.set_ylim((-1., 1.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_mb = []
for _ in range(iterations):
m = environment2.random_motor() # sample a random motor command m
s = environment2.update(m) # observe the sensory effect s=(x, y): the last position of the ball
if abs(s[-1] - 0.6) > 0.001: # if the ball has been moved by the arm, we plot its trajectory and last position
explored_s_mb += [s] # store s for later evaluation
ax.plot(environment2.s_traj[:,0], environment2.s_traj[:,1], lw=2, alpha=0.3)
ax.add_patch(Circle(tuple(environment2.s_traj[-1,:]), 0.1, fc="b", alpha=0.2))
plt.xticks(linspace(-1., 1., grid_size + 1))
plt.yticks(linspace(-1., 1., grid_size + 1))
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.grid()
Explanation: We first implement the Random Motor Babbling strategy, where the robot explores the dynamics of its arm by just trying random motor configurations.
In the following, we do 2000 iterations of Random Motor Babbling and, for each iteration, if the ball was caught by the arm, we plot the trajectory of the ball (with a curve) and its last position (ball).
End of explanation
print "Number of ball catch:", len(explored_s_mb)
print "Measure of exploration:", int(compute_explo(array(explored_s_mb), array([-1., -1.]), array([1., 1.]), gs=grid_size))
Explanation: Exploration measure
We compute the number of cells in the above grid where the ball has been placed at the end of the different movements.
End of explanation
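The compute_explo helper used above comes from the tutorial's utils module. As an illustration only (an assumed implementation for intuition, not the actual utils code), a grid-coverage count like this can be written with plain numpy:
import numpy as np

def count_reached_cells(points, mins, maxs, gs):
    # Count the cells of a gs x gs grid over [mins, maxs] that contain at
    # least one of the reached points (a simple measure of exploration).
    points = np.asarray(points)
    if points.size == 0:
        return 0
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=gs,
                             range=[[mins[0], maxs[0]], [mins[1], maxs[1]]])
    return int(np.count_nonzero(H))

# e.g. count_reached_cells(explored_s_mb, [-1., -1.], [1., 1.], grid_size)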
# Parameters to change:
iterations = 2000 # Number of iterations
n_joints = 7 # Number of joints
n_dmp_basis = 3 # Number of basis per joint
goal_size = 10. # Size of the 2D goal space
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2., 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
goals_s_gb = []
explored_s_gb = []
# Definition of the environment
environment2 = ArmBall(n_joints, n_dmp_basis, goal_size)
# Initialization of the sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment2.conf, 'nearest_neighbor', 'default')
# Bootstrap sensorimotor model: 1 random motor babbling
m = environment2.random_motor()
s = environment2.update(m)
sm_model.update(m, s)
for _ in range(iterations):
if (not sm_model.bootstrapped_s) or (random() < 0.2):
# Do random motor babbling while the ball has not been grasped, and then in 20% of the trials
m = environment2.random_motor()
else:
# Sample a random goal in the sensory space:
s_goal = rand_bounds(environment2.conf.s_bounds)[0]
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm:
m = sm_model.model.infer_order(tuple(s_goal))
# Add exploration noise (of variance sigma) to experiment new motor parameters:
m = normal(m, sigma_explo_ratio)
# Plot the goal in red:
ax.plot([s_goal[0]], [s_goal[1]], 'or', alpha=0.1)
goals_s_gb += [s_goal]
s = environment2.update(m) # observe the sensory effect s=(x, y): the last position of the ball
sm_model.update(m, s) # update sensorimotor model
if abs(s[-1] - 0.6) > 0.001: # if the ball has been moved by the arm, we plot its trajectory and last position
explored_s_gb += [s] # store s for later evaluation
ax.plot(environment2.s_traj[:,0], environment2.s_traj[:,1], lw=2, alpha=0.1)
ax.add_patch(Circle(tuple(environment2.s_traj[-1,:]), 0.1, fc="b", alpha=0.2))
print "Number of ball catch:", len(explored_s_gb)
print "Measure of exploration:", int(compute_explo(array(explored_s_gb), array([-1., -1.]), array([1., 1.]), gs=grid_size))
Explanation: Using 7 joints and 3 DMP basis per joint, we can see that the ball has been caught only a few times and that a small number of cells have been reached with the ball.
If we increase the number of joints, it becomes even harder to randomly grab the ball.
If we increase the number of DMP basis, the movement lasts longer and has more chances to grab the ball.
III. Random Goal Babbling
Goal babbling consists in choosing goals in the sensory space and using the current state of the model of the environment (the sensorimotor model) to infer a motor action supposed to reach the goals (inverse prediction). The robot then executes the command through the environment and observes the actual sensory effect. The sensorimotor model is finally updated according to this experience.
In the Explauto library, there are different algorithms available to perform inverse prediction from the database of sensorimotor experience.
Here we will use the Nearest Neighbor algorithm. This sensorimotor model simply stores sensorimotor experience, i.e. (m, s) pairs where m are motor commands (here arm joint positions) and s are the corresponding sensory effects (here ball end positions).
When asked for an inverse prediction to reach a sensory goal s, it returns the motor command m associated to the nearest neighbor of s in the stored sensory experience, and adds some exploration noise (gaussian with standard deviation 'sigma_explo_ratio').
In Random Goal Babbling, the goals are chosen randomly in the sensory space, thus the bounds of the space are important.
Here the arm can reach the circle of radius 1, and we define the sensory space as the square of size 10, meaning that a lot of this space is unreachable. When the robot chooses an unreachable goal, it will explore around the closest reached points.
End of explanation
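To make the nearest-neighbor inverse prediction concrete, here is a minimal standalone sketch using plain scikit-learn (an illustration of the idea only; Explauto's NearestNeighbor model and its actual API may differ):
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_inverse_prediction(M, S, s_goal, sigma_explo_ratio=0.05):
    # M: stored motor commands, S: the corresponding sensory effects.
    # Return the motor command associated with the nearest stored effect to
    # s_goal, perturbed by Gaussian exploration noise.
    nn = NearestNeighbors(n_neighbors=1).fit(np.asarray(S))
    _, idx = nn.kneighbors(np.asarray(s_goal).reshape(1, -1))
    m = np.asarray(M)[idx[0, 0]]
    return np.random.normal(m, sigma_explo_ratio)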
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-goal_size, goal_size))
ax.set_ylim((-goal_size, goal_size))
ax.plot(0., 0., 'sk', ms=8)
for s_goal, s in zip(goals_s_gb, explored_s_gb):
ax.add_patch(Circle(tuple(s), 0.1, fc="b", alpha=0.2))
ax.plot([s_goal[0]], [s_goal[1]], 'or', alpha=0.1)
Explanation: The previous figure is zoomed in to better see the reachable part of the sensory space.
Random Goal Babbling allows a more efficient covering of the reachable sensory space than with the motor babbling strategy (when the agent samples directly in the motor space).
In the next plot, we zoom out to plot the entire sensory space (of size goal_size).
End of explanation
# Parameters to change:
iterations = 2000 # Number of iterations
n_joints = 7 # Number of joints
n_dmp_basis = 3 # Number of basis per joint
goal_size = 10. # Size of the 2D goal space
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2, 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_agb = []
# Definition of the environment
environment2 = ArmBall(n_joints, n_dmp_basis, goal_size)
# Initialization of the sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment2.conf, 'nearest_neighbor', 'default')
# Initialization of the interest model
im_model = DiscretizedProgress(environment2.conf, environment2.conf.s_dims,
**{'x_card': 20*20, # 20 is the number of cells on each dimension
'win_size': 20, # window size parameter (ws)
'eps_random': 0.2, # proportion of random choice of cell
'measure': competence_dist})
# Bootstrap model: 1 random motor babbling
m = environment2.random_motor()
s = environment2.update(m)
sm_model.update(m, s)
for _ in range(iterations):
if (not sm_model.bootstrapped_s) or random() < 0.2:
# Do random motor babbling while the ball has not been grasped, and then in 20% of the trials
m = environment2.random_motor()
s = environment2.update(m)
sm_model.update(m, s)
else:
# Sample a sensory goal maximizing learning progress using the interest model:
s_goal = im_model.sample()
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm:
m = sm_model.model.infer_order(tuple(s_goal))
# Add exploration noise (of variance sigma) to experiment new motor parameters:
m = normal(m, sigma_explo_ratio)
# Execute this command and observe the corresponding sensory effect:
s = environment2.update(m)
# Update the sensorimotor model:
sm_model.update(m, s)
# Update the interest model:
im_model.update(hstack((m, s_goal)), hstack((m, s)))
# Plot the goals in red:
ax.plot([s_goal[0]], [s_goal[1]], 'or', alpha=0.1)
if abs(s[-1] - 0.6) > 0.001: # if the ball has been moved by the arm, we plot its trajectory and last position
explored_s_agb += [s] # store s for later evaluation
ax.plot(environment2.s_traj[:,0], environment2.s_traj[:,1], lw=2, alpha=0.1)
ax.add_patch(Circle(tuple(environment2.s_traj[-1,:]), 0.1, fc="b", alpha=0.2))
plt.xticks(linspace(-2., 2., 5))
plt.yticks(linspace(-2., 2., 5))
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.grid()
print "Number of ball catch:", len(explored_s_agb)
print "Measure of exploration:", int(compute_explo(array(explored_s_agb), array([-1., -1.]), array([1., 1.]), gs=grid_size))
Explanation: IV. Active Goal Babbling
In the previous experiment, we have seen that the goals are randomly sampled in the goal space, even if only a small part of this space is reachable.
The agent could monitor how well it is achieving its goals and try to give itself goals that are reachable, but also not too easy.
We call this strategy Active Learning.
In order to monitor its performance on different goals, the goal space is discretized into small cells.
For each goal in a cell, the agent computes the error (distance between goal and reached point) to reach the goal.
If the reached point is not in the same cell as the goal, then the error is considered maximal.
Then, the derivative of these errors gives a measure of progress in each cell.
Here is the computation of the interest of a cell given the 20 (window size ws=20) last distance errors in this cell (from $cell.error_1$ to $cell.error_{ws}$):
$Interest~(cell) = \Big| \frac{1}{ws/2} \sum_{i=1}^{ws/2} cell.error_i \; - \; \frac{1}{ws/2} \sum_{i=ws/2+1}^{ws} cell.error_i \Big|$
The newly reached cells also get a novelty bonus to be explored for $ws$ iterations.
This implementation is an adapted version of the SAGG-RIAC algorithm, with a fixed discretization of the sensory space, and a modification of the computation of the progress.
For instance, if the goals in a cell are unreachable, then the error is always maximal in that cell, and the progress is 0.
Also, if the cell is reachable and already well explored, then the error is always low, and the progress is low.
However, if the error decreases in a given cell, that means the agent is making progress to reach its goals, so the progress is high.
The interest is defined as the absolute value of the progress, so that if the error increases (for instance if a ball has changed its behavior so the sensorimotor model has to be re-explored in some cells), then the progress is negative but the interest is positive.
Given this measure of progress (or interest), the agent chooses at each iteration a cell where the progress is high (but also explores random cells with a small probability), and then a random goal in this cell.
In the Explauto framework, the class 'DiscretizedProgress' already implements this algorithm, and the object monitoring the progress is called an 'Interest Model'. We will use the methods 'sample' to pick a cell and a random point in the cell, and 'update' to give back the information about the reached point so that error and progress can be computed.
End of explanation
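As an illustration of the interest measure defined above, here is a minimal numpy sketch (an assumed computation for intuition, not the code of Explauto's DiscretizedProgress class):
import numpy as np

def cell_interest(errors, ws=20):
    # errors: the ws most recent goal-reaching errors recorded in this cell
    e = np.asarray(errors[-ws:], dtype=float)
    half = len(e) // 2
    # interest = |mean error of the first half - mean error of the second half|
    return abs(e[:half].mean() - e[half:].mean())

print cell_interest(np.linspace(1., 0.2, 20))  # decreasing errors -> high interest
print cell_interest([1.] * 20)                 # constant errors -> interest ~ 0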
progress_flat = abs(im_model.discrete_progress.progress())
progress_array = zeros(im_model.space.cardinalities)
for idx in range(len(progress_flat)):
progress_array[im_model.space.index2multi(idx)] = progress_flat[idx]
fig, ax = plt.subplots()
ax.set_xticklabels([])
ax.set_yticklabels([])
cax = ax.imshow(transpose(progress_array), origin='lower', cmap=cm.jet, interpolation="none")
fig.colorbar(cax)
plt.xticks(linspace(0., 19., 21))
plt.yticks(linspace(0., 19., 21))
ax.grid()
Explanation: The previous figure is zoomed in and shows only 16 of the 400 interest cells.
We can see here that the goals are not sampled uniformly in the bounds but in small cells, depending on their progress.
Let's plot a snapshot of the current progress (at iteration 2000) in each cell of the discretization.
The 2D space of the next plot is of size 'goal_size=10.', so the region of radius 1 that the arm can cover is small, in the middle.
End of explanation
# !! BEFORE RUNNING THIS CELL, RESTART KERNEL NOW to recover TkAgg backend
import matplotlib
matplotlib.use('TkAgg')
from utils import *
# Pop a separate window
fig, ax = plt.subplots()
# Definition of the environment
environment3 = ArmStickBalls(n_joints=7, # Number of joints
n_dmp_basis=3, # Number of basis per joint
goal_size=2., # Size of goal space
stick_handle_tol=0.05, # Maximal distance to grab the stick with the hand
stick_length=0.30, # Stick length,
ball_size=0.10, # Maximal distance to grab the ball with the stick
random_ball_noise=0.20 # Random balls Gaussian noise amplitude
)
Explanation: The progress is 0 in unreachable cells, and can be high in reached cells, in the middle.
V. More complex environment
In the previous environments, the exploration of the sensory variables led to interesting learning, but in more realistic environments that might not be the case. We define here a similar environment with objects that are not interesting to interact with, either because they are too easy to learn, or because they are not learnable.
The environment is composed of the same arm with 21 parameters plus a stick that can be used as a tool to move a blue ball (the ball can't be grasped by the hand anymore).
We also add a static magenta ball that can't be moved in any way, and 2 random balls that do a random walk.
The motor space is the same as before (21D), and the sensory space is composed of the trajectories of each item: the arm, the tool and the 3 balls.
There are 6 dimensions for each item (so 36D), representing the 2D position of the item at 3 time points along the movement duration.
End of explanation
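A short sketch of how such a 36-D sensory vector can be sliced back into per-object trajectories. The block-per-object, x-then-y layout assumed here is consistent with the indices used later in this notebook (s[14] and s[17] as the final x and y of the blue ball), but the exact ordering remains an assumption about the environment's implementation:
import numpy as np

def split_sensory_vector(s, n_objects=6, n_timepoints=3):
    # Assumed layout: 6 values per object, ordered (x_t1, x_t2, x_t3, y_t1, y_t2, y_t3)
    s = np.asarray(s).reshape(n_objects, 2, n_timepoints)
    return s  # s[obj, 0, :] = x trajectory of the object, s[obj, 1, :] = y trajectory

# Final (x, y) of the blue ball (object index 2): s[2, 0, -1], s[2, 1, -1]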
# 1 iteration of Random Motor Babbling
m = environment3.random_motor()
s = environment3.update(m)
display_movement(fig, ax, environment3, time_step=0.1)
Explanation: In the following, we display 1 random movement of the arm (but you can run it again and again), and 1 movement tuned to reach the tool and move the blue ball with the tool (as a video).
The probability of getting the tool and moving the blue ball with a random movement (in 21D) is around 1 out of 10000.
You can also change the parameters of the environment and try motor babbling again.
End of explanation
%matplotlib inline
fig, axes = plt.subplots(1,2)
m = environment3.random_motor()
s = environment3.update(m)
axes[0].bar(range(environment3.conf.m_ndims), m)
axes[0].set_xlabel('Motor Dimensions')
axes[1].bar(range(environment3.conf.s_ndims), s)
axes[1].set_xlabel('Sensory Dimensions')
Explanation: In the next block, we plot random motor parameters and the associated sensory feedback in a bar plot to visualize all the dimensions.
Warning: we go back to the inline backend.
End of explanation
#m = [ 0.54984069, 0.02455233, -0.59603529, -0.6655813, 0.33923315, 0.22605509, 0.81641894, -0.26901228, 0.14363993, -0.71148838, 0.10504532, 0.58099297, -0.80227669, 0.48208766, -0.56107818, -0.68042166, 0.9648434, -0.8760082, -0.19469749, -0.6141873, 0.11389673]
#s = environment3.update(m)
#display(HTML(environment3.plot(fig, ax)))
HTML('<video width="600" height="400" controls> <source src="./environment3.mp4" type="video/mp4"> </video>')
Explanation: We remind the reader that the sensory space is composed of the trajectories of each item: the arm, the tool and the 3 balls.
There are 6 dimensions for each item (so 36D), representing the 2D position of the item at 3 time points along the movement duration.
Here is an example of movement to catch the blue ball (around 1 out of 10000 random motor parameters):
End of explanation
# Parameters to change:
iterations = 20000 # Number of iterations
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2, 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_agb_t = []
explored_s_agb_o = []
# Initialization of the sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment3.conf, 'nearest_neighbor', 'default')
# Bootstrap sensorimotor model: 1 random motor babbling
m = environment3.random_motor()
s = environment3.update(m)
sm_model.update(m, s)
for iteration in range(iterations):
# Print number of iterations up to now:
if (iteration+1) % 100 == 0:
clear_output(wait=True)
print "Iteration:", iteration+1
# Compute the interest of modules
if (not sm_model.bootstrapped_s) or random() < 0.2:
# Do random motor babbling while the ball has not been grasped, and then in 20% of the trials
m = environment3.random_motor()
else:
# Sample a random goal in the sensory space:
s_goal = rand_bounds(environment3.conf.s_bounds)[0]
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm:
m = sm_model.model.infer_order(tuple(s_goal))
# Add exploration noise (of variance sigma) to experiment new motor parameters:
m = normal(m, sigma_explo_ratio)
s = environment3.update(m) # observe the sensory effect s (36D): the trajectory of all objects
sm_model.update(m, s) # update sensorimotor model
if abs(s[17] - 0.6) > 0.001:
explored_s_agb_o += [[s[14], s[17]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,4:6]), 0.1, fc="b", alpha=0.3))
if abs(s[11] - 0.46213203) > 0.001:
explored_s_agb_t += [[s[8], s[11]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,2:4]), 0.05, fc="r", alpha=0.1))
#plt.savefig('explored_RGB_100000it')
print "Number of tool catch:", len(explored_s_agb_t)
print "Measure of exploration of tool:", int(compute_explo(array(explored_s_agb_t), array([-2., -2.]), array([2., 2.]), gs=grid_size))
print "Number of ball catch:", len(explored_s_agb_o)
print "Measure of exploration of interesting ball:", int(compute_explo(array(explored_s_agb_o), array([-2., -2.]), array([2., 2.]), gs=grid_size))
Explanation: Let's try the Random Goal Babbling exploration strategy in this environment:
End of explanation
Image('./explored_RGB_100000it.png')
Explanation: The red points are the points reached with the tool at the end of each movement that grabbed the tool, and the blue points are reached with the ball when it was caught.
The following image shows an example of exploration for 100000 iterations (10min) of Random Goal Babbling.
The exploration measure for the ball was 11 reached cells.
End of explanation
# Parameters to change:
iterations = 20000 # Number of iterations
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2, 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_agb_t = []
explored_s_agb_o = []
interests_evolution = []
n_explore = 4
def plot_s(s):
global explored_s_agb_t, explored_s_agb_o
if abs(s[17] - 0.6) > 0.001:
explored_s_agb_o += [[s[14], s[17]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,4:6]), 0.1, fc="b", alpha=0.3))
if abs(s[11] - 0.46213203) > 0.001:
explored_s_agb_t += [[s[8], s[11]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,2:4]), 0.05, fc="r", alpha=0.1))
# Define motor and sensory spaces:
m_ndims = environment3.conf.m_ndims # number of motor parameters
m_space = range(m_ndims)
s_hand = range(m_ndims, m_ndims+6)
s_tool = range(m_ndims+6, m_ndims+12)
s_ball1 = range(m_ndims+12, m_ndims+18)
s_ball2 = range(m_ndims+18, m_ndims+24)
s_ball3 = range(m_ndims+24, m_ndims+30)
s_ball4 = range(m_ndims+30, m_ndims+36)
# Create the 6 learning modules:
learning_modules = {}
learning_modules['mod1'] = LearningModule("mod1", m_space, s_hand, environment3.conf)
learning_modules['mod2'] = LearningModule("mod2", m_space, s_tool, environment3.conf)
learning_modules['mod3'] = LearningModule("mod3", m_space, s_ball1, environment3.conf)
learning_modules['mod4'] = LearningModule("mod4", m_space, s_ball2, environment3.conf)
learning_modules['mod5'] = LearningModule("mod5", m_space, s_ball3, environment3.conf)
learning_modules['mod6'] = LearningModule("mod6", m_space, s_ball4, environment3.conf)
# Steps of (4 exploring and 1 exploiting iterations):
for step in range(iterations / (n_explore + 1)):
# Print number of iterations up to now:
if (step+1) % 20 == 0:
clear_output(wait=True)
print "Iteration:", (step+1) * (n_explore + 1)
# Compute the interest of modules
interests = [learning_modules[mid].interest() for mid in learning_modules.keys()]
interests_evolution.append(interests)
# Choose the babbling module (probabilities proportional to interests, with epsilon of random choice):
babbling_module = learning_modules.values()[prop_choice(interests, eps=0.2)]
# The babbling module picks a random goal in its sensory space and returns 4 noisy motor commands:
m_list = babbling_module.produce(n=n_explore)
for m in m_list:
s = environment3.update(m) # execute this command and observe the corresponding sensory effect
plot_s(s) # plot the observed sensory effect
# Update each sensorimotor models:
for mid in learning_modules.keys():
learning_modules[mid].update_sm(m, learning_modules[mid].get_s(array(list(m) + list(s))))
# Choose the best motor command to reach current goal (with no noise):
m = babbling_module.infer(babbling_module.expl_dims, babbling_module.inf_dims, babbling_module.x, n=1, explore=False)
s = environment3.update(m) # execute this command and observe the corresponding sensory effect
# Update the interest of the babbling module:
babbling_module.update_im(m, babbling_module.get_s(array(list(m)+list(s))))
# Update each sensorimotor models:
for mid in learning_modules.keys():
learning_modules[mid].update_sm(m, learning_modules[mid].get_s(array(list(m) + list(s))))
#plt.savefig('explored_AMB_100000it')
for mid in learning_modules.keys():
print "Number of babbling steps for module", mid, ":", learning_modules[mid].im.n_points(), "{}%".format(int(100. * (n_explore + 1)*learning_modules[mid].im.n_points() / float(learning_modules[mid].sm.t)))
print
print "Number of tool catch:", len(explored_s_agb_t)
print "Measure of exploration of tool:", int(compute_explo(array(explored_s_agb_t), array([-2., -2.]), array([2., 2.]), gs=grid_size))
print "Number of ball catch:", len(explored_s_agb_o)
print "Measure of exploration of interesting ball:", int(compute_explo(array(explored_s_agb_o), array([-2., -2.]), array([2., 2.]), gs=grid_size))
Explanation: Here there are 36 sensory dimensions, and Goal Babbling chooses random goals in this space, where most of the space is unreachable (or uncontrollable: the random balls). This strategy thus fail to explore interesting parts of the sensory space, and only few cells are reached with the controllable ball.
Active Goal Babbling can't directly help here in 36 dimensions because a discretization of this space would have way too much cells.
Therefore, we need to change the representation of this space in order to discriminate the variables that are controllable from random variables, and to define smaller-dimensional but interesting sensory spaces.
VI. Active Model Babbling
Instead of learning a sensorimotor model that maps the motor space to the whole sensory space (mixing learnable and random variables), we define here Active Model Babbling as the parallel exploration of several sensorimotor models based on a measure of their progress.
As the 36D sensory space represents the trajectory of 6 objects in the environment (hand, tool, 4 balls), we define 6 sensory subspaces corresponding to the variables of those objects. For each of those objects, the agent learns a sensorimotor model that maps the motor space to the corresponding sensory subspace.
The 6 sensorimotor mappings are learnt with the previously defined Goal Babbling strategy, choosing goals in the sensory subspace and trying to reach them given the current knowledge in the corresponding sensorimotor mapping.
For example, if at a given iteration the agent explores the mapping between the motor space and the sensory space of the hand, it learns to move the hand independently of the other objects. However, if the exploration of the trajectory of the hand produced a movement where the tool was caught, then the sensorimotor mapping between the motor space and the sensory subspace of the tool is also updated. Exploring a sensory subspace can thus lead to progress in the other subspaces.
At each iteration, a model (or sensory subspace) to explore is chosen (thus the name Model Babbling), and then a random goal is chosen in this sensory subspace. The agent now does 5 iterations with this goal: 4 adding gaussian exploration noise, and 1 without noise. This last iteration allows to reliably measure the progress made to reach the goal.
In our implementation of Active Model Babbling, at each iteration the agent chooses the model to explore in a probabilistic way: in 20% of the iterations the model is chosen randomly, and in the other iterations the model is chosen with a probability proportional to its learning progress.
End of explanation
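The prop_choice helper used in the loop above is provided by the tutorial's utils module; a possible implementation of this proportional choice with an epsilon of random selection could look like the following sketch (an assumption, not the actual utils code):
import numpy as np

def prop_choice_sketch(interests, eps=0.2):
    # With probability eps pick a random module; otherwise pick a module with
    # probability proportional to its interest (uniform if all interests are 0).
    interests = np.asarray(interests, dtype=float)
    if np.random.random() < eps or interests.sum() <= 0.:
        return np.random.randint(len(interests))
    return np.random.choice(len(interests), p=interests / interests.sum())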
fig, ax = plt.subplots()
ax.plot(array(interests_evolution), lw=2)
ax.legend(["s_hand", "s_tool", "s_ball1", "s_ball2", "s_ball3", "s_ball4"], ncol=3)
ax.set_xlabel('Time steps', fontsize=20)
ax.set_ylabel('Learning progress', fontsize=20)
#plt.savefig('learning_progress_AMB_100000it')
Explanation: Active Model Babbling has better explored the tool and object spaces than Random Goal Babbling.
In the following figure we plot the interest of each module along time steps in the previous run.
End of explanation
display(HTML("<img style='width: 400px; margin: 0px; float: left; border: 1px solid black;' src='./learning_progress_AMB_100000it.png' /> <img style='width: 400px; margin: 0px; float: left; border: 1px solid black;' src='./explored_AMB_100000it.png' />"))
Explanation: The interest of the static ball (s_ball2) is always 0.
The interest of random balls (s_ball3 and 4) is small.
The interest of the hand is high at the beginning and allows the agent to explore movements of the hand and to discover the tool (no other motor babbling).
In the following images, the same results are shown for 100000 iterations / 20000 steps (10min).
The number of cells reached with the ball was 35.
End of explanation
Image('./exploration_stats.png')
Explanation: We finally show exploration results with more trials of each algorithm.
We run 30 trials for each, and plot the mean exploration against iterations, with standard errors.
End of explanation |
243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyplearnr demo
Here I demonstrate pyplearnr, a wrapper for building/training/validating scikit learn pipelines using GridSearchCV or RandomizedSearchCV.
Quick keyword arguments give access to optional feature selection (e.g. SelectKBest), scaling (e.g. standard scaling), use of feature interactions, and data transformations (e.g. PCA, t-SNE) before being fed to a classifier/regressor.
After building the pipeline, data can be used to perform a nested (stratified if classification) k-folds cross-validation and output an object containing data from the process, including the best model.
Various default pipeline step parameters for the grid-search are available for quick iteration over different pipelines, with the option to ignore/override them in a flexible way.
This is an on-going project that I intend to update with more models and pre-processing options and also with corresponding defaults.
Titanic dataset example
Here I use the Titanic dataset I've cleaned and pickled in a separate tutorial.
Import data
Step1: By "cleaned" I mean I've derived titles (e.g. "Mr.", "Mrs.", "Dr.", etc) from the passenger names, imputed the missing Age values using polynomial regression with grid-searched 10-fold cross-validation, filled in the 3 missing Embarked values with the mode, and removed all fields that could be considered an id for that individual.
Thus, there is no missing/null data.
Set categorical features as type 'category'
In order to one-hot encode categorical data, it's best to set the features that are considered categorical
Step2: One-hot encode categorical features
Step3: Now we have 17 features.
Split into input/output data
Step4: Null model
Step5: Thus, the null accuracy is ~62% if we always predict death.
Import pyplearnr and initialize optimized pipeline collection
Step6: Regression
Step7: KNN with and without pre-processing and various options
Basic KNN
Here we do a K-nearest neighbors (KNN) classification with stratified 10-fold (default) cross-validation with a grid search over the default of 1 to 30 nearest neighbors and the use of either "uniform" or "distance" weights
Step8: Note the default OptimizedPipeline parameters and those for its fit() method.
The OptimizedPipeline class contains all of the data associated with the nested stratified k-folds cross-validation.
After use of the fit() method, this includes the data, its test/train splits (based on the test_size percentage keyword argument), the GridSearchCV or RandomizedGridSearchCV object, the Pipeline object that has been retrained using all of the data with the best parameters, test/train scores, and validation metrics/reports.
A report can be printed immediately after the fit by setting the suppress_output keyword argument to False.
Printing the OptimizedPipeline instance also shows the report
Step9: The report lists the steps in the pipeline, their optimized settings, the test/training accuracy (or L2 regression score), the grid search parameters, and the best parameters.
If the estimator used is a classifier it also includes the confusion matrix, normalized confusion matrix, and a classification report containing precision/recall/f1-score for each class.
Turns out that the best settings for this optimized pipeline are 12 neighbors and the use of the 'uniform' weight.
Note how I've set the random_state keyword argument to 6 so that the models can be compared using the same test/train split.
Default pipeline step grid parameters
The default parameters to grid-search over for k-nearest neighbors are 1 to 30 neighbors and either the 'uniform' or 'distance' weight.
The defaults for the pre-processing steps, classifiers, and regressors can be viewed by using the get_default_pipeline_step_parameters() method with the number of features as the input
Step10: KNN with custom pipeline step grid parameters
These default parameters can be ignored by setting the use_default_param_dist keyword argument to False.
The param_dist keyword argument can be used to override specific default parameters (if use_default_param_dist is set to True) or as the sole source of parameters (if use_default_param_dist is set to False).
Here is a demonstration of generation of default parameters with those in param_dist being overridden
Step11: Note how the n_neighbors parameter was 30 to 499 instead of 1 to 30.
Here's an example of only using param_dist for parameters
Step12: Note how the estimator__weights parameter isn't set for the KNN estimator.
KNN with scaling
The currently supported scaling options are standard, normal, min-max, and binary using scikit-learn's StandardScaler, Normalizer, MinMaxScaler, and Binarizer, respectively. These are set by the pipeline initialization kwarg 'scale_type' like this
Step13: Let's compare the pipelines so far
Step14: Binary scaling fed into a KNN classifier appears to have the best training score.
KNN with custom min-max and binary scaling settings
MinMaxScaler scales each feature value to between 0 and 1 by default. Different scaling ranges can be gridded over by setting the 'scaler__feature_range' keyword argument in param_dist.
Binarizer sets each value to 0 or 1 depending on a threshold. The default for pyplearnr is 0.5. This can be changed by setting 'scaler__threshold' using param_dist.
Here is an example of setting both
Step15: Switching the range for min_max scaling boosted it to rank 1 for pipeline training scores
Step16: The range of 1 to 2 for the MinMaxScaler appeared to be the best.
KNN with feature selection using SelectKBest with f_classif
Currently only one form of feature selection, SelectKBest with f_classif, is supported. This is set using the 'feature_selection_type' keyword argument.
Step17: Feature selection and KNN had a mid-level training score
Step18: SelectKBest with f_classif chose 5 features as the best to use in the model.
The features selected by SelectKBest can be accessed normally, using the mask obtained from the get_support() method on the columns
Step19: Thus, Pclass 3, being male, and the titles Miss, Mr, and Mrs were considered the most important features by SelectKBest using f_classif.
Setting custom feature selection
The default grid for the number of selected features runs from 1 up to all of them. Different values can be gridded over by setting 'feature_selection__k' in param_dist
Step20: KNN using feature interactions
Feature products of different degrees can be used as additional features by setting the 'feature_interaction' OptimizedPipeline keyword argument to True
Step21: The optimal number of interactions (number of features multiplied by each other at once) was found to be 1.
KNN using custom number of feature interactions
The 'feature_interactions__degree' dictates the number of interactions. The default setting is to try no interactions (degree 1) and 2 interactions. Setting this in param_dist allows custom numbers
Step22: KNN with pre-processing transforms
Currently Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) are supported as pre-processing options.
KNN with PCA pre-processing
Step23: We can look at the transformed data after PCA normally
Step24: This is currently a very manual process and would be difficult with more and more processing steps. I'm thinking of automating this with a class containing all optimized pipelines in the future.
Any of the parameters displayed in the pipeline section of the report (iterated_power, random_state, whiten, n_components, etc) can be set in param_dist by 'transform__setting' as done previously.
KNN with t-SNE pre-processing
The t-SNE algorithm can be used as a pre-processing algorithm as well by setting the 'transform_type' keyword argument to 't-sne'
Step25: This t-SNE step takes longer than most in pyplearnr unfortunately. It also resulted in the worst score. I'll try to optimize this in the future.
Reducing the number of grid combinations
Setting the 'num_parameter_combos' fit() method keyword argument to an integer will limit the number of grid combinations to perform using RandomizedSearchCV instead of GridSearchCV
Step26: This is a good way to speed up computations and give you an idea as to how long a particular pipeline takes to train.
Here's the corresponding report
Step27: The best parameter combination, of those attempted by RandomizedSearchCV, was 12 nearest neighbors with the 'uniform' weight.
Other models
This code currently supports K-nearest neighbors, logistic regression, support vector machines, multilayer perceptrons, random forest, and adaboost
Step28: Logistic regression, random forest, multilayer perceptron, and adaboost outperform KNN, even with all of the attempted pre-processing so far.
Putting it all together
Different combinations of these options can be strung together simultaneously to iterate over multiple models
Step29: Out of 240 different possible pipelines, the best pipeline, with a test score of 0.899, appears to be min-max scaling between 0 and 1 funneled into PCA and then into a multilayer perceptron with one hidden layer of size 5.
It took roughly 3 hours to find it.
Predicting survival with the optimal model
All one has to do to make a prediction is use the .predict method of the pipeline in the .pipeline field.
Here's an example of predicting whether I would survive on the Titanic. I'm 32, would probably have one family member with me, might be Pclass1 (I'd hope), male, have a Ph.D (if that's what they mean by Dr.). I'm using the median Fare for Pclass 1 and randomly chose a city to have embarked from
Step30: Looks like I died!
Let's look at my predicted probability of surviving
Step31: I would have a 0.77% chance of survival.
Summary
I've shown how to use pyplearnr to try out 240 different pipeline combinations validated with stratified 10-fold cross-validation using a combination of simple keyword arguments with some additional customization options. Also, I've shown how to access the model parameters, predict survival, and check the actual predicted probability according to the optimized pipeline.
Please let me know if you have any questions or suggestions about how to improve this tool, my code, the approach I'm taking, etc. | Python Code:
import pandas as pd
df = pd.read_pickle('trimmed_titanic_data.pkl')
df.info()
Explanation: pyplearnr demo
Here I demonstrate pyplearnr, a wrapper for building/training/validating scikit learn pipelines using GridSearchCV or RandomizedSearchCV.
Quick keyword arguments give access to optional feature selection (e.g. SelectKBest), scaling (e.g. standard scaling), use of feature interactions, and data transformations (e.g. PCA, t-SNE) before being fed to a classifier/regressor.
After building the pipeline, data can be used to perform a nested (stratified if classification) k-folds cross-validation and output an object containing data from the process, including the best model.
Various default pipeline step parameters for the grid-search are available for quick iteration over different pipelines, with the option to ignore/override them in a flexible way.
This is an on-going project that I intend to update with more models and pre-processing options and also with corresponding defaults.
Titanic dataset example
Here I use the Titanic dataset I've cleaned and pickled in a separate tutorial.
Import data
End of explanation
simulation_df = df.copy()
categorical_features = ['Survived','Pclass','Sex','Embarked','Title']
for feature in categorical_features:
simulation_df[feature] = simulation_df[feature].astype('category')
simulation_df.info()
Explanation: By "cleaned" I mean I've derived titles (e.g. "Mr.", "Mrs.", "Dr.", etc) from the passenger names, imputed the missing Age values using polynomial regression with grid-searched 10-fold cross-validation, filled in the 3 missing Embarked values with the mode, and removed all fields that could be considered an id for that individual.
Thus, there is no missing/null data.
Set categorical features as type 'category'
In order to one-hot encode categorical data, it's best to set the features that are considered categorical:
End of explanation
simulation_df = pd.get_dummies(simulation_df,drop_first=True)
simulation_df.info()
Explanation: One-hot encode categorical features
End of explanation
# Set output feature
output_feature = 'Survived_1'
# Get all column names
column_names = list(simulation_df.columns)
# Get input features
input_features = [x for x in column_names if x != output_feature]
# Split into features and responses
X = simulation_df[input_features].copy()
y = simulation_df[output_feature].copy()
Explanation: Now we have 17 features.
Split into input/output data
End of explanation
simulation_df['Survived_1'].value_counts().values/float(simulation_df['Survived_1'].value_counts().values.sum())
Explanation: Null model
End of explanation
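As a sanity check of the ~62% majority-class baseline, the same null model can be expressed with scikit-learn's DummyClassifier (a cross-check only, not part of the pyplearnr workflow):
from sklearn.dummy import DummyClassifier

# Always predict the most frequent class (death) and score on the full data
null_model = DummyClassifier(strategy='most_frequent').fit(X, y)
print "Null accuracy:", null_model.score(X, y)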
%matplotlib inline
%load_ext autoreload
import sys
import os
sys.path.append("./pyplearnr")
optimized_pipelines = {}
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_schematic = [
{'scaler': {
'none': {},
'standard': {},
'min_max': {},
'normal': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,31),
'weights': ['uniform','distance']
}}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='auc')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {0:59})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=59)
%autoreload
kfcv.plot_best_pipeline_scores()
%autoreload
kfcv.plot_contest(color_by='scaler', all_folds=True, legend_loc='center left')
%autoreload
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:6})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=8)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores(number_size=18, markersize=14)
%autoreload
%matplotlib inline
kfcv.plot_contest(number_size=8, markersize=7, all_folds=True, figsize=(10,40),
color_by='scaler', box_line_thickness=2)
kfcv.pipelines[29]
# cmap = pylab.cm.viridis
# print cmap.__doc__
worst_pipelines = [85, 67, 65, 84, 69, 83]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [86, 75, 84, 79, 85, 83]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [77, 61, 81, 83, 74, 82, 84]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
best_pipelines = [89, 93, 2, 91, 4, 3]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [91, 93, 5, 43, 4, 100]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [5, 4, 91, 3, 55, 49, 2]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'standard': {},
'normal': {},
'min_max': {},
'binary': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,30)
},
# 'svm': {
# 'C': np.array([1.00000000e+00])
# }
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='accuracy')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:24, 2:55})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=55)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores()
%autoreload
%matplotlib inline
kfcv.plot_contest()
best_pipelines = [91, 44, 89, 45, 3, 90]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [21, 18, 40, 38, 36, 35, 24]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [55, 39, 41, 42, 47, 40, 114, 110]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
%autoreload
kfcv.print_report()
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {2:18})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=18)
%autoreload
kfcv.print_report()
best_inner_fold_pipelines = {
2: 9
}
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = best_inner_fold_pipelines)
best_outer_fold_pipeline = 45
kfcv.fit(X.values, y.values, pipelines,
best_outer_fold_pipeline = best_outer_fold_pipeline)
Explanation: Thus, the null accuracy is ~62% if we always predict death.
Import pyplearnr and initialize optimized pipeline collection
End of explanation
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
reload(ppl)
data = pd.read_csv('Advertising.csv',index_col=0)
# Start with all features
feature_cols = ['TV','Radio','Newspaper']
# Split data
X = data[feature_cols]
y = data.Sales
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=5,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'none': {},
'standard': {}
}},
{'pre_estimator': {
'polynomial_features': {
'degree': range(1,5)
}
}},
{'estimator': {
'linear_regression': {},
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse')
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse', best_outer_fold_pipeline=1)
%autoreload
kfcv.print_report()
%autoreload
kfcv.print_report()
%%time
%autoreload
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
model_name = []
feature_interactions = pipeline_step_combo[0]
if feature_interactions:
model_name.append('interactions')
feature_selection_type = pipeline_step_combo[1]
if feature_selection_type:
model_name.append('select')
scale_type = pipeline_step_combo[2]
if scale_type:
model_name.append(scale_type)
transform_type = pipeline_step_combo[3]
if transform_type:
model_name.append(transform_type)
estimator = pipeline_step_combo[4]
model_name.append(estimator)
model_name = '_'.join(model_name)
print model_name
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': feature_selection_type,
'scale_type': scale_type,
'transform_type': transform_type
}
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': None,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save optimized pipeline
optimized_pipelines[model_name] = optimized_pipeline
Explanation: Regression
End of explanation
%%time
estimator = 'knn'
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': None,
'scale_type': None,
'transform_type': None
}
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
Explanation: KNN with and without pre-processing and various options
Basic KNN
Here we do a K-nearest neighbors (KNN) classification with stratified 10-fold (default) cross-validation with a grid search over the default of 1 to 30 nearest neighbors and the use of either "uniform" or "distance" weights:
End of explanation
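For reference, the search that this pyplearnr call wraps is roughly equivalent to the following plain scikit-learn sketch (an approximation of the internals, assuming scikit-learn >= 0.18 for the model_selection module, an 80/20 hold-out split, and stratified 10-fold cross-validation):
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier

def knn_grid_search(X, y, random_state=6):
    # Hold out 20% as a test set, then grid-search n_neighbors and weights
    # with 10-fold cross-validation (stratified by default for classifiers).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=random_state)
    pipe = Pipeline([('estimator', KNeighborsClassifier())])
    param_grid = {'estimator__n_neighbors': range(1, 31),
                  'estimator__weights': ['uniform', 'distance']}
    grid = GridSearchCV(pipe, param_grid, cv=10, n_jobs=-1).fit(X_train, y_train)
    return grid.best_params_, grid.score(X_test, y_test)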
print optimized_pipeline
Explanation: Note the default OptimizedPipeline parameters and those for its fit() method.
The OptimizedPipeline class contains all of the data associated with the nested stratified k-folds cross-validation.
After use of the fit() method, this includes the data, its test/train splits (based on the test_size percentage keyword argument), the GridSearchCV or RandomizedGridSearchCV object, the Pipeline object that has been retrained using all of the data with the best parameters, test/train scores, and validation metrics/reports.
A report can be printed immediately after the fit by setting the suppress_output keyword argument to False.
Printing the OptimizedPipeline instance also shows the report:
End of explanation
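The validation metrics included in the report can also be reproduced directly with scikit-learn for any fitted classifier (a generic sketch, independent of how pyplearnr formats its report):
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

def validation_report(fitted_model, X_test, y_test):
    y_pred = fitted_model.predict(X_test)
    cm = confusion_matrix(y_test, y_pred)
    normalized_cm = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
    return cm, normalized_cm, classification_report(y_test, y_pred)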
pre_processing_grid_parameters,classifier_grid_parameters,regression_grid_parameters = \
optimized_pipeline.get_default_pipeline_step_parameters(X.shape[0])
classifier_grid_parameters['knn']
Explanation: The report lists the steps in the pipeline, their optimized settings, the test/training accuracy (or L2 regression score), the grid search parameters, and the best parameters.
If the estimator used is a classifier it also includes the confusion matrix, normalized confusion matrix, and a classification report containing precision/recall/f1-score for each class.
It turns out that the best settings for this optimized pipeline are 12 neighbors with the 'uniform' weight.
Note how I've set the random_state keyword argument to 6 so that the models can be compared using the same test/train split.
Default pipeline step grid parameters
The default parameters to grid-search over for k-nearest neighbors are 1 to 30 neighbors and either the 'uniform' or 'distance' weight.
The defaults for the pre-processing steps, classifiers, and regressors can be viewed by using the get_default_pipeline_step_parameters() method with the number of features as the input:
End of explanation
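A hedged aside: assuming the three returned objects are plain dictionaries keyed by step or estimator name (as the ['knn'] lookup above suggests), the other available defaults can be listed the same way:
print sorted(pre_processing_grid_parameters.keys())
print sorted(classifier_grid_parameters.keys())
print sorted(regression_grid_parameters.keys())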
%%time
estimator_name = 'knn'
model_name = 'custom_override_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(30,500)
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
Explanation: KNN with custom pipeline step grid parameters
These default parameters can be ignored by setting the use_default_param_dist keyword argument to False.
The param_dist keyword argument can either override individual default parameters (if use_default_param_dist is set to True) or act as the sole source of grid parameters (if use_default_param_dist is set to False).
Here is a demonstration of generating the default parameters with the entries given in param_dist overriding them:
End of explanation
%%time
model_name = 'from_scratch_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(10,30)
}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': False,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
Explanation: Note how the n_neighbors parameter was 30 to 499 instead of 1 to 30.
Here's an example of only using param_dist for parameters:
End of explanation
%%time
estimator = 'knn'
scaling_options = ['standard','normal','min_max','binary']
for scaling_option in scaling_options:
model_name = '%s_%s'%(scaling_option,estimator_name)
optimized_pipeline_kwargs = {
'scale_type': scaling_option
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
Explanation: Note how the estimator__weights parameter isn't set for the KNN estimator.
KNN with scaling
The currently supported scaling options are standard, normal, min-max, and binary using scikit-learn's StandardScaler, Normalizer, MinMaxScaler, and Binarizer, respectively. These are set by the pipeline initialization kwarg 'scale_type' like this:
End of explanation
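To make the four options concrete, here is a small standalone sketch (plain scikit-learn on made-up numbers, independent of pyplearnr) of what each scaler does to a toy feature column:
import numpy as np
from sklearn.preprocessing import StandardScaler, Normalizer, MinMaxScaler, Binarizer
toy = np.array([[1.0], [2.0], [3.0], [4.0]])
print StandardScaler().fit_transform(toy).ravel()          # centered to zero mean, unit variance
print MinMaxScaler().fit_transform(toy).ravel()            # rescaled to the [0, 1] range
print Binarizer(threshold=2.5).fit_transform(toy).ravel()  # 0 or 1 depending on the threshold
print Normalizer().fit_transform(np.array([[3.0, 4.0]]))   # each row scaled to unit length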
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
Explanation: Let's compare the pipelines so far:
End of explanation
%%time
reload(ppl)
estimator = 'knn'
scaling_options = ['min_max','binary']
param_dists = {
'min_max': {
'scaler__feature_range': [(1,2),(3,4)]
},
'binary': {
'scaler__threshold': np.arange(0,1,0.1)
}
}
for scaling_option in scaling_options:
model_name = 'custom_%s_%s'%(scaling_option,estimator_name)
optimized_pipeline_kwargs = {
'scale_type': scaling_option
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dists[scaling_option]
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
Explanation: Binary scaling fed into a KNN classifier appears to have the best training score.
KNN with custom min-max and binary scaling settings
MinMaxScaler scales each feature value to between 0 and 1 by default. Different scaling ranges can be gridded over by setting the 'scaler__feature_range' keyword argument in param_dist.
Binarizer sets each value to 0 or 1 depending on a threshold. The default for pyplearnr is 0.5. This can be changed by setting 'scaler__threshold' using param_dist.
Here is an example of setting both:
End of explanation
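For example, a quick standalone check (scikit-learn only, values made up for illustration) of what a custom feature range does:
from sklearn.preprocessing import MinMaxScaler
print MinMaxScaler(feature_range=(1, 2)).fit_transform([[1.0], [2.0], [3.0]]).ravel()  # -> [ 1.   1.5  2. ]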
print optimized_pipelines['custom_min_max_knn']
Explanation: Switching the range for min_max scaling boosted it to rank 1 for pipeline training scores:
End of explanation
%%time
reload(ppl)
estimator = 'knn'
model_name = 'select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
Explanation: The range of 1 to 2 for the MinMaxScaler appeared to be the best.
KNN with feature selection using SelectKBest with f_classif
Currently only one form of feature selection, SelectKBest with f_classif, is supported. This is set using the 'feature_selection_type' keyword argument.
End of explanation
print optimized_pipelines['select_knn']
Explanation: Feature selection combined with KNN had a mid-level training score:
End of explanation
feature_selection_mask = optimized_pipelines['select_knn'].pipeline.named_steps['feature_selection'].get_support()
print np.array(X.columns)[feature_selection_mask]
Explanation: SelectKBest with f_classif chose 5 features as the best to use in the model.
The features selected by SelectKBest can be accessed normally, using the mask obtained from the get_support() method on the columns:
End of explanation
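A hedged follow-up sketch: the fitted SelectKBest step also exposes per-feature F-scores through its standard scikit-learn scores_ attribute, which can be ranked to see why these particular columns were kept (this assumes X is still the feature DataFrame used for the fit):
selector = optimized_pipelines['select_knn'].pipeline.named_steps['feature_selection']
print pd.Series(selector.scores_, index=X.columns).sort_values(ascending=False)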
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_selection__k': [5,7,8]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_select_knn']
Explanation: Thus, Pclass 3, being male, and the titles Miss, Mr, and Mrs were considered the most important features by SelectKBest using f_classif.
Setting custom feature selection
The default grid covers every possible number of features, from 1 up to all of them. A custom set of values can be gridded over instead by setting 'feature_selection__k' in param_dist:
End of explanation
%%time
reload(ppl)
estimator = 'knn'
model_name = 'interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['interaction_knn']
Explanation: KNN using feature interactions
Feature products of different degrees can be used as additional features by setting the 'feature_interactions' OptimizedPipeline keyword argument to True:
End of explanation
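For intuition, the interaction step behaves much like scikit-learn's PolynomialFeatures (an assumption about the implementation, shown here only as an illustration of what product features look like):
from sklearn.preprocessing import PolynomialFeatures
import numpy as np
toy = np.array([[2.0, 3.0]])
print PolynomialFeatures(degree=2, include_bias=False).fit_transform(toy)
# -> [[ 2.  3.  4.  6.  9.]], i.e. x1, x2, x1^2, x1*x2, x2^2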
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_interactions__degree': [2,3,4]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_interaction_knn']
Explanation: The optimal number of interactions (number of features multiplied by each other at once) was found to be 1.
KNN using custom number of feature interactions
The 'feature_interactions__degree' parameter dictates the interaction degree. The default setting is to try no interactions (degree 1) and pairwise interactions (degree 2). Setting this in param_dist allows custom degrees to be gridded over:
End of explanation
%%time
reload(ppl)
estimator = 'knn'
model_name = 'pca_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 'pca'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['pca_knn']
Explanation: KNN with pre-processing transforms
Currently Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) are supported as pre-processing options.
KNN with PCA pre-processing
End of explanation
transformed_data = optimized_pipelines['pca_knn'].pipeline.named_steps['transform'].transform(X.values)
column_names = ['PCA_%d'%(feature_ind+1) for feature_ind in range(transformed_data.shape[1])]
pca_df = pd.DataFrame(transformed_data,columns=column_names)
pca_df.plot(x='PCA_1',y='PCA_2',style='ro')
Explanation: We can look at the transformed data after PCA normally:
End of explanation
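A hedged extra check: the fitted transform step is a regular scikit-learn PCA, so its explained-variance ratios can be inspected to judge how much signal the leading components carry:
pca_step = optimized_pipelines['pca_knn'].pipeline.named_steps['transform']
print pca_step.explained_variance_ratio_             # variance captured by each component
print pca_step.explained_variance_ratio_.cumsum()    # cumulative variance, handy for choosing n_components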
%%time
reload(ppl)
estimator = 'knn'
model_name = 't-sne_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 't-sne'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
Explanation: This is currently a very manual process and would be difficult with more and more processing steps. I'm thinking of automating this with a class containing all optimized pipelines in the future.
Any of the parameters displayed in the pipeline section of the report (iterated_power, random_state, whiten, n_components, etc.) can be set in param_dist with the 'transform__' prefix (for example, 'transform__n_components'), as done previously.
KNN with t-SNE pre-processing
The t-SNE algorithm can be used as a pre-processing algorithm as well by setting the 'transform_type' keyword argument to 't-sne':
End of explanation
%%time
reload(ppl)
estimator = 'knn'
model_name = 'less_combos_%s'%(estimator_name)
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'num_parameter_combos': 5
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
Explanation: This t-SNE step takes longer than most in pyplearnr unfortunately. It also resulted in the worst score. I'll try to optimize this in the future.
Reducing the number of grid combinations
Setting the 'num_parameter_combos' fit() method keyword argument to an integer will limit the number of grid combinations to perform using RandomizedSearchCV instead of GridSearchCV:
End of explanation
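For scale, a back-of-the-envelope figure based on the defaults described earlier: the full default KNN grid is 30 neighbor settings x 2 weight options = 60 combinations, so sampling only 5 of them cuts the search roughly twelvefold, at the risk of missing the true optimum.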
print optimized_pipelines['less_combos_knn']
Explanation: This is a good way to speed up computations and give you an idea as to how long a particular pipeline takes to train.
Here's the corresponding report:
End of explanation
%%time
classifiers = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
for estimator in classifiers:
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
Explanation: The best parameter combination, of those attempted by RandomizedSearchCV, was 12 nearest neighbors with the 'uniform' weight.
Other models
This code currently supports K-nearest neighbors, logistic regression, support vector machines, multilayer perceptrons, random forest, and adaboost:
End of explanation
%%time
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
model_name = []
feature_interactions = pipeline_step_combo[0]
if feature_interactions:
model_name.append('interactions')
feature_selection_type = pipeline_step_combo[1]
if feature_selection_type:
model_name.append('select')
scale_type = pipeline_step_combo[2]
if scale_type:
model_name.append(scale_type)
transform_type = pipeline_step_combo[3]
if transform_type:
model_name.append(transform_type)
estimator = pipeline_step_combo[4]
model_name.append(estimator)
model_name = '_'.join(model_name)
print model_name
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': feature_selection_type,
'scale_type': scale_type,
'feature_interactions': feature_interactions,
'transform_type': transform_type
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': None,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save optimized pipeline
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black',figsize=(10,40))
print optimized_pipelines['min_max_pca_multilayer_perceptron']
len(optimized_pipelines.keys())
Explanation: Logistic regression, random forest, multilayer perceptron, and adaboost outperform KNN, even with all of the attempted pre-processing so far.
Putting it all together
Different combinations of these options can be strung together simultaneously to iterate over multiple models:
End of explanation
personal_stats = [32,1,0,df[df['Pclass']==1]['Fare'].median(),0,0,1,1,0,1,0,0,0,0,0,0]
zip(personal_stats,X.columns)
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict(personal_stats)
Explanation: Out of 240 different possible pipelines, best pipeline, with a test score of 0.899, appears to be min-max scaling between 0 and 1 funneled into a PCA and then into a multilayer perceptron with one hidden layer of size 5.
It took roughly 3 hours to find it.
Predicting survival with the optimal model
All one has to do to make a prediction is use the .predict method of the pipeline in the .pipeline field.
Here's an example of predicting whether I would survive on the Titanic. I'm 32, would probably have one family member with me, might be Pclass 1 (I'd hope), male, and have a Ph.D (if that's what they mean by Dr.). I'm using the median Fare for Pclass 1 and randomly chose a city to have embarked from:
End of explanation
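A hedged variant of the same call: newer scikit-learn releases expect a 2-D input, so wrapping the raw list in a one-row DataFrame with the training columns (assuming personal_stats lines up with X.columns, as the zip above suggests) is a slightly safer way to ask for the prediction:
personal_stats_df = pd.DataFrame([personal_stats], columns=X.columns)
print optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict(personal_stats_df)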
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict_proba(personal_stats)
Explanation: Looks like I died!
Let's look at my predicted probability of surviving:
End of explanation
%%time
%matplotlib inline
import pyplearnr as ppl
repeated_k_folds = []
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_k_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_k_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_k_folds],
}
repeated_kfcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
%%time
reload(ppl)
%matplotlib inline
import pyplearnr as ppl
repeated_five_folds = []
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
'cv': 5,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_five_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_five_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_five_folds],
}
repeated_fivefcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
import sys
sys.path.append('/Users/cmshymansky/documents/code/library/pairplotr')
import pairplotr as ppr
repeated_fivefcv_df.info()
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
repeated_fivefcv_df['train scores'].describe()
from matplotlib import pylab as plt
ax = plt.subplot(111)
print ax
# repeated_fivefcv_df.plot(ax=ax,x='train scores',y='test scores',style='bo')
repeated_kfcv_df.plot(ax=ax,x='train scores',y='test scores',style='ro')
print dir(repeated_k_folds[0].grid_search)
all_scores = []
for x in repeated_k_folds[0].grid_search.grid_scores_:
all_scores.extend(list(x.cv_validation_scores))
print max(x.cv_validation_scores),x.best_score_
print repeated_k_folds[0].grid_search.cv_results_
pd.Series(all_scores).plot(kind='hist',color='grey',bins=8)
def get_bootstrapped_datasets(orig_data_set, num_samples=100, points_per_sample=50):
import random
data_sets = []
for i in range(num_samples):
sample = [random.choice(orig_data_set) for x in range(points_per_sample)]
data_sets.append(sample)
return data_sets
def cdf(aList, x):
''' 'aList' must be sorted (low to high) '''
returnVal=0
for v in aList:
if v<=x:
returnVal+=1
return returnVal/float(len(aList))
def inv_cdf(aList, percentile):
''' 'percentile' is between 0 and 1.
'aList' must be sorted (low to high)
'''
returnVal = 0
for i in xrange(len(aList)):
if cdf(aList, aList[i])>=percentile:
returnVal = aList[i]
break
return returnVal
def conf_interval(data_set, alpha=0.05):
data_set.sort()
low_end = inv_cdf(data_set, alpha)
high_end = inv_cdf(data_set, 1-alpha)
return (low_end, high_end)
from matplotlib import pylab as plt
bootstrapped_samples = get_bootstrapped_datasets(repeated_fivefcv_df['test scores'].values)
avg_vals = [float(sum(l))/len(l) for l in bootstrapped_samples]
conf_10000 = conf_interval(avg_vals)
pd.Series(avg_vals).hist(bins=10, normed=True)
plt.axvspan(conf_10000[0],conf_10000[1],alpha=0.5,color='red')
from sklearn.learning_curve import learning_curve
import numpy as np
fig, ax = plt.subplots(1,1, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
N, train_lc, val_lc = learning_curve(optimized_pipeline.pipeline,
X, y, cv=5,
train_sizes=np.linspace(0.3, 1, 25))
ax.plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax.plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax.hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax.set_ylim(0, 1)
ax.set_xlim(N[0], N[-1])
ax.set_xlabel('training size')
ax.set_ylabel('score')
ax.legend(loc='best')
# ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
# ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
# ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
# color='gray', linestyle='dashed')
# ax[i].set_ylim(0, 1)
# ax[i].set_xlim(N[0], N[-1])
# ax[i].set_xlabel('training size')
# ax[i].set_ylabel('score')
# ax[i].set_title('degree = {0}'.format(degree), size=14)
# ax[i].legend(loc='best')
train_lc
# Set output feature
output_feature = 'diabetes'
# Get input features
input_features = [x for x in X_interaction.columns if x != output_feature]
# Split into features and responses
X = X_interaction.copy()
y = test_df[output_feature].copy()
reload(ppl)
ppl.OptimizationBundle().get_options()
%%time
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Fit data
optimized_pipeline.fit(X,y,random_state=6)
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import sklearn.metrics as sklearn_metrics
X_array = X.copy().values
y_array = y.copy().values
param_grid = {
'estimator__n_neighbors': range(31),
'estimator__weights': ['uniform', 'distance']
}
X_train, X_val, y_train, y_val = \
train_test_split(X_array,y_array,test_size=0.2,random_state=6,stratify=y_array)
from sklearn.model_selection import StratifiedKFold
kfolds_kwargs = dict(
n_splits=10,
shuffle=True,
random_state=6
)
skf = StratifiedKFold(**kfolds_kwargs)
fold_optimizations = {}
for fold_ind, data_inds in enumerate(skf.split(X_train, y_train)):
fold_optimizations[fold_ind] = {}
train_index, test_index = data_inds[0],data_inds[1]
X_train_inner, X_test_inner = X_array[train_index], X_array[test_index]
y_train_inner, y_test_inner = y_array[train_index], y_array[test_index]
pipeline = Pipeline([('estimator',KNeighborsClassifier(n_neighbors=11,weights='distance'))])
pipeline.fit(X_train_inner,y_train_inner)
y_pred_inner = pipeline.predict(X_test_inner)
confusion_matrix = sklearn_metrics.confusion_matrix(y_test_inner, y_pred_inner)
score = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['confusion_matrix'] = confusion_matrix
fold_optimizations[fold_ind]['score'] = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['pipeline'] = pipeline
print np.array([fold_optimizations[fold_ind]['score'] for fold_ind in fold_optimizations]).mean()
y_pred = pipeline.predict(X_val)
test_confusion_matrix = sklearn_metrics.confusion_matrix(y_val, y_pred)
score = test_confusion_matrix.trace()/float(test_confusion_matrix.sum())
print score
# TRAIN: [1 3] TEST: [0 2]
# TRAIN: [0 2] TEST: [1 3]
fold_optimizations
print dir(optimized_pipeline.grid_search.best_estimator_)
dir(folds[0].named_steps['estimator'])
Explanation: I would have a 0.77% chance of survival.
Summary
I've shown how to use pyplearnr to try out 240 different pipeline combinations validated with stratified 10-fold cross-validation, using a combination of simple keyword arguments with some additional customization options. I've also shown how to access the model parameters, predict survival, and check the actual predicted probability according to the optimized pipeline.
Please let me know if you have any questions or suggestions about how to improve this tool, my code, the approach I'm taking, etc.
End of explanation |
244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Case Study
Step2: One queue or two?
This notebook presents a solution to an exercise from Modeling and Simulation in Python. It uses features from the first four chapters to answer a question related to queueing theory, which is the study of systems that involve waiting in lines, also known as "queues".
Suppose you are designing the checkout area for a new store. There is room for two checkout counters and a waiting area for customers. You can make two lines, one for each counter, or one line that serves both counters.
In theory, you might expect a single line to be better, but it has some practical drawbacks
Step3: Test this function by creating a System object with lam=1/8 and mu=1/5.
Step5: Write an update function that takes as parameters x, which is the total number of customers in the store, including the one checking out; t, which is the number of minutes that have elapsed in the simulation; and system, which is a System object.
If there's a customer checking out, it should use flip to decide whether they are done. And it should use flip to decide if a new customer has arrived.
It should return the total number of customers at the end of the time step.
Step6: Test your function by calling it with x=1, t=0, and the System object you created. If you run it a few times, you should see different results.
Step8: Now we can run the simulation. Here's a version of run_simulation that creates a TimeSeries with the total number of customers in the store, including the one checking out.
Step9: Call run_simulation with your update function and plot the results.
Step11: After the simulation, we can compute L, which is the average number of customers in the system, and W, which is the average time customers spend in the store. L and W are related by Little's Law
Step12: Call compute_metrics with the results from your simulation.
Step13: Parameter sweep
Since we don't know the actual value of $\lambda$, we can sweep through a range of possibilities, from 10% to 80% of the completion rate, $\mu$. (If customers arrive faster than the completion rate, the queue grows without bound. In that case the metrics L and W just depend on how long the store is open.)
Create an array of values for lam.
Step15: Write a function that takes an array of values for lam, a single value for mu, and an update function.
For each value of lam, it should run a simulation, compute L and W, and store the value of W in a SweepSeries.
It should return the SweepSeries.
Step16: Call your function to generate a SweepSeries, and plot it.
Step17: If we imagine that this range of values represents arrival rates on different days, we can use the average value of W, for a range of values of lam, to compare different queueing strategies.
Step19: Analysis
The model I chose for this system is a common model in queueing theory, in part because many of its properties can be derived analytically.
In particular, we can derive the average time in the store as a function of $\mu$ and $\lambda$
Step20: Use this function to plot the theoretical results, then plot your simulation results again on the same graph. How do they compare?
Step22: Multiple servers
Now let's try the other two queueing strategies
Step23: Use this update function to simulate the system, plot the results, and print the metrics.
Step24: Since we have two checkout counters now, we can consider values for $\lambda$ that exceed $\mu$.
Create a new array of values for lam from 10% to 160% of mu.
Step25: Use your sweep function to simulate the two server, one queue scenario with a range of values for lam.
Plot the results and print the average value of W across all values of lam.
Step27: Multiple queues
To simulate the scenario with two separate queues, we need two state variables to keep track of customers in each queue.
Write an update function that takes x1, x2, t, and system as parameters and returns x1 and x2 as return values. If you are not sure how to return more than one return value, see compute_metrics.
When a customer arrives, which queue do they join?
Step29: Write a version of run_simulation that works with this update function.
Step30: Test your functions by running a simulation with a single value of lam.
Step32: Sweep a range of values for lam, plot the results, and print the average wait time across all values of lam.
How do the results compare to the scenario with two servers and one queue?
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
# set the random number generator
np.random.seed(7)
Explanation: Modeling and Simulation in Python
Case Study: Queueing theory
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
# Solution
def make_system(lam, mu):
Make a System object.
lam: arrival rate, per minute
mu: service completion rate, per minute
returns: System object
# duration is 10 hours, expressed in minutes
return System(lam=lam, mu=mu, duration=10*60)
Explanation: One queue or two?
This notebook presents a solution to an exercise from Modeling and Simulation in Python. It uses features from the first four chapters to answer a question related to queueing theory, which is the study of systems that involve waiting in lines, also known as "queues".
Suppose you are designing the checkout area for a new store. There is room for two checkout counters and a waiting area for customers. You can make two lines, one for each counter, or one line that serves both counters.
In theory, you might expect a single line to be better, but it has some practical drawbacks: in order to maintain a single line, you would have to install rope barriers, and customers might be put off by what seems to be a longer line, even if it moves faster.
So you'd like to check whether the single line is really better and by how much. Simulation can help answer this question.
As we did in the bikeshare model, we'll assume that a customer is equally likely to arrive during any timestep. I'll denote this probability using the Greek letter lambda, $\lambda$, or the variable name lam. The value of $\lambda$ probably varies from day to day, so we'll have to consider a range of possibilities.
Based on data from other stores, you know that it takes 5 minutes for a customer to check out, on average. But checkout times are highly variable: most customers take less than 5 minutes, but some take substantially more. A simple way to model this variability is to assume that when a customer is checking out, they have the same probability of finishing up during each time step. I'll denote this probability using the Greek letter mu, $\mu$, or the variable name mu.
If we choose $\mu=1/5$, the average number of time steps for each checkout will be 5 minutes, which is consistent with the data.
One server, one queue
Write a function called make_system that takes lam and mu as parameters and returns a System object with variables lam, mu, and duration. Set duration, which is the number of time steps to simulate, to 10 hours, expressed in minutes.
End of explanation
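As an added sanity check (illustration only, not part of the original exercise), the service-time model can be probed directly: with a per-minute completion probability of mu = 1/5, checkout durations are geometrically distributed with mean 1/mu = 5 minutes.
import numpy as np
sample_service_times = np.random.geometric(p=0.2, size=10000)
print(sample_service_times.mean())   # should come out close to 5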
# Solution
interarrival_time = 8
service_time = 5
lam = 1 / interarrival_time
mu = 1 / service_time
system = make_system(lam, mu)
Explanation: Test this function by creating a System object with lam=1/8 and mu=1/5.
End of explanation
# Solution
def update_func1(x, t, system):
Simulate one time step.
x: number of people in the shop
t: time step
system: System object
# if there's a customer in service, check if they're done
if x > 0:
if flip(system.mu):
x -= 1
# check for an arrival
if flip(system.lam):
x += 1
return x
Explanation: Write an update function that takes as parameters x, which is the total number of customers in the store, including the one checking out; t, which is the number of minutes that have elapsed in the simulation; and system, which is a System object.
If there's a customer checking out, it should use flip to decide whether they are done. And it should use flip to decide if a new customer has arrived.
It should return the total number of customers at the end of the time step.
End of explanation
# Solution
update_func1(1, 0, system)
Explanation: Test your function by calling it with x=1, t=0, and the System object you created. If you run it a few times, you should see different results.
End of explanation
def run_simulation(system, update_func):
Simulate a queueing system.
system: System object
update_func: function object
x = 0
results = TimeSeries()
results[0] = x
for t in linrange(0, system.duration):
x = update_func(x, t, system)
results[t+1] = x
return results
Explanation: Now we can run the simulation. Here's a version of run_simulation that creates a TimeSeries with the total number of customers in the store, including the one checking out.
End of explanation
# Solution
results = run_simulation(system, update_func1)
plot(results)
decorate(xlabel='Time (min)', ylabel='Customers')
Explanation: Call run_simulation with your update function and plot the results.
End of explanation
def compute_metrics(results, system):
Compute average number of customers and wait time.
results: TimeSeries of queue lengths
system: System object
returns: L, W
L = results.mean()
W = L / system.lam
return L, W
Explanation: After the simulation, we can compute L, which is the average number of customers in the system, and W, which is the average time customers spend in the store. L and W are related by Little's Law:
$L = \lambda W$
Where $\lambda$ is the arrival rate. Here's a function that computes them.
End of explanation
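A quick worked example of Little's Law, added for clarity: if the store holds L = 2 customers on average and they arrive at lam = 1/8 per minute, then W = L / lam = 2 * 8 = 16 minutes in the store on average.
print(2 / (1/8))   # -> 16.0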
# Solution
compute_metrics(results, system)
Explanation: Call compute_metrics with the results from your simulation.
End of explanation
# Solution
num_vals = 101
lam_array = linspace(0.1*mu, 0.8*mu, num_vals)
lam_array
Explanation: Parameter sweep
Since we don't know the actual value of $\lambda$, we can sweep through a range of possibilities, from 10% to 80% of the completion rate, $\mu$. (If customers arrive faster than the completion rate, the queue grows without bound. In that case the metrics L and W just depend on how long the store is open.)
Create an array of values for lam.
End of explanation
# Solution
def sweep_lam(lam_array, mu, update_func):
Run simulations with a range of values for `lam`
lam_array: array of values for `lam`
mu: probability of finishing a checkout
update_func: passed along to run_simulation
returns: SweepSeries of average wait time vs lam
sweep = SweepSeries()
for lam in lam_array:
system = make_system(lam, mu)
results = run_simulation(system, update_func)
L, W = compute_metrics(results, system)
sweep[lam] = W
return sweep
Explanation: Write a function that takes an array of values for lam, a single value for mu, and an update function.
For each value of lam, it should run a simulation, compute L and W, and store the value of W in a SweepSeries.
It should return the SweepSeries.
End of explanation
# Solution
sweep = sweep_lam(lam_array, mu, update_func1)
# Solution
plot(sweep, 'bo')
decorate(xlabel='Arrival late, lambda (per min)',
ylabel='Average time in system',
title='Single server, single queue')
Explanation: Call your function to generate a SweepSeries, and plot it.
End of explanation
# W_avg = sweep.mean()
Explanation: If we imagine that this range of values represents arrival rates on different days, we can use the average value of W, for a range of values of lam, to compare different queueing strategies.
End of explanation
def plot_W(lam_array, mu):
Plot the theoretical mean wait time.
lam_array: array of values for `lam`
mu: probability of finishing a checkout
W = 1 / (mu - lam_array)
plot(lam_array, W, 'g-')
Explanation: Analysis
The model I chose for this system is a common model in queueing theory, in part because many of its properties can be derived analytically.
In particular, we can derive the average time in the store as a function of $\mu$ and $\lambda$:
$W = 1 / (\mu - \lambda)$
The following function plots the theoretical value of $W$ as a function of $\lambda$.
End of explanation
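Plugging the baseline rates used earlier into the formula (a worked example added for clarity): with mu = 1/5 and lam = 1/8, W = 1 / (1/5 - 1/8) = 1 / (3/40) = 40/3, or about 13.3 minutes.
print(1 / (1/5 - 1/8))   # -> 13.33...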
# Solution
plot_W(lam_array, mu)
plot(sweep, 'bo')
decorate(xlabel='Arrival late, lambda (per min)',
ylabel='Average time in system',
title='Single server, single queue')
Explanation: Use this function to plot the theoretical results, then plot your simulation results again on the same graph. How do they compare?
End of explanation
# Solution
def update_func2(x, t, system):
Simulate a single queue with two servers.
system: System object
# if both servers are busy, check whether the
# second is complete
if x > 1 and flip(system.mu):
x -= 1
# check whether the first is complete
if x > 0 and flip(system.mu):
x -= 1
# check for an arrival
if flip(system.lam):
x += 1
return x
Explanation: Multiple servers
Now let's try the other two queueing strategies:
One queue with two checkout counters.
Two queues, one for each counter.
The following figure shows the three scenarios:
Write an update function for one queue with two servers.
End of explanation
# Solution
system = make_system(lam, mu)
results = run_simulation(system, update_func2)
plot(results)
decorate(xlabel='Time (min)', ylabel='Customers')
compute_metrics(results, system)
Explanation: Use this update function to simulate the system, plot the results, and print the metrics.
End of explanation
# Solution
lam_array = linspace(0.1*mu, 1.6*mu, num_vals)
Explanation: Since we have two checkout counters now, we can consider values for $\lambda$ that exceed $\mu$.
Create a new array of values for lam from 10% to 160% of mu.
End of explanation
# Solution
sweep = sweep_lam(lam_array, mu, update_func2)
W_avg = sweep.mean()
print('Average of averages = ', W_avg, 'minutes')
# Solution
plot(sweep, 'bo')
decorate(xlabel='Arrival late, lambda (per min)',
ylabel='Average time in system',
title='Multiple server, single queue')
Explanation: Use your sweep function to simulate the two server, one queue scenario with a range of values for lam.
Plot the results and print the average value of W across all values of lam.
End of explanation
# Solution
def update_func3(x1, x2, t, system):
Simulate two queues with one server each.
x1: number of customers in queue 1
x2: number of customers in queue 2
t: time step
system: System object
# if the first server is busy, check if it's done
if x1 > 0 and flip(system.mu):
x1 -= 1
# if the second queue is busy, check if it's done
if x2 > 0 and flip(system.mu):
x2 -= 1
# check for an arrival
if flip(system.lam):
# join whichever queue is shorter
if x1 < x2:
x1 += 1
else:
x2 += 1
return x1, x2
Explanation: Multiple queues
To simulate the scenario with two separate queues, we need two state variables to keep track of customers in each queue.
Write an update function that takes x1, x2, t, and system as parameters and returns x1 and x2 as return values. If you are not sure how to return more than one return value, see compute_metrics.
When a customer arrives, which queue do they join?
End of explanation
# Solution
def run_simulation(system, update_func):
Simulate a queueing system.
system: System object
update_func: function object
x1, x2 = 0, 0
results = TimeSeries()
results[0] = x1 + x2
for t in linrange(0, system.duration):
x1, x2 = update_func(x1, x2, t, system)
results[t+1] = x1 + x2
return results
Explanation: Write a version of run_simulation that works with this update function.
End of explanation
# Solution
system = make_system(lam, mu)
results = run_simulation(system, update_func3)
plot(results)
decorate(xlabel='Time (min)', ylabel='Customers')
compute_metrics(results, system)
Explanation: Test your functions by running a simulation with a single value of lam.
End of explanation
# Solution
sweep = sweep_lam(lam_array, mu, update_func3)
W_avg = sweep.mean()
print('Average of averages = ', W_avg, 'minutes')
# Solution
plot(sweep, 'bo')
decorate(xlabel='Arrival late, lambda (per min)',
ylabel='Average time in system',
title='Multiple server, multiple queue')
# Solution
With two queues, the average of averages is slightly higher, most of the time. But the difference is small.
The two configurations are equally good as long as both servers are busy; the only time two lines is worse is if one queue is empty and the other contains more than one customer. In real life, if we allow customers to change lanes, that disadvantage can be eliminated.
From a theoretical point of view, one line is better. From a practical point of view, the difference is small and can be mitigated. So the best choice depends on practical considerations.
On the other hand, you can do substantially better with an express line for customers with short service times. But that's a topic for another notebook.
;
Explanation: Sweep a range of values for lam, plot the results, and print the average wait time across all values of lam.
How do the results compare to the scenario with two servers and one queue?
End of explanation |
245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary
Use TensorFlow to build and train a neural net for recognizing the language a word belongs to
Conclusion
The Neural net has mediocre overall performance, likely because I didn't spend that much time optimizing it, and because the problem is somewhat ill-posed, since a string could be a word in multiple different languages. However, for English and Mandarin words it did better than the guessing rate of 33%, which means learning was successful.
Step1: Make training data
Step2: Make the neural net
Step3: Train Neural net and output accuracy
Step4: Test the neural net with some words I made up
This isn't the most rigorous source of test data admittedly, but oh well
Step5: It is pretty good at English.
Most words were right, with the confusion mainly being with French (which has a lot of similar words anyway).
Oddly enough it thought the word "quite" was Mandarin.
Step6: It really didn't do very well with French at all...
It seemed to mix it up with English a lot. I am not sure why the confusion is so uneven...
Step7: It did ok with Mandarin
Most of the confusion seems to be with short words (e.g. 'hen', which is a word in English too).
I found it weird that "tebie" and "renmen" were French... | Python Code:
# imports
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import string
import math
import tabulate
import os
Explanation: Summary
Use TensorFlow to build and train a neural net for recognizing the language a word belongs to
Conclusion
The Neural net has mediocre overall performance, likely because I didn't spend that much time optimizing it, and because the problem is somewhat ill-posed, since a string could be a word in multiple different languages. However, for English and Mandarin words it did better than the guessing rate of 33%, which means learning was successful.
End of explanation
# Define constants for the training data
WORD_LENGTH = 20
feature_length = 26*WORD_LENGTH
languages = "english french mandarin".split()
num_of_languages = len(languages)
# Constants for saving
save_dir = '.\\nn_save\\'
# Function for converting words to vectors
# Letters are stored as a list of 26 integers, all 0 except for one, which is a 1
# E.g. a is [1, 0, 0... <25 0's>]
# E.g. z is [0, 0 ... <25 0's>, 1]
# Overall 20 letters are stored sequentially
# Punctuation and white space is ignored
def vectorize_word(word):
l_final = []
for i in range(WORD_LENGTH):
l_next = [0]*26
try:
l_next[string.ascii_lowercase.index(word[i])] = 1
except:
pass
l_final.extend(l_next)
return l_final
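# Quick check of the encoding, added for illustration: 'ab' should switch on slot 0 of the
# first 26-letter block and slot 1 of the second block, in a vector of length 26*20 = 520.
ab_vector = vectorize_word("ab")
print("{} {} {}".format(len(ab_vector), ab_vector[0], ab_vector[26 + 1]))  # expect: 520 1 1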
f_out = open(r'.\data\nn_params.txt', 'w')
f_out.write("{}\n".format(WORD_LENGTH))
f_out.write(save_dir+'\n')
f_out.write(" ".join(languages)+'\n')
f_out.close()
# Create training data
training_data = []
training_answers = []
for i, lang in enumerate(languages):
# Read files
f_in = open(".\data\{}.txt".format(lang))
words = [w.strip() for w in f_in.readlines()]
f_in.close()
# Vectorize words
vector_words = [vectorize_word(w) for w in words]
# Vectorize output
l = [0]*num_of_languages
l[i] = 1
vector_language = [l for w in words]
# Add to training data
training_data.extend(vector_words)
training_answers.extend(vector_language)
# Convert data to numpy array
training_data = np.array(training_data)
training_answers = np.array(training_answers)
# Summarize training data
print("Training data shape: {}".format(training_data.shape))
Explanation: Make training data
End of explanation
# Input and output variables
x = tf.placeholder(tf.float32, [None, feature_length])
y_ = tf.placeholder(tf.float32, [None, num_of_languages])
# Define the number of neurons in each layer
layer_lengths = [feature_length, 40, num_of_languages]
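# i.e. 26*20 = 520 one-hot inputs -> one hidden layer of 40 units -> 3 language outputs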
# Create each layer
neural_net = []
last_output = x
for i, current_layer_length in enumerate(layer_lengths[1:]):
# Define the length of the last layer
last_layer_length = layer_lengths[i]
# Create the variables for this layer
W = tf.Variable(tf.truncated_normal([last_layer_length, current_layer_length],
stddev=1 / math.sqrt(last_layer_length)))
b = tf.Variable(tf.constant(0.1, shape=[current_layer_length]))
h = tf.sigmoid(tf.matmul(last_output, W) + b)
# Store the variables for this layer
neural_net.append((W, b, h))
# Update the last output
last_output = h
# Output layer (softmax)
y = tf.nn.softmax(last_output)
# Scoring (use cross-entropy storing)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
Explanation: Make the neural net
End of explanation
# Initialize variables
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init)
# Initialize accuracy metrics
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
accuracy_tracker = []
# Run the training
batch_size = 500
for i in range(40000):
batch_indices = np.random.randint(training_data.shape[0], size=batch_size)
batch_xs = training_data[batch_indices]
batch_ys = training_answers[batch_indices]
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# Possibly print readout
if (i+1) % 2000 == 0:
corr_pred = sess.run(correct_prediction, feed_dict={x: training_data, y_: training_answers})
correct, total = len(corr_pred[corr_pred]), len(corr_pred)
acc = float(correct)/total
accuracy_tracker.append((i+1, acc))
print("Batch {:0>5d}- {:.4f} ({:0>5d}/{})".format(i+1, acc, correct, total))
# Plot training accuracy improvement
plt.plot(*zip(*accuracy_tracker))
plt.xlabel("Batch number")
plt.ylabel("Accuracy")
plt.title("Training Accuracy for language recognition neural net")
plt.show()
# Function for testing words
def get_predictions(test_words):
test_words_vectorized = np.array([vectorize_word(w) for w in test_words])
# Get predictions
test_results = sess.run(y, feed_dict={x: test_words_vectorized})
return test_results
# Function that tests words and prints them to make a nice pretty table
def predictions_table(test_words, answers=None):
# test_words is a list of strings (the words)
# Answers will tell the net if it is correct
# Should be a list where the number n of element i correspond means that test_words[i] is of language[n]
predictions = get_predictions(test_words)
table = [[w] for w in test_words] # First column of the table is the word
table = [t + ["{:.1f}".format(p*100) for p in pred] for t, pred in zip(table, predictions)] # Next column is the predictions
headers = ["Word"] + [l.title() for l in languages]
# Possibly print wrong answers
if answers is not None:
# Find the ones it answered correctly
correct = np.array([p[i] == np.max(p) for p, i in zip(predictions, answers)])
# Add an answers column to the table
for i, c in enumerate(correct):
if c:
table[i] += [""]
else:
table[i] += ["Wrong!"]
headers += ["Correct?"]
# Print the table:
print(tabulate.tabulate(table, headers=headers))
# Possibly print the accuracy
if answers is not None:
print("Accuracy: {:.2f}%".format(100.*len(correct[correct])/len(correct)))
Explanation: Train Neural net and output accuracy
End of explanation
# English words
english_words = "hello my dear chap let's have a bit of coffee".split()
english_words += "oh my heavens look at what this neural net can do".split()
english_words += "it looks like english words are often quite similar to french ones".split()
predictions_table(english_words, answers=[0]*len(english_words))
Explanation: Test the neural net with some words I made up
This isn't the most rigorous source of test data admittedly, but oh well
End of explanation
# French words
# Note the lack of accents (the vectorizer doesn't handle accents)
# Note my poor French also
french_words = "bonjour mon ami j'adore le francais. C'est une belle langue".split()
french_words += "je mange une croissant avec une baguette et du brie".split()
french_words += "ca c'est comment on fait des choses en france!".split()
predictions_table(french_words, answers=[1]*len(french_words))
Explanation: It is pretty good at English.
Most words were right, with the confusion mainly being with French (which has a lot of similar words anyway).
Oddly enough it thought the word "quite" was Mandarin.
End of explanation
# Mandarin Words
# Note I am typing in pinyin with no tones
mandarin_words = "xuexi zhongwen zhende hen nan".split()
mandarin_words += "wo hen xihuan pinyin yinwei bangzhu wo kanshu de bijiao rongyi".split()
mandarin_words += "sishisi jiu shi tebie nan shuochulai".split()
mandarin_words += "qilai, bu yuan zuo nuli de renmen!".split() # Gotta please the censors ;)
predictions_table(mandarin_words, answers=[2]*len(mandarin_words))
Explanation: It really didn't do very well with French at all...
It seemed to mix it up with English a lot. I am not sure why the confusion is so uneven...
End of explanation
# Save neural net
# saver = tf.train.Saver()
# if not os.path.exists(save_dir):
# os.makedirs(save_dir)
# save_path = saver.save(sess, save_dir)
# print(save_path)
# Close the session
# sess.close()
Explanation: It did ok with Mandarin
Most of the confusion seems to be with short words (e.g. 'hen', which is a word in English too).
I found it weird that "tebie" and "renmen" were French...
End of explanation |
246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 3
Step1: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x
Step2: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
Step3: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree
Step4: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call
Step5: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
Step6: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step7: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
Step8: NOTE
Step9: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
Step10: The resulting model looks like half a parabola. Try on your own to see what the cubic looks like
Step11: Now try a 15th degree polynomial
Step12: What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
Changing the data and re-learning
We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.
To split the sales data into four subsets, we perform the following steps
Step13: Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
Step14: Some questions you will be asked on your quiz
Step15: Next you should write a loop that does the following
Step16: Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 3: Assessing Fit (polynomial regression)
In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed
* Use matplotlib to visualize polynomial regressions
* Use matplotlib to visualize the same polynomial degree on different subsets of the data
* Use a validation set to select a polynomial degree
* Assess the final fit using test data
We will continue to use the House data from previous notebooks.
Fire up graphlab create
End of explanation
tmp = graphlab.SArray([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x**3)
print tmp
print tmp_cubed
Explanation: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions.
For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab)
End of explanation
ex_sframe = graphlab.SFrame()
ex_sframe['power_1'] = tmp
print ex_sframe
Explanation: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
End of explanation
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# then loop over the remaining degrees:
for power in range(1, degree+1):
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe['power_' + str(power)] = feature.apply(lambda x: x**power)
return poly_sframe
Explanation: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:
End of explanation
print polynomial_sframe(tmp, 3)
Explanation: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call:
End of explanation
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
sales.head()
Explanation: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
End of explanation
sales = sales.sort(['sqft_living', 'price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
print poly1_data
poly1_data['price'] = sales['price'] # add price to the data since it's the target
print poly1_data
Explanation: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
End of explanation
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
Explanation: NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
End of explanation
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
print poly2_data
my_features = poly2_data.column_names() # get the name of the features
print my_features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
print poly2_data
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
Explanation: Let's unpack that plt.plot() command. The first pair of SArrays we passed is the 1st power of sqft and the actual price; we ask for these to be plotted as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model; we ask for these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
End of explanation
poly3_data = polynomial_sframe(sales['sqft_living'], 3)
print poly3_data
my_features3 = poly3_data.column_names() # get the name of the features
print my_features3
poly3_data['price'] = sales['price'] # add price to the data since it's the target
print poly3_data
model3 = graphlab.linear_regression.create(poly3_data, target = 'price', features = my_features3, validation_set = None)
model3.get("coefficients")
plt.plot(poly3_data['power_1'],poly3_data['price'],'.',
poly3_data['power_1'], model3.predict(poly3_data),'-')
Explanation: The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:
End of explanation
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
print poly15_data
my_features15 = poly15_data.column_names() # get the name of the features
print my_features15
poly15_data['price'] = sales['price'] # add price to the data since it's the target
print poly15_data
model15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features15, validation_set = None)
model15.get("coefficients")
plt.plot(poly15_data['power_1'],poly15_data['price'],'.',
poly15_data['power_1'], model15.predict(poly15_data),'-')
Explanation: Now try a 15th degree polynomial:
End of explanation
set_a, set_b = sales.random_split(0.5,seed=0)
set_1, set_2 = set_a.random_split(0.5,seed=0)
set_3, set_4 = set_b.random_split(0.5,seed=0)
print len(set_1), len(set_2), len(set_3), len(set_4)
Explanation: What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
Changing the data and re-learning
We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.
To split the sales data into four subsets, we perform the following steps:
* First split sales into 2 subsets with .random_split(0.5, seed=0).
* Next split the resulting subsets into 2 more subsets each. Use .random_split(0.5, seed=0).
We set seed=0 in these steps so that different users get consistent results.
You should end up with 4 subsets (set_1, set_2, set_3, set_4) of approximately equal size.
End of explanation
set_1_15_data = polynomial_sframe(set_1['sqft_living'], 15)
set_2_15_data = polynomial_sframe(set_2['sqft_living'], 15)
set_3_15_data = polynomial_sframe(set_3['sqft_living'], 15)
set_4_15_data = polynomial_sframe(set_4['sqft_living'], 15)
#
my_features_x_15 = set_1_15_data.column_names() # get the name of the features
#
set_1_15_data['price'] = set_1['price'] # add price to the data since it's the target
set_2_15_data['price'] = set_2['price'] # add price to the data since it's the target
set_3_15_data['price'] = set_3['price'] # add price to the data since it's the target
set_4_15_data['price'] = set_4['price'] # add price to the data since it's the target
#
model_1_15 = graphlab.linear_regression.create(set_1_15_data, target='price', features=my_features_x_15, validation_set=None)
model_2_15 = graphlab.linear_regression.create(set_2_15_data, target='price', features=my_features_x_15, validation_set=None)
model_3_15 = graphlab.linear_regression.create(set_3_15_data, target='price', features=my_features_x_15, validation_set=None)
model_4_15 = graphlab.linear_regression.create(set_4_15_data, target='price', features=my_features_x_15, validation_set=None)
model_1_15.get("coefficients").print_rows(num_rows = 16)
model_2_15.get("coefficients").print_rows(num_rows = 16)
model_3_15.get("coefficients").print_rows(num_rows = 16)
model_4_15.get("coefficients").print_rows(num_rows = 16)
plt.plot(set_1_15_data['power_1'],set_1_15_data['price'],'.',set_1_15_data['power_1'],model_1_15.predict(set_1_15_data),'-')
plt.plot(set_2_15_data['power_1'],set_2_15_data['price'],'.',set_2_15_data['power_1'],model_2_15.predict(set_2_15_data),'-')
plt.plot(set_3_15_data['power_1'],set_3_15_data['price'],'.',set_3_15_data['power_1'],model_3_15.predict(set_3_15_data),'-')
plt.plot(set_4_15_data['power_1'],set_4_15_data['price'],'.',set_4_15_data['power_1'],model_4_15.predict(set_4_15_data),'-')
Explanation: Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
End of explanation
training_and_validation, testing = sales.random_split(0.9,seed=1)
training, validation = training_and_validation.random_split(0.5,seed=1)
Explanation: Some questions you will be asked on your quiz:
Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?
no
Quiz Question: (True/False) the plotted fitted lines look the same in all four plots
false
Selecting a Polynomial Degree
Whenever we have a "magic" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. (We will explore another approach in week 4).
We split the sales dataset 3-way into training set, test set, and validation set as follows:
Split our sales data into 2 sets: training_and_validation and testing. Use random_split(0.9, seed=1).
Further split our training data into two sets: training and validation. Use random_split(0.5, seed=1).
Again, we set seed=1 to obtain consistent results for different users.
End of explanation
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
predictions = model.predict(data)
# Then compute the residuals/errors
residuals = predictions - outcome
# Then square and add them up
RSS = sum(pow(residuals,2))
return(RSS)
def minimize_rss( training_data, validation_data, degrees ):
degree_rss = {}
for degree in range(1,degrees+1):
poly_degree = polynomial_sframe(training_data['sqft_living'], degree)
poly_features = poly_degree.column_names()
poly_degree['price'] = training_data['price']
poly_model = graphlab.linear_regression.create(
poly_degree, target='price', features=poly_features, validation_set=None, verbose=False)
poly_validation_data = polynomial_sframe(validation_data['sqft_living'], degree)
degree_rss[degree] = get_residual_sum_of_squares(poly_model, poly_validation_data, validation_data['price'])
print (degree,degree_rss[degree])
min_value = min(degree_rss.values())
min_key = [key for key, value in degree_rss.iteritems() if value == min_value]
return( min_key[0], min_value )
print minimize_rss( training, validation, 15 )
Explanation: Next you should write a loop that does the following:
* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))
* Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree
* hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)
* Add train_data['price'] to the polynomial SFrame
* Learn a polynomial regression model to sqft vs price with that degree on TRAIN data
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynomial SFrame using validation data.
* Report which degree had the lowest RSS on validation data (remember python indexes from 0)
(Note you can turn off the print out of linear_regression.create() with verbose = False)
End of explanation
print minimize_rss( training, testing, 15 )
Explanation: Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?
6
Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.
End of explanation |
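As an optional sketch (my addition, not part of the original solution): the quiz asks for the RSS on TEST data of the single degree chosen on validation, so one way is to refit that degree on the training set and score it on polynomial test features, reusing the helpers defined above and assuming the chosen degree is 6.
# Hedged sketch: evaluate only the validation-chosen degree (assumed to be 6) on TEST data
best_degree = 6
poly_train = polynomial_sframe(training['sqft_living'], best_degree)
best_features = poly_train.column_names()
poly_train['price'] = training['price']
best_model = graphlab.linear_regression.create(poly_train, target='price',
                                               features=best_features,
                                               validation_set=None, verbose=False)
poly_test = polynomial_sframe(testing['sqft_living'], best_degree)
print get_residual_sum_of_squares(best_model, poly_test, testing['price'])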
247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First example
Step1: The step method (Sampling method)
Step2: If we wanted to use the slice sampling algorithm to sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample.
Step3: Posterior analysis
PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.
Step4: Case study 1
Step5: Model Fitting
Step6: Case study 2 Coal Mining disasters
Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period. Unfortunately, we also have a pair of years with missing data, identified as missing by a NumPy MaskedArray using -999 as the marker value.
Next we will build a model for this series and attempt to estimate when the change occurred. At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables.
Step7: \begin{split}\begin{aligned}
D_t &\sim \text{Pois}(r_t), r_t= \begin{cases}
l, & \text{if } t \lt s \
e, & \text{if } t \ge s
\end{cases} \
s &\sim \text{Unif}(t_l, t_h)\
e &\sim \text{exp}(1)\
l &\sim \text{exp}(1)
\end{aligned}\end{split}
the parameters are defined as follows | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Initialize random number generator
np.random.seed(123)
# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]
# Size of dataset
size = 100
# Predictor variable
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
# Simulate outcome variable
Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma
fig, axes = plt.subplots(1, 2, sharex=True, figsize=(10,4))
axes[0].scatter(X1, Y)
axes[1].scatter(X2, Y)
axes[0].set_ylabel('Y'); axes[0].set_xlabel('X1'); axes[1].set_xlabel('X2');
#import os
#os.environ['MKL_THREADING_LAYER'] = 'GNU'
## Model Specification
import pymc3 as pm
#basic_model = pm.Model()
#with basic_model:
# The priors of the "unknown model"
# alpha = pm.Normal('alpha',mu=0.0,sd=10)
## shape tell that it's a vector of size 2
# beta = pm.Normal('beta',mu=0.0, sd=10,shape=2)
# sigma = pm.HalfNormal('sigma', sd=1.0)
# Expected value of outcome
# mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling observations)
# Y_obs = pm.Normal('Y_obs',mu=mu,sd=sigma,observed=Y)
basic_model = pm.Model()
with basic_model:
# Priors for unknown model parameters
alpha = pm.Normal('alpha', mu=0, sd=10)
beta = pm.Normal('beta', mu=0, sd=10, shape=2)
sigma = pm.HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling distribution) of observations
Y_obs = pm.Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
%time map_estimate = pm.find_MAP(model=basic_model)
map_estimate
from scipy import optimize
map_estimate = pm.find_MAP(model=basic_model, fmin=optimize.fmin_powell)
map_estimate
Explanation: First example:
Linear Regression, bayesian approach
Taken from the PyMC3 webpage (http://docs.pymc.io/notebooks/getting_started.html)
We are interested in predicting outcomes Y as normally-distributed observations with an expected value μ that is a linear function of two predictor variables, X1 and X2
$$ Y \sim N(\mu, \sigma^2) $$
$$ \mu = \alpha + \beta_1 X_1 + \beta_2X_2 $$
where α is the intercept, and βi is the coefficient for covariate Xi, while σ represents the observation error. Since we are constructing a Bayesian model, the unknown variables in the model must be assigned a prior distribution. We choose zero-mean normal priors with variance of 100 for both regression coefficients, which corresponds to weak information regarding the true parameter values. We choose a half-normal distribution (normal distribution bounded at zero) as the prior for σ.
$$ \alpha \sim N(0,100) $$
$$ \beta_i \sim N(0,100) $$
$$ \sigma \sim N(0,1) $$
Simulation
End of explanation
from scipy import optimize
with basic_model:
trace = pm.sample()
trace['alpha'][-5:]
Explanation: The step method (Sampling method)
End of explanation
with basic_model:
# obtain starting values via MAP
start = pm.find_MAP(fmin=optimize.fmin_powell)
# instantiate sampler
step = pm.Slice()
# draw 5000 posterior samples
trace = pm.sample(5000, step=step, start=start)
Explanation: If we wanted to use the slice sampling algorithm to sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample.
End of explanation
_ = pm.traceplot(trace)
Explanation: Posterior analysis
PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.
End of explanation
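The summarization side mentioned above can be sketched as follows (a minimal example, assuming this PyMC3 version exposes pm.summary for tabulating means, standard deviations and credible intervals of each variable):
# Sketch: tabular summary of the posterior samples in `trace`
pm.summary(trace)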
import pandas_datareader.data as web
import pandas as pd
import pandas_datareader.data as web
import datetime
# Original
start = datetime.datetime(2008, 5, 1)
end = datetime.datetime(2009,12,1)
## My parameters (these override the original 2008-2009 range above)
start = datetime.datetime(2018, 1, 1)
end = datetime.datetime(2018,2,12)
returns = web.DataReader('SPY', 'google', start, end)['Close'].pct_change()
#f.ix['2010-01-04']
returns.plot(figsize=(10, 6))
plt.ylabel('daily returns in %');
with pm.Model() as sp500_model:
nu = pm.Exponential('nu', 1./10, testval=5.)
sigma = pm.Exponential('sigma', 1./.02, testval=.1)
s = pm.GaussianRandomWalk('s', sigma**-2, shape=len(returns))
volatility_process = pm.Deterministic('volatility_process', pm.math.exp(-2*s))
r = pm.StudentT('r', nu, lam=1/volatility_process, observed=returns)
Explanation: Case study 1: Stochastic volatility
We present a case study of stochastic volatility, time varying stock market volatility, to illustrate PyMC3’s use in addressing a more realistic problem. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. This example has 400+ parameters so using common sampling algorithms like Metropolis-Hastings would get bogged down, generating highly autocorrelated samples. Instead, we use NUTS, which is dramatically more efficient.
The Model
Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others they are very stable. Stochastic volatility models address this with a latent volatility variable, which changes over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21).
$$ \sigma \sim exp(50) $$
$$ \nu \sim exp(0.1) $$
$$ s_i \sim N(s_{i-1},\sigma^{-2}) $$
$$ log(y_i) \sim t(\nu,0,exp(-2s_i)) $$
Here, y is the daily return series which is modeled with a Student-t distribution with an unknown degrees of freedom parameter, and a scale parameter determined by a latent process s. The individual si are the individual daily log volatilities in the latent log volatility process.
The Data
Our data consist of daily returns of the S&P 500; the original example looks at the 2008 financial crisis, although the cell below overrides the dates with an early-2018 window. Here, we use pandas-datareader to obtain the price data (this notebook pulls it from Google Finance rather than Yahoo! Finance); it can be installed with pip install pandas-datareader.
End of explanation
with sp500_model:
trace = pm.sample(2000)
pm.traceplot(trace, [nu, sigma]);
fig, ax = plt.subplots(figsize=(15, 8))
returns.plot(ax=ax)
ax.plot(returns.index, 1/np.exp(trace['s',::5].T), 'r', alpha=.03);
ax.set(title='volatility_process', xlabel='time', ylabel='volatility');
ax.legend(['S&P500', 'stochastic volatility process'])
Explanation: Model Fitting
End of explanation
disaster_data = np.ma.masked_values([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1], value=-999)
year = np.arange(1851, 1962)
plt.plot(year, disaster_data, 'o', markersize=8);
plt.ylabel("Disaster count")
plt.xlabel("Year")
Explanation: Case study 2 Coal Mining disasters
Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period. Unfortunately, we also have a pair of years with missing data, identified as missing by a NumPy MaskedArray using -999 as the marker value.
Next we will build a model for this series and attempt to estimate when the change occurred. At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables.
End of explanation
with pm.Model() as disaster_model:
switchpoint = pm.DiscreteUniform('switchpoint', lower=year.min(), upper=year.max(), testval=1900)
# Priors for pre- and post-switch rates number of disasters
early_rate = pm.Exponential('early_rate', 1)
late_rate = pm.Exponential('late_rate', 1)
# Allocate appropriate Poisson rates to years before and after current
rate = pm.math.switch(switchpoint >= year, early_rate, late_rate)
disasters = pm.Poisson('disasters', rate, observed=disaster_data)
with disaster_model:
trace = pm.sample(10000)
pm.traceplot(trace)
Explanation: $$\begin{aligned}
D_t &\sim \text{Pois}(r_t), \quad r_t = \begin{cases}
e, & \text{if } t \le s \\
l, & \text{if } t > s
\end{cases} \\
s &\sim \text{Unif}(t_l, t_h) \\
e &\sim \text{exp}(1) \\
l &\sim \text{exp}(1)
\end{aligned}$$
the parameters are defined as follows:
* Dt: The number of disasters in year t
* rt: The rate parameter of the Poisson distribution of disasters in year t.
* s: The year in which the rate parameter changes (the switchpoint).
* e: The rate parameter before the switchpoint s.
* l: The rate parameter after the switchpoint s.
* tl, th: The lower and upper boundaries of year t.
End of explanation |
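One possible way to turn the trace into a point estimate of the change year (a sketch using the trace object above; indexing a variable by name follows the same pattern as trace['alpha'] earlier in this notebook):
# Sketch: point estimates for the switchpoint from its posterior samples
switch_samples = trace['switchpoint']
print('posterior mean: %s, median: %s' % (switch_samples.mean(), np.median(switch_samples)))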
248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kittens
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: If you have used the Internet, you have probably seen videos of kittens unrolling toilet paper.
And you might have wondered how long it would take a standard kitten to unroll 47 m of paper, the length of a standard roll.
The interactions of the kitten and the paper rolls are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. And let's neglect the friction between the roll and the axle.
This diagram shows the paper roll with the force applied by the kitten, $F$, the lever arm of the force around the axis of rotation, $r$, and the resulting torque, $\tau$.
Assuming that the force applied by the kitten is 0.002 N, how long would it take to unroll a standard roll of toilet paper?
We'll use the same parameters as in Chapter 24
Step2: Rmin and Rmax are the minimum and maximum radius of the roll, respectively.
Mcore is the weight of the core (the cardboard tube at the center) and Mroll is the total weight of the paper.
L is the unrolled length of the paper.
tension is the force the kitten applies by pulling on the loose end of the roll (I chose this value because it yields reasonable results).
In Chapter 24 we defined $k$ to be the constant that relates a change in the radius of the roll to a change in the rotation of the roll
Step4: Moment of Inertia
To compute angular acceleration, we'll need the moment of inertia for the roll.
At http
Step5: Icore is the moment of inertia of the core; Iroll is the moment of inertia of the paper.
rho_h is the density of the paper in terms of mass per unit of area.
To compute rho_h, we compute the area of the complete roll like this
Step6: And divide the mass of the roll by that area.
Step7: As an example, here's the moment of inertia for the complete roll.
Step8: As r decreases, so does I. Here's the moment of inertia when the roll is empty.
Step9: The way $I$ changes over time might be more of a problem than I have made it seem. In the same way that $F = m a$ only applies when $m$ is constant, $\tau = I \alpha$ only applies when $I$ is constant. When $I$ varies, we usually have to use a more general version of Newton's law. However, I believe that in this example, mass and moment of inertia vary together in a way that makes the simple approach work out.
A friend of mine who is a physicist is not convinced; nevertheless, let's proceed on the assumption that I am right.
Simulation
The state variables we'll use are
theta, the total rotation of the roll in radians,
omega, angular velocity in rad / s,
r, the radius of the roll, and
y, the length of the unrolled paper.
Here's a State object with the initial conditions.
Step10: And here's a System object with the starting conditions and t_end.
Step11: You can take it from here.
Exercise
Step12: Exercise
Step13: Now run the simulation.
Step14: And check the results.
Step15: The final value of theta should be about 200 rotations, the same as in Chapter 24.
The final value of omega should be about 63 rad/s, which is about 10 revolutions per second. That's pretty fast, but it might be plausible.
The final value of y should be L, which is 47 m.
The final value of r should be Rmin, which is 0.02 m.
And the total unrolling time should be about 76 seconds, which seems plausible.
The following cells plot the results.
theta increases slowly at first, then accelerates.
Step16: Angular velocity, omega, increases almost linearly at first, as constant force yields almost constant torque. Then, as the radius decreases, the lever arm decreases, yielding lower torque, but moment of inertia decreases even more, yielding higher angular acceleration.
Step17: y increases slowly and then accelerates.
Step18: r decreases slowly, then accelerates. | Python Code:
# download modsim.py if necessary
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ModSimPy/raw/master/' +
'modsim.py')
# import functions from modsim
from modsim import *
Explanation: Kittens
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
Rmin = 0.02 # m
Rmax = 0.055 # m
Mcore = 15e-3 # kg
Mroll = 215e-3 # kg
L = 47 # m
tension = 0.002 # N
Explanation: If you have used the Internet, you have probably seen videos of kittens unrolling toilet paper.
And you might have wondered how long it would take a standard kitten to unroll 47 m of paper, the length of a standard roll.
The interactions of the kitten and the paper rolls are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. And let's neglect the friction between the roll and the axle.
This diagram shows the paper roll with the force applied by the kitten, $F$, the lever arm of the force around the axis of rotation, $r$, and the resulting torque, $\tau$.
Assuming that the force applied by the kitten is 0.002 N, how long would it take to unroll a standard roll of toilet paper?
We'll use the same parameters as in Chapter 24:
End of explanation
k = (Rmax**2 - Rmin**2) / 2 / L
k
Explanation: Rmin and Rmax are the minimum and maximum radius of the roll, respectively.
Mcore is the weight of the core (the cardboard tube at the center) and Mroll is the total weight of the paper.
L is the unrolled length of the paper.
tension is the force the kitten applies by pulling on the loose end of the roll (I chose this value because it yields reasonable results).
In Chapter 24 we defined $k$ to be the constant that relates a change in the radius of the roll to a change in the rotation of the roll:
$$dr = k~d\theta$$
And we derived the equation for $k$ in terms of $R_{min}$, $R_{max}$, and $L$.
$$k = \frac{1}{2L} (R_{max}^2 - R_{min}^2)$$
So we can compute k like this:
End of explanation
def moment_of_inertia(r):
Moment of inertia for a roll of toilet paper.
r: current radius of roll in meters
returns: moment of inertia in kg m**2
Icore = Mcore * Rmin**2
Iroll = np.pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
Explanation: Moment of Inertia
To compute angular acceleration, we'll need the moment of inertia for the roll.
At http://modsimpy.com/moment you can find moments of inertia for
simple geometric shapes. I'll model the core as a "thin cylindrical shell", and the paper roll as a "thick-walled cylindrical tube with open ends".
The moment of inertia for a thin shell is just $m r^2$, where $m$ is the mass and $r$ is the radius of the shell.
For a thick-walled tube the moment of inertia is
$$I = \frac{\pi \rho h}{2} (r_2^4 - r_1^4)$$
where $\rho$ is the density of the material, $h$ is the height of the tube (if we think of the roll oriented vertically), $r_2$ is the outer radius, and $r_1$ is the inner radius.
Since the outer radius changes as the kitten unrolls the paper, we
have to compute the moment of inertia, at each point in time, as a
function of the current radius, r, like this:
End of explanation
area = np.pi * (Rmax**2 - Rmin**2)
area
Explanation: Icore is the moment of inertia of the core; Iroll is the moment of inertia of the paper.
rho_h is the density of the paper in terms of mass per unit of area.
To compute rho_h, we compute the area of the complete roll like this:
End of explanation
rho_h = Mroll / area
rho_h
Explanation: And divide the mass of the roll by that area.
End of explanation
moment_of_inertia(Rmax)
Explanation: As an example, here's the moment of inertia for the complete roll.
End of explanation
moment_of_inertia(Rmin)
Explanation: As r decreases, so does I. Here's the moment of inertia when the roll is empty.
End of explanation
init = State(theta=0, omega=0, y=0, r=Rmax)
init
Explanation: The way $I$ changes over time might be more of a problem than I have made it seem. In the same way that $F = m a$ only applies when $m$ is constant, $\tau = I \alpha$ only applies when $I$ is constant. When $I$ varies, we usually have to use a more general version of Newton's law. However, I believe that in this example, mass and moment of inertia vary together in a way that makes the simple approach work out.
A friend of mine who is a physicist is not convinced; nevertheless, let's proceed on the assumption that I am right.
Simulation
The state variables we'll use are
theta, the total rotation of the roll in radians,
omega, angular velocity in rad / s,
r, the radius of the roll, and
y, the length of the unrolled paper.
Here's a State object with the initial conditions.
End of explanation
system = System(init=init, t_end=120)
Explanation: And here's a System object with the starting conditions and t_end.
End of explanation
# Solution goes here
# Solution goes here
Explanation: You can take it from here.
Exercise:
Write a slope function we can use to simulate this system. Test it with the initial conditions. The results should be approximately
0.0, 0.294, 0.0, 0.0
End of explanation
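One possible sketch of such a slope function (an assumption about the intended solution, reusing moment_of_inertia, tension and k from above; the minus sign reflects that the roll's radius shrinks as it unrolls):
# Hedged sketch of a slope function for this system (not the official solution)
def slope_func(t, state, system):
    theta, omega, y, r = state
    I = moment_of_inertia(r)     # current moment of inertia
    torque = r * tension         # lever arm times the kitten's force
    alpha = torque / I           # angular acceleration
    dydt = r * omega             # paper pays out at the rim speed
    drdt = -k * omega            # radius shrinks as the roll turns
    return omega, alpha, dydt, drdt

# checking against the expected values (approximately 0.0, 0.294, 0.0, 0.0)
slope_func(0, system.init, system)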
# Solution goes here
# Solution goes here
Explanation: Exercise: Write an event function that stops the simulation when y equals L, that is, when the entire roll is unrolled. Test your function with the initial conditions.
End of explanation
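A hedged sketch of the event function (again an assumption about the intended solution):
# Sketch: stop when the unrolled length y reaches the full length L
def event_func(t, state, system):
    theta, omega, y, r = state
    return L - y    # crosses zero when the roll is fully unrolled

event_func(0, system.init, system)   # with the initial state this should return L, i.e. 47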
# Solution goes here
Explanation: Now run the simulation.
End of explanation
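A possible way to run it (assuming modsim's run_solve_ivp accepts an events keyword, as it does in other chapters of the book):
# Sketch: run the simulation until the event function triggers
results, details = run_solve_ivp(system, slope_func, events=event_func)
details.message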
results.tail()
Explanation: And check the results.
End of explanation
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: The final value of theta should be about 200 rotations, the same as in Chapter 24.
The final value of omega should be about 63 rad/s, which is about 10 revolutions per second. That's pretty fast, but it might be plausible.
The final value of y should be L, which is 47 m.
The final value of r should be Rmin, which is 0.02 m.
And the total unrolling time should be about 76 seconds, which seems plausible.
The following cells plot the results.
theta increases slowly at first, then accelerates.
End of explanation
results.omega.plot(color='C2', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
Explanation: Angular velocity, omega, increases almost linearly at first, as constant force yields almost constant torque. Then, as the radius decreases, the lever arm decreases, yielding lower torque, but moment of inertia decreases even more, yielding higher angular acceleration.
End of explanation
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: y increases slowly and then accelerates.
End of explanation
results.r.plot(color='C4', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (m)')
Explanation: r decreases slowly, then accelerates.
End of explanation |
249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
阅读笔记
作者:方跃文
Email
Step1: 基础知识
语言语义
python语言的设计特点是重视可读性、简洁性和明确性。
缩进,而不是大括号
python是通过空白符(制表符或者空格)来阻止代码的,不像R、C++等用的是大括号。该书原作者建议使用4空格作为缩进量。
万物皆对象
python语言的一个重要特点就是其对象模型的一致性。任何数值、字符串等都待在自己的“盒子”里,而这些“盒子”就是python的对象。
注释
注释符使用#
函数调用和对象方法的调用
几乎所有的python对象都有一些附属函数(也就是方法),它们可以访问该对象的内部数据。方法的调用时这样写的:
```
obj.some_method(x, y, z)
```
变量和按引用传递
在python中对变量赋值时,你其实是在创建等号右侧对象的一个引用。用实际的例子来说,看看下面这个整数列表:
Step2: 用小卡特的爹(好吧,这其实是某本python书的作者,但是我实在想不起来名字了)的话就是:相当于把a和b都贴在了[1,2,3]这个上面,上面a.append[4]之后,相当于把又贴在了[1,2,3,4]上面,因为b是贴在a上的,所以b的值也被改变,成为了[1,2,3,4]。
赋值操作assignment也可以被叫做binding绑定,因为这其实是将一个名称和对象进行了捆绑。因此被赋值了得变量名有时也被称作binding variables。
当我们把对象以参数的形式传入函数时候,我们其实传入的只是引用而已,不会发生任何复制。这是python区别于一些其他语言的重要特征之一。
例如下面这个例子
Step3: 动态引用,强类型
跟Java和C++相反, python中的对象引用没有与之关联的类型信息。从下面的例子可以看出:
Step4: 变量其实是对象在特定命名空间中的名称而已。对象的类型信息是保存在它自己内部的。如果你因为这样就认为python不是类型语言,那就是错了。
例如我们发现把字符串和整型数据直接相加在python中是不能进行的:
Step5: 从上面可以看出类型必须一致才能进行四则运算。从上面我们可以看出type可以用来查看一个对象的类型;反过来,python中,可以通过isinstance函数来检查一个对象是否是某个数据类型的实例。
Step6: instance 可以接受由类型组成的元祖。如果想检查某个对象的类型是否属于元祖中所制定的类型,可以用如下例子中的方法:
Step7: 属性和方法
时间: 2017年9月20日
Python中的对象通常具有属性 attribute,也具有方法。所谓方法,则是与对象有关并且能够访问对象内部数据的函数。它们通过形如 obj.attribute_name 这样的语法进行访问
Step8: "鸭子"类型
(鸭子在这里是哲学隐喻。意思是说:对于一只鸟类动物,不用管它是不是鸭子,只要看它像不像鸭子就可以了。)
一般上,我们并不总是关心对象属于什么类型,但是很多时候我们想知道的是它到底有没有某些方法。比如,只要一个对象实现额迭代器协议(iterator protocol),就可以确认它是可迭代的。对于大部分对象而言,这就意味着它拥有一个
Step9: 魔术方法。当然,还有一个更好一些的验证办法,即利用 iter 函数:
Step10: 常常在编写需要处理多类型输入的函数时使用这个功能,此外还有一个常见的应用场景是:编写可以接受任何序列(列表、元祖、ndarry)或迭代器的函数。例如我们可以先检查对象是不是列表(或NumPy数组),如果不是,就将其转换成是:
Step11: 引入 import
2017年9月21日
在python中,module模块就是一个含有函数和变量定义及从其他.py文件引入的此类东西的.py文件。也许这句话很拗口,所以还是让我用例子说话:
我在appedix-A文件夹中创建了一个新python脚本,取名为simple02.py,它的内容是(注意原书使用的是Python 2.7, 我使用的是python 3.6所以代码细节上有微小的差别)
Step12: 如果我们想要引入 simple02.py 中定义的变量和函数,我们可以在同目录下创建另一个文件,取名为 simple03.py
Step13: 运行它:python simple03.py
(如果是在windows系统的git bash中运行python,需要先设置一下python的路径,方法是在git bash terminal中运行PATH=$PATH
Step14: 二元运算符和比较运算符
一些常用的运算符如下:
Step15: 严格与懒惰
无论使用什么变成语言,都需要了解一下表达式是何时被求值的。
在python中,例如这样一个例子:
Step16: 在python中,只要这些句子被求值,相关计算就会立即发生。上述的例子中,d的值会被立即设定为30。而在其他一些语言(eg. Haskell), d的值在被使用之前是不会被计算出来的。
在一些特定环境下,我们希望求值计算并不是立即进行的。特别是在密集型计算中,计算负荷重,延迟计算就可以发挥作用。Python 提供了一些技术来实现这样的计算,但是在本书中并没有做介绍。
可变和不可变对象
大部分python对象是可变的(mutable),比如列表、字典、NumPy数组以及大部分用户自定义类型(类)。也就是说,它们所包含的对象或者值是可被修改的。
Step17: 另外一些对象,诸如tuple元组和字符串是不可变的immutable。
下面这个例子中,我们试图去改变tuple,但是python告诉我们这样的操作是不被支持的。
Step18: 标准类型
Python 有一些用于处理数值数据、字符串、布尔值(True or False)以及日期/时间的内置类型。
这里我们列出主要的标量类型。后面我们将单独讨论日期/时间的处理,因为它们是由标准库中的datetime模块提供的。
标准的Python标量类型
| 类型 | 说明|
| ------| ------ |
| None | Python的'null'值(None只存在一个实例对象) |
| str | 字符串类型。Python 2.x 中只有 ASCII 值,而Python 3中则是Unicode|
| unicode | Unicode 字符串类型 |
| float | 双精度(64)浮点数。注意这里没有专门的doubl类型|
|bool| True或者False|
|int|有符号整数,其最大值由平台系统决定|
|long|任意精度的有符号整数。大的itn值会被自动转换为long|
因为我在这里使用了Markdown输入表格,所以顺便介绍下Markdown中关于表格的一些小细节
语法说明:
|、-、
Step19: 但是在python2.x中,并不能默认产生这个浮点解。要产生浮点解,需要在python2.x的程序中写入
Step20: 如果不加这句话,也可以用显示的方式来转化,如
Step21: 4)如果要得到C风格的整数除法,即除不尽时候丢弃小树,可以使用“除后圆整运算符//”
Step22: 5)在python中,复数的虚部用j表示,例如
Step23: 字符串
字符串在python中的使用十分灵活。可以用单引号或者双引号。对于大段的字符串则可以使用三重引号。例如
Step24: 前面我们已经提到过,字符串在python中是不可以被改变的,即immutable,如果要修改字符串就必须创建一个新的。
Step25: 由于字符串其实是一串字符序列,所以可以被当作某种序列类型(类似列表、元祖等)进行处理
Step26: 如果一个字符串包含很反斜杠,这会很让人懊恼,因为反斜杠常常会将字符进行转移。在python中,为了使得反斜杠就代表反斜杠本身而不去进行转移,可以在字符串前面加上字母r。这个在matplot里面画图添加希腊字母标注时候会显得比较有用。
Step27: 这里重点讲一讲python 3带来的一些新的字符串格式化的手段,其实我在平时也已经在用了,但是这里再进行简短的讨论。因为在数据分析中这个经常会用到。
这里简单说一下主要的机制:以一个%开头且后面跟着一个或者多个格式字符的字符串是需要插入值的目标。这句话读起来也很拗口,让我们看例子
Step28: 上述,%s代表将参数格式化为字符串,%d将参数格式化为整数,这些地方都是需要实际参数来替换的,因此在上述代码的第二行中,我们使用了由值组成的元组,将这些值传入形参的位置。
Step29: 布尔值
比较运算和表达式都可以产生True或者False. 布尔值可以用and和or关键字进行连接
Step30: ** 几乎所有内置的python类型以及任何定义了__nonzero__魔术方法的类都能在if语句中被解释为True或者False
Step31: 如果想知道一个对象会被强制转换成哪个布尔值,可以使用bool函数。
Step32: 类型转换
str、 bool、 int 以及 float 等类型也可用作将值转换成该类型的函数. 我觉得,通过下面的例子不难发现,这些函数并不会从本质上改变原始值,而只是创建了新的引用。
Step33: None
None是python的空值类型。如果一个函数无显示的返回值,则隐式返回None
Step34: None 还是函数可选参数的一种常见默认值;(不过,虽然我知道可以这样用,但是我暂时联想不到有什么场景可以用到)
Step35: 值得注意的是, None并非是保留关键字,它只是NoneType的一个实例
日期和时间
Python 内置的 datetime 模块提供了 datetime, date 以及 time 等类型。datetime 类型是用的最多的,它合并了保存在 date 和 time 中的信息。
Step36: 给定 datetime 一个实例。你可以通过调用其date和time方法提取相应的date和time对象
Step37: strftime 方法用于将datetime格式化为字符串:
Step38: 控制流
if, elif 和 else
if语句是常见的控制语句类型。它用于判断一个条件,如果为 True, 则执行紧跟其后的代码块:
Step39: if 可以和多个 elif 以及一个“滴水不漏”的 else 块(如果所有条件都为False),例如
Step40: for 循环
for 循环用于对集合(比如列表或者元祖)或迭代器进行迭代。for 循环的标准语法是:
for value in collection
Step41: 后面我们会看到,如果集合或者迭代器的元素是序列类型,例如元组或者列表,那么还可以方便将这些元素拆散成for语句中的多个变量.
Step42: while 循环
while 循环定义了一个条件和一个代码块,只要条件不为False或者循环没有被break显示终止,则代码会一直执行下去:
Step43: pss
pass 是 python 中的“空操作”语句。它可以被用在那些没有任何功能的代码块中。由于python是根据空白符划分代码块的,所以它的存在是很有必要的。
Step44: 在开发一个新功能时,常常会将pass用作代码中的占位符:
Step45: 异常处理
优雅地处理Python错误或异常是构建健壮程序的重要环节。 译者翻译的这句真有意思,感觉铺面而来一种“知乎”风格。
在data science中,许多函数只对特定数据类型有效。例,python中的float函数可将字符串转换为浮点数,但是如果输入值不正确就会产生 ValueError | Python Code:
%run appendix-A/simple01.py
Explanation: 阅读笔记
作者:方跃文
Email: fyuewen@gmail.com
时间:始于2017年9月12日, 结束写作于
附录 A
附录A在原书最后,不过我自己为了复习python的一些命令,所以特意将这一部分提前到此。
python 解释器
python解释器通过“一次执行一条语句”的方式运行程序。多加利用Ipython。
通过使用 %run 命令,IPython 会在同个进程中执行指定文件中的代码。例如我在当年目录的下级目录appendix-A中创建了一个simple01.py的程序,它的内容是
```
a = 1
print(a)
```
下面我在jupyter notebook中执行
End of explanation
a = [1, 2, 3]
b = a
a.append(4)
print(a)
print(b)
Explanation: 基础知识
语言语义
python语言的设计特点是重视可读性、简洁性和明确性。
缩进,而不是大括号
python是通过空白符(制表符或者空格)来阻止代码的,不像R、C++等用的是大括号。该书原作者建议使用4空格作为缩进量。
万物皆对象
python语言的一个重要特点就是其对象模型的一致性。任何数值、字符串等都待在自己的“盒子”里,而这些“盒子”就是python的对象。
注释
注释符使用#
函数调用和对象方法的调用
几乎所有的python对象都有一些附属函数(也就是方法),它们可以访问该对象的内部数据。方法的调用时这样写的:
```
obj.some_method(x, y, z)
```
变量和按引用传递
在python中对变量赋值时,你其实是在创建等号右侧对象的一个引用。用实际的例子来说,看看下面这个整数列表:
End of explanation
def append_element(fanglist, element):
fanglist.append(element)
data = [1,2,3]
append_element(data, 5)
print(data)
Explanation: 用小卡特的爹(好吧,这其实是某本python书的作者,但是我实在想不起来名字了)的话就是:相当于把a和b都贴在了[1,2,3]这个上面,上面a.append[4]之后,相当于把又贴在了[1,2,3,4]上面,因为b是贴在a上的,所以b的值也被改变,成为了[1,2,3,4]。
赋值操作assignment也可以被叫做binding绑定,因为这其实是将一个名称和对象进行了捆绑。因此被赋值了得变量名有时也被称作binding variables。
当我们把对象以参数的形式传入函数时候,我们其实传入的只是引用而已,不会发生任何复制。这是python区别于一些其他语言的重要特征之一。
例如下面这个例子
End of explanation
a = 5
type(a)
a = 2.5
type(a)
Explanation: 动态引用,强类型
跟Java和C++相反, python中的对象引用没有与之关联的类型信息。从下面的例子可以看出:
End of explanation
a = 'hello'
b = 5
print('a is %s, and b is %s' % (type(a), type(b)))
c = a + b
Explanation: 变量其实是对象在特定命名空间中的名称而已。对象的类型信息是保存在它自己内部的。如果你因为这样就认为python不是类型语言,那就是错了。
例如我们发现把字符串和整型数据直接相加在python中是不能进行的:
End of explanation
a = 5
isinstance(a, int)
a = 2.2
isinstance(a, int)
Explanation: 从上面可以看出类型必须一致才能进行四则运算。从上面我们可以看出type可以用来查看一个对象的类型;反过来,python中,可以通过isinstance函数来检查一个对象是否是某个数据类型的实例。
End of explanation
a = 5
b = 2.2
isinstance(a,(int,float))
a = 5
b = 2.2
isinstance(b,(int,float))
Explanation: instance 可以接受由类型组成的元祖。如果想检查某个对象的类型是否属于元祖中所制定的类型,可以用如下例子中的方法:
End of explanation
a = 'foo' #定义了对象
a.count #访问对象的方法
#访问对象的方法还可以使用 getattr 函数 (getattr 是 getattribute 的缩写)
getattr(a, 'center')
Explanation: 属性和方法
时间: 2017年9月20日
Python中的对象通常具有属性 attribute,也具有方法。所谓方法,则是与对象有关并且能够访问对象内部数据的函数。它们通过形如 obj.attribute_name 这样的语法进行访问
End of explanation
__inter__
Explanation: "鸭子"类型
(鸭子在这里是哲学隐喻。意思是说:对于一只鸟类动物,不用管它是不是鸭子,只要看它像不像鸭子就可以了。)
一般上,我们并不总是关心对象属于什么类型,但是很多时候我们想知道的是它到底有没有某些方法。比如,只要一个对象实现额迭代器协议(iterator protocol),就可以确认它是可迭代的。对于大部分对象而言,这就意味着它拥有一个
End of explanation
def isiterable(obj):
try:
iter(obj)
return True
except TypeError: #不可迭代
return False
#对于字符串以及大部分Python集合类型,该函数返回True:
print(isiterable([1,2]))
print(isiterable((1,2)))
print(isiterable('1'))
print(isiterable(1))
Explanation: 魔术方法。当然,还有一个更好一些的验证办法,即利用 iter 函数:
End of explanation
x = (1,2) #定义了一个tuple
print(type(x))
print(isinstance(x,list) )#因为x是tuple,不是list,所以返回 False
print(isiterable(x))
if not isinstance(x,list) and isiterable(x):
#if not isiterable(x):
x = list(x)
print(x)
Explanation: 常常在编写需要处理多类型输入的函数时使用这个功能,此外还有一个常见的应用场景是:编写可以接受任何序列(列表、元祖、ndarry)或迭代器的函数。例如我们可以先检查对象是不是列表(或NumPy数组),如果不是,就将其转换成是:
End of explanation
#set up a module
PI = 3.14159
def f(x):
return (x + 2 )
def g(a, b):
return (a + b)
Explanation: 引入 import
2017年9月21日
在python中,module模块就是一个含有函数和变量定义及从其他.py文件引入的此类东西的.py文件。也许这句话很拗口,所以还是让我用例子说话:
我在appedix-A文件夹中创建了一个新python脚本,取名为simple02.py,它的内容是(注意原书使用的是Python 2.7, 我使用的是python 3.6所以代码细节上有微小的差别)
End of explanation
import simple02 as s2
result = s2.f(5)
print(result)
pi = s2.PI
print(pi)
sumvalue = s2.g(2,2)
print(sumvalue)
Explanation: 如果我们想要引入 simple02.py 中定义的变量和函数,我们可以在同目录下创建另一个文件,取名为 simple03.py
End of explanation
#Fang code
import sys
sys.path.append("./appendix-A")  # append the directory that contains simple02.py (appending the .py file itself would not make it importable)
import simple02
result=simple02.f(5)
print(result)
Explanation: 运行它:python simple03.py
(如果是在windows系统的git bash中运行python,需要先设置一下python的路径,方法是在git bash terminal中运行PATH=$PATH:/c/ProgramData/Anaconda3/,然后就可以使用python了, see referene)
或者我们也可以直接在jupyter notebook中运行,但是需要增加一些代码让系统知道module在什么位置。此处我使用了import sys的方法,这种方法参考自名为shuang的github用户:链接
以下是我摘自这位用户的wiki:
为了让我们自己写的模块能够被 Python 解释器知道,需要用sys.path.append("~/sample.py")。其实,在 Python 中,所有模块都被加入到了 sys.path 里面了。用下面的方法可以看到模块所在位置:
import sys
import pprint
pprint.pprint(sys.path)
End of explanation
a + b # 求和
a - b
a*b
a/b
a//b # a 除以 b后向下圆整,丢弃小数部分
a**b # a 的 b次方
a & b # 如果a和b均为True,则结果为True. 对于整数,执行按位与操作 https://zh.wikipedia.org/wiki/%E4%BD%8D%E6%93%8D%E4%BD%9C
a|b #如果a或者b为True, 则结果为True. 对于整数,执行按位与操作
a^b #对于布尔值,如果a或者b为True(但不都为True),则结果为True. 对于整数,执行按位异或操作
a==b #如果a等于b,则结果为True
a !=b #不等,则True
a <= b、a<b # 如果a小于等于(或小于)b,则结果为True
a > b、a >= b # 如果a大于(或大于等于)b,则结果为True
a is b #如果引用a和引用b都指向同一个python对象,则结果为True
a is not b #如果引用a和引用不指向同一个python对象,则结果为True
#举例子来运用这些算符
a = (1,2,3)
b = a
c = 1.2
d = 2.0
e = 2
f = 3
sumtuple = a + b
print (sumtuple)
suminteger = e + f
print(suminteger)
print(c/d)
print(c//d)
print(e/f)
print(e//f)
print(e & f)
print (True & True)
print(e is f)
print (e is not f)
print (True^True)
print (True^False)
if a == b:
print('a=b')
Explanation: 二元运算符和比较运算符
一些常用的运算符如下:
End of explanation
a=b=c=5
d=a+b*c
Explanation: 严格与懒惰
无论使用什么变成语言,都需要了解一下表达式是何时被求值的。
在python中,例如这样一个例子:
End of explanation
a = [1,2,3,4]
a[0]=2
print(a)
Explanation: 在python中,只要这些句子被求值,相关计算就会立即发生。上述的例子中,d的值会被立即设定为30。而在其他一些语言(eg. Haskell), d的值在被使用之前是不会被计算出来的。
在一些特定环境下,我们希望求值计算并不是立即进行的。特别是在密集型计算中,计算负荷重,延迟计算就可以发挥作用。Python 提供了一些技术来实现这样的计算,但是在本书中并没有做介绍。
可变和不可变对象
大部分python对象是可变的(mutable),比如列表、字典、NumPy数组以及大部分用户自定义类型(类)。也就是说,它们所包含的对象或者值是可被修改的。
End of explanation
a_tuple = (2,3,4)
a_tuple[1]=2
Explanation: 另外一些对象,诸如tuple元组和字符串是不可变的immutable。
下面这个例子中,我们试图去改变tuple,但是python告诉我们这样的操作是不被支持的。
End of explanation
3/2
Explanation: 标准类型
Python 有一些用于处理数值数据、字符串、布尔值(True or False)以及日期/时间的内置类型。
这里我们列出主要的标量类型。后面我们将单独讨论日期/时间的处理,因为它们是由标准库中的datetime模块提供的。
标准的Python标量类型
| 类型 | 说明|
| ------| ------ |
| None | Python的'null'值(None只存在一个实例对象) |
| str | 字符串类型。Python 2.x 中只有 ASCII 值,而Python 3中则是Unicode|
| unicode | Unicode 字符串类型 |
| float | 双精度(64)浮点数。注意这里没有专门的doubl类型|
|bool| True或者False|
|int|有符号整数,其最大值由平台系统决定|
|long|任意精度的有符号整数。大的itn值会被自动转换为long|
因为我在这里使用了Markdown输入表格,所以顺便介绍下Markdown中关于表格的一些小细节
语法说明:
|、-、:之间的多余空格会被忽略,不影响布局。
默认标题栏居中对齐,内容居左对齐。
-:表示内容和标题栏居右对齐,:-表示内容和标题栏居左对齐,:-:表示内容和标题栏居中对齐。
内容和|之间的多余空格会被忽略,每行第一个|和最后一个|可以省略,-的数量至少有一个。
数值类型
在 Python 中,用于表述数据的类型主要是 int 和 float.
1)能被保存为int的整数的大小由平台决定,但是Python会把非常大的证书转换为long,它可以存储任意大小的整数。
2)float型可以写成小数形式,也可以写作科学技术法形式如 1e-5
3)python3中整数除法除不尽时就会产生浮点,例如
End of explanation
from __future__ import division
Explanation: 但是在python2.x中,并不能默认产生这个浮点解。要产生浮点解,需要在python2.x的程序中写入
End of explanation
3/float(2)
Explanation: 如果不加这句话,也可以用显示的方式来转化,如
End of explanation
3//2
Explanation: 4) To get C-style integer division (i.e., the fractional part is discarded when the division is not exact), use the floor-division operator //
End of explanation
a=1+2j
b=3-3j
c=a-b
print(c)
Explanation: 5)在python中,复数的虚部用j表示,例如
End of explanation
a='Hi, this is Yue-Wen FANG from NYU SHANGHAI'
b="He is a visiting researcher!"
c='''
##########################
##########################
##########################
'''
print(a)
print(b)
print(c)
Explanation: 字符串
字符串在python中的使用十分灵活。可以用单引号或者双引号。对于大段的字符串则可以使用三重引号。例如
End of explanation
a="He is a visiting researcher!"
b=a.replace('is a visiting researcher','graduated from East China Normal University') #创建了一个新的字符串,b引用了这个新的字符串
print(a)
print(b)
# Python also lets you convert other types to strings with the str function, for example:
a=5
str(a)
print(a)
Explanation: 前面我们已经提到过,字符串在python中是不可以被改变的,即immutable,如果要修改字符串就必须创建一个新的。
End of explanation
a="ALOHA!"
print(a)
list(a)
print(a[:3])
Explanation: 由于字符串其实是一串字符序列,所以可以被当作某种序列类型(类似列表、元祖等)进行处理
End of explanation
s = 'This\\is\\a\\flower'
s1 = r'This\\is\\a\\flower'
print(s,'\t',s1)
Explanation: 如果一个字符串包含很反斜杠,这会很让人懊恼,因为反斜杠常常会将字符进行转移。在python中,为了使得反斜杠就代表反斜杠本身而不去进行转移,可以在字符串前面加上字母r。这个在matplot里面画图添加希腊字母标注时候会显得比较有用。
End of explanation
template = '%s was born in %d'
template % ('yue-wen',1990)
Explanation: Here I want to focus on string formatting (including the newer tools available in Python 3); I already use these day to day, but a brief recap is worthwhile because this comes up constantly in data analysis.
The basic mechanism: a string containing a % followed by one or more format characters marks a spot where a value will be substituted. That sentence is a mouthful, so let's look at an example
End of explanation
# 再看一个例子
a=2e6
b=2.9
print('a is %d and b is %f' %(a, b))
Explanation: Above, %s formats its argument as a string and %d formats it as an integer; those placeholders need actual values, which is why the second line of the code above passes in a tuple of values for them.
End of explanation
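Since the note mentions the newer formatting tools but the cells only show %-style substitution, here is a small sketch of the alternatives (str.format, and f-strings on Python 3.6+); the variable names are just for illustration:
# Sketch: the same substitution with str.format and with an f-string (Python 3.6+)
template2 = '{0} was born in {1}'
print(template2.format('yue-wen', 1990))
name, year = 'yue-wen', 1990
print(f'{name} was born in {year}')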
True and False
True or False
Explanation: 布尔值
比较运算和表达式都可以产生True或者False. 布尔值可以用and和or关键字进行连接
End of explanation
a = [1,2,3]
b = a[:]
c = []
if a:
print('a is %s' % a)
if not b:
print('b is %s' % b)
if not c: # 因为c是空的,所以会当作False处理,not False则为True
print('c is none')
Explanation: ** 几乎所有内置的python类型以及任何定义了__nonzero__魔术方法的类都能在if语句中被解释为True或者False:
End of explanation
a = [1,2,3]
b = a[:]
c = []
print('The bool values of a, b, and c are %s, %s, and %s, respectively' % (bool(a), bool(b), bool(c)))
bool([]), bool([1]), bool('hello'), bool()
Explanation: 如果想知道一个对象会被强制转换成哪个布尔值,可以使用bool函数。
End of explanation
import math
s = math.pi
a = str(s)
print(type(s))
print(type(a))
b = float(a)
int(float(a))
print(type(a))
Explanation: 类型转换
str、 bool、 int 以及 float 等类型也可用作将值转换成该类型的函数. 我觉得,通过下面的例子不难发现,这些函数并不会从本质上改变原始值,而只是创建了新的引用。
End of explanation
a = None
a is None
b = 5
print(b is not None)
if b is None:
print(b)
Explanation: None
None是python的空值类型。如果一个函数无显示的返回值,则隐式返回None
End of explanation
def add_and_maybe_multipy(a, b, c=None):
if a is not None and b is not None and c is None:
return(a+b)
else:
return(a*c)
x = add_and_maybe_multipy(1, 1, None)
y = add_and_maybe_multipy(1, 2, 3)
print('x = %d and y = %d' % (x, y))
Explanation: None 还是函数可选参数的一种常见默认值;(不过,虽然我知道可以这样用,但是我暂时联想不到有什么场景可以用到)
End of explanation
from datetime import datetime, date, time
dt = datetime(2017, 10, 28, 19, 20, 22)
day = dt.day
print(day)
min = dt.minute
print(min)
Explanation: 值得注意的是, None并非是保留关键字,它只是NoneType的一个实例
日期和时间
Python 内置的 datetime 模块提供了 datetime, date 以及 time 等类型。datetime 类型是用的最多的,它合并了保存在 date 和 time 中的信息。
End of explanation
dt.date()
dt.time()
Explanation: 给定 datetime 一个实例。你可以通过调用其date和time方法提取相应的date和time对象
End of explanation
from datetime import datetime, date, time
dt = datetime(2017, 9, 22, 12, 30, 20)
print(dt.day)
datetime.strptime('20170923', '%Y%m%d')
#完整的格式化定义请参考表书本10-2
#Page 417
Explanation: The strftime method is used to format a datetime as a string:
End of explanation
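The cell above actually demonstrates strptime (parsing); a small sketch of strftime itself, using the dt object defined above:
# Sketch: formatting a datetime as a string with strftime
dt.strftime('%Y-%m-%d %H:%M')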
x=-1
if x<0:
print(repr("hello"))
print("%s" % ("hello"))
a = [2, 3, 4, 5, 6, 8, 10]
if type(a)==list:
print(a)
Explanation: 控制流
if, elif 和 else
if语句是常见的控制语句类型。它用于判断一个条件,如果为 True, 则执行紧跟其后的代码块:
End of explanation
x = 10
if x<0:
print('x is negative')
elif x == 0:
print('x = 0')
elif x > 0:
print('x > 0')
else:
print('x is a complex')
Explanation: if 可以和多个 elif 以及一个“滴水不漏”的 else 块(如果所有条件都为False),例如
End of explanation
sequence = [1, 2, None, 4, None, 5]
total = 0
for value in sequence:
if value is None:
continue
total += value
print('total is %f in this cyle' % (total))
Explanation: for 循环
for 循环用于对集合(比如列表或者元祖)或迭代器进行迭代。for 循环的标准语法是:
for value in collection:
continue关键字用于使for循环提前进入下一次迭代,即跳过本次循环中剩余的代码块。我们来看个例子来理解下continue的功能,其功能是对列表中证书求和并跳过None值.
End of explanation
#使用的语法是
#for a,b,c in iterator:
# do something
#我这里只写一个简单的例子,但是这个例子可能还不是上述语法最正统的例子。
# date: 20170923
sequence = [(1,2,3),(4,5,6)]
total = 0
for (i,j,k) in sequence:
print((i,j,k))
coor = (i,j,k)
for i in coor:
i=i+1
print(i, end=' ')
print('\n')
Explanation: 后面我们会看到,如果集合或者迭代器的元素是序列类型,例如元组或者列表,那么还可以方便将这些元素拆散成for语句中的多个变量.
End of explanation
x = 256
total = 0
while x > 0:
if total > 500:
break
total += x
print('total = %d' %total)
x = x//2
print('x = %d' %x)
Explanation: while 循环
while 循环定义了一个条件和一个代码块,只要条件不为False或者循环没有被break显示终止,则代码会一直执行下去:
End of explanation
if x < 0:
print 'negative'
elif x==0:
#TO DO: here you can add codes in future
pass
else:
print 'positive'
Explanation: pss
pass 是 python 中的“空操作”语句。它可以被用在那些没有任何功能的代码块中。由于python是根据空白符划分代码块的,所以它的存在是很有必要的。
End of explanation
def f(x,y,z):
#TODO: realize a function here
pass
Explanation: 在开发一个新功能时,常常会将pass用作代码中的占位符:
End of explanation
float('1.2345')
float('something')
Explanation: Exception handling
Handling Python errors and exceptions gracefully is an important part of building robust programs. (As an aside: the Chinese translator's rendering of that sentence is amusing; it reads very much like a Zhihu answer.)
In data science, many functions only work on specific input types. For example, Python's float function converts a string to a floating-point number, but raises a ValueError if the input is invalid
End of explanation |
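A minimal sketch of handling that ValueError with try/except (written as an assumption about the natural follow-up; the helper name attempt_float is just illustrative):
# Sketch: swallow the ValueError and return the input unchanged when it cannot be parsed
def attempt_float(x):
    try:
        return float(x)
    except ValueError:
        return x

print(attempt_float('1.2345'))
print(attempt_float('something'))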
250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$V(X) = -w (X) + \frac{w L}{2}$$
Where
Step1: $$\Delta(x) = x^{3} \left(- \frac{L w}{12 E I} - \frac{w x_{0}}{6 E I}\right) + x^{2} \left(\frac{L w x_{0}}{4 E I} + \frac{w x_{0}^{2}}{4 E I}\right) + x \left(\frac{L^{3} w}{24 E I} - \frac{L w x_{0}^{2}}{4 E I} - \frac{w x_{0}^{3}}{6 E I}\right) + y_{0} - \frac{L^{3} w x_{0}}{24 E I} + \frac{L w x_{0}^{3}}{12 E I} + \frac{w x^{4}}{24 E I} + \frac{w x_{0}^{4}}{24 E I}$$
$$\Delta(x) = \frac{w}{2 EI}\left(\frac{x^{4}}{12} - \left(\frac{L}{6 } + \frac{x_{0}}{3}\right)x^{3} + \left(\frac{L x_{0}}{2} + \frac{x_{0}^{2}}{2}\right) x^{2} + \left(\frac{L^{3}}{12} - \frac{L x_{0}^{2}}{2} - \frac{x_{0}^{3}}{3}\right) x \right) +\frac{w}{2 E I}\left(- \frac{L^{3} x_{0}}{12} + \frac{L x_{0}^{3}}{6} + \frac{ x_{0}^{4}}{12}\right)+ y_{0}$$
$$\Delta(x) = \frac{w}{2 EI}\left(\frac{x^{4}}{12} - \frac{1}{3}\left(\frac{L}{2} + x_{0}\right)x^{3} + \frac{1}{2}\left(L x_{0} + x_{0}^{2}\right) x^{2} + \left(\frac{L^{3}}{12} - \frac{L x_{0}^{2}}{2} - \frac{x_{0}^{3}}{3}\right) x \right) +\frac{w}{2 E I}\frac{1}{6}\left(- \frac{L^{3} x_{0}}{2} + L x_{0}^{3} + \frac{ x_{0}^{4}}{2}\right)+ y_{0}$$
The following was not used due to another approach being found.
The following calculations were done in an effort to determine what the loacation of the supports needed to be if they did not move in the drawing when the beam deflects. This is necessary due to the simplifiaction made where the ends of the beam centerline do not move when the bem deflects.
Step2: $$\Delta(x)=\frac{1}{24 E I} \left(24 E I y_{0} - L^{3} w x_{0} + 2 L w x_{0}^{3} + w x^{4} - 2 w x^{3} \left(L + 2 x_{0}\right) + 6 w x^{2} x_{0} \left(L + x_{0}\right) - w x \left(- L^{3} + 6 L x_{0}^{2} + 4 x_{0}^{3}\right) + w x_{0}^{4}\right)$$
$$\theta(x) =\frac{1}{24 E I} \left(4 w x^{3} - 6 w x^{2} \left(L + 2 x_{0}\right) + 12 w x x_{0} \left(L + x_{0}\right) - w \left(- L^{3} + 6 L x_{0}^{2} + 4 x_{0}^{3}\right) \right)$$
$$\theta(x) = \frac{1}{24 E I} \left(4 w x^{3} - 6 w x^{2} \left(L + 2 x_{0}\right) + 12 w x x_{0} \left(L + x_{0}\right) - w \left(- L^{3} + 6 L x_{0}^{2} + 4 x_{0}^{3}\right) \right)$$
$$y_s = y_0 +(y - y_0) +\sqrt{\frac{t^2}{4} - (x-x_0)^2}$$
$$y_s = \frac{-1}{\theta(x)}(x_0-x) + y$$
$$ 0 = y_0 +(y - y_0) +\sqrt{\frac{t^2}{4} - (x-x_0)^2} - \left( \frac{-1}{\theta(x)}(x_0-x) + y \right)$$
$$ 0 = y_0 +y - y_0 +\sqrt{\frac{t^2}{4} - (x-x_0)^2} + \frac{1}{\theta(x)}(x_0-x) - y $$
$$ 0 = \sqrt{\frac{t^2}{4} - (x-x_0)^2} + \frac{x_0-x}{\theta(x)} $$ | Python Code:
from sympy import symbols, collect, expand, latex, simplify
D, x, x_0, E, I, w, y_0, L, y, t = symbols('Delta x x_0 E I w y_0 L y t')
from sympy import init_printing
init_printing(use_unicode=True)
D = w/(2*E*I)*((x-x_0)**4/12-L*(x-x_0)**3/6+L**3/12*(x-x_0)+2*y_0*E*I/w)
D
# use a new name here so the Young's modulus symbol E is not shadowed (E is still used as a symbol below)
D_expanded = expand(D)
D_expanded
F = collect(D_expanded, x)
F
latex(F)
Explanation: $$V(X) = -w (X) + \frac{w L}{2}$$
Where:
$X = x-x_0$
$x$ - is relative to the end of the beam
$x_0$ - is the absolute position
$$V(X) = -w X + \frac{w L}{2}$$
$$M(X) = -\frac{w}{2} X^2 + \frac{w L}{2} X$$
$$M(X) = \frac{w}{2}\left(-X^2 + L X\right)$$
$$\theta(X) = \frac{-w}{2 E I}\left(-\frac{X^3}{3} +L \frac{X^2}{2} - C\right)$$
$$\theta(X) = \frac{w}{2 E I}\left(\frac{X^3}{3} - L \frac{X^2}{2} + C\right)$$
$$\Delta(X) = \frac{w}{2 E I}\left(\frac{X^4}{12} - L \frac{X^3}{6} +C X + D\right)$$
$$\Delta(0) = \frac{w}{2 E I}\left(\frac{0^4}{12} - L \frac{0^3}{6} +C\cdot 0 + D\right) = y_0$$
$$\frac{w}{2 E I} D = y_0$$
$$\text{therefore:}\quad D = \frac{ y_0 2 E I}{w}$$
$$\Delta(L) = \frac{w}{2 E I}\left(\frac{L^4}{12} - L \frac{L^3}{6} +C \cdot L + \frac{2 y_0 E I}{w}\right) = y_0$$
$$\frac{w}{2 E I}\left(\frac{L^4}{12} - L \frac{L^3}{6} +C \cdot L\right) + y_0 = y_0$$
$$ \frac{w}{2 E I}\left(\frac{L^4}{12} - L \frac{L^3}{6} +C \cdot L \right) = 0$$
$$ \frac{L^4}{12} - L \frac{L^3}{6} +C \cdot L = 0$$
$$ \frac{L^3}{12} - \frac{L^3}{6} +C = 0$$
$$C = \frac{L^3}{6} - \frac{L^3}{12}$$
$$C = \frac{L^3}{12}$$
$$\Delta(X) = \frac{w}{2 E I}\left(\frac{X^4}{12} - L \frac{X^3}{6} +\frac{L^3}{12} X + \frac{2 y_0 E I}{w}\right)$$
$$X = x-x_0$$
$$\Delta(x) = \frac{w}{2 E I}\left(\frac{(x-x_0)^4}{12} - L \frac{(x-x_0)^3}{6} +\frac{L^3}{12} (x-x_0) + \frac{2 y_0 E I}{w}\right)$$
End of explanation
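An optional sanity check (my addition): substituting the two support positions into F should recover y_0 at both ends, confirming the constants C and D derived above.
# check the boundary conditions Delta(x_0) = y_0 and Delta(x_0 + L) = y_0
simplify(F.subs(x, x_0))      # expected to reduce to y_0
simplify(F.subs(x, x_0 + L))  # expected to reduce to y_0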
G = simplify(F)
G
Explanation: $$\Delta(x) = x^{3} \left(- \frac{L w}{12 E I} - \frac{w x_{0}}{6 E I}\right) + x^{2} \left(\frac{L w x_{0}}{4 E I} + \frac{w x_{0}^{2}}{4 E I}\right) + x \left(\frac{L^{3} w}{24 E I} - \frac{L w x_{0}^{2}}{4 E I} - \frac{w x_{0}^{3}}{6 E I}\right) + y_{0} - \frac{L^{3} w x_{0}}{24 E I} + \frac{L w x_{0}^{3}}{12 E I} + \frac{w x^{4}}{24 E I} + \frac{w x_{0}^{4}}{24 E I}$$
$$\Delta(x) = \frac{w}{2 EI}\left(\frac{x^{4}}{12} - \left(\frac{L}{6 } + \frac{x_{0}}{3}\right)x^{3} + \left(\frac{L x_{0}}{2} + \frac{x_{0}^{2}}{2}\right) x^{2} + \left(\frac{L^{3}}{12} - \frac{L x_{0}^{2}}{2} - \frac{x_{0}^{3}}{3}\right) x \right) +\frac{w}{2 E I}\left(- \frac{L^{3} x_{0}}{12} + \frac{L x_{0}^{3}}{6} + \frac{ x_{0}^{4}}{12}\right)+ y_{0}$$
$$\Delta(x) = \frac{w}{2 EI}\left(\frac{x^{4}}{12} - \frac{1}{3}\left(\frac{L}{2} + x_{0}\right)x^{3} + \frac{1}{2}\left(L x_{0} + x_{0}^{2}\right) x^{2} + \left(\frac{L^{3}}{12} - \frac{L x_{0}^{2}}{2} - \frac{x_{0}^{3}}{3}\right) x \right) +\frac{w}{2 E I}\frac{1}{6}\left(- \frac{L^{3} x_{0}}{2} + L x_{0}^{3} + \frac{ x_{0}^{4}}{2}\right)+ y_{0}$$
The following was not used due to another approach being found.
The following calculations were done in an effort to determine what the location of the supports needed to be if they did not move in the drawing when the beam deflects. This is necessary due to the simplification made where the ends of the beam centerline do not move when the beam deflects.
End of explanation
from sympy import diff,sqrt, roots
T = diff(G,x)
T
H = sqrt(t**2/4-(x-x_0)**2)*T #+ (x-x_0)
J = H**2
K = collect(expand((J - (x_0-x)**2)/(w**2/(12*E**2*I**2))),x)
K
c = -12*E**2/w**2*I**2*x_0**2 + L**6*t**2/192 - L**6*x_0**2/48 -L**4*t**2/16*x_0**2\
+L**4*x_0**4/4 - L**3*t**2/24*x_0**3 + L**3*x_0**5/6 + 3*L**2/16*t**2*x_0**4 \
-3*L**2/4*x_0**6 + L*t**2/4*x_0**5 - L*x_0**7 + t**2*x_0**6/12 - x_0**8/3
K-c
Explanation: $$\Delta(x)=\frac{1}{24 E I} \left(24 E I y_{0} - L^{3} w x_{0} + 2 L w x_{0}^{3} + w x^{4} - 2 w x^{3} \left(L + 2 x_{0}\right) + 6 w x^{2} x_{0} \left(L + x_{0}\right) - w x \left(- L^{3} + 6 L x_{0}^{2} + 4 x_{0}^{3}\right) + w x_{0}^{4}\right)$$
$$\theta(x) =\frac{1}{24 E I} \left(4 w x^{3} - 6 w x^{2} \left(L + 2 x_{0}\right) + 12 w x x_{0} \left(L + x_{0}\right) - w \left(- L^{3} + 6 L x_{0}^{2} + 4 x_{0}^{3}\right) \right)$$
$$y_s = y_0 +(y - y_0) +\sqrt{\frac{t^2}{4} - (x-x_0)^2}$$
$$y_s = \frac{-1}{\theta(x)}(x_0-x) + y$$
$$ 0 = y_0 +(y - y_0) +\sqrt{\frac{t^2}{4} - (x-x_0)^2} - \left( \frac{-1}{\theta(x)}(x_0-x) + y \right)$$
$$ 0 = y_0 +y - y_0 +\sqrt{\frac{t^2}{4} - (x-x_0)^2} + \frac{1}{\theta(x)}(x_0-x) - y $$
$$ 0 = \sqrt{\frac{t^2}{4} - (x-x_0)^2} + \frac{x_0-x}{\theta(x)} $$
End of explanation |
251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy
python on its own is pretty slow at matrix math (plain loops run in the interpreter), so numpy was invented to fix that up. Matrix math and vectors are the backbone of neural nets and much else, so learning numpy is important.
Now, you can learn actual linear algebra from these links, so this is a quick guide on how to use numpy to do the maths for you.
the convention is to import numpy as np
Step1: Mostly we deal with vectors and matrixes
Step2: Accessing elements is similar to how python lists work
Step3: Now arrays can have many dimensions, which is where the math gets hard to visualize.
Step4: Element-wise operations
The power of matrices comes from being able to quickly do the same math on every element in the matrix.
Step5: Matrix multiplication
or dot product. This means taking the rows of the first matrix and the columns of the second matrix, multiplying the ith elements of the row/column together and adding them up.
The number of columns in the left matrix must equal the number of rows in the right matrix.
The answer matrix always has the same number of rows as the left matrix and the same number of columns as the right matrix.
Order matters. Multiplying A•B is not the same as multiplying B•A.
Data in the left matrix should be arranged as rows, while data in the right matrix should be arranged as columns.
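To make these shape rules concrete, here is a small illustrative sketch (the array values are arbitrary): a (1, 2) array times a (2, 3) array works, while reversing the order raises an error because the inner dimensions no longer match.
import numpy as np
a = np.array([[1, 2]])                # shape (1, 2)
b = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
print(a.dot(b).shape)                 # (1, 3) -- inner dimensions match (2 == 2)
try:
    b.dot(a)                          # (2, 3) dot (1, 2) -- inner dimensions do not match (3 != 1)
except ValueError as err:
    print('shapes not aligned:', err)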
Step6: Transpose
rearranges an array so the rows become columns | Python Code:
import numpy as np
# the below line outputs all variables, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Explanation: Numpy
python on its own is pretty slow at matrix math (plain loops run in the interpreter), so numpy was invented to fix that up. Matrix math and vectors are the backbone of neural nets and much else, so learning numpy is important.
Now, you can learn actual linear algebra from these links, so this is a quick guide on how to use numpy to do the maths for you.
the convention is to import numpy as np:
End of explanation
v = np.array([1,2,3])
v.shape
Explanation: Mostly we deal with vectors and matrixes
End of explanation
v[1:]
m = np.array([[1,2,3], [4,5,6], [7,8,9]])
m.shape
m[2][2]
m[2,2]
Explanation: Accessing elements is similar to how python lists work:
End of explanation
t = np.array([[[[1],[2]],[[3],[4]],[[5],[6]]],[[[7],[8]],\
[[9],[10]],[[11],[12]]],[[[13],[14]],[[15],[16]],[[17],[17]]]])
t.shape
t[2][1][1][0]
Explanation: Now arrays can have many dimensions, which is where the math gets hard to visualize.
End of explanation
print(v)
print(v*2)
print(v+10)
v*3 - v
Explanation: Element-wise operations
The power of matrices comes from being able to quickly do the same math on every element in the matrix.
End of explanation
a = np.array([[1,2]])
b = np.array([[1,2,3], [1,2,3]])
a.dot(b)
Explanation: Matrix multiplication
or dot product. This means taking the rows of the first matrix and the columns of the second matrix, multiplying the ith elements of the row/column together and adding them up.
The number of columns in the left matrix must equal the number of rows in the right matrix.
The answer matrix always has the same number of rows as the left matrix and the same number of columns as the right matrix.
Order matters. Multiplying A•B is not the same as multiplying B•A.
Data in the left matrix should be arranged as rows, while data in the right matrix should be arranged as columns.
End of explanation
m = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
m
m.T
inputs = np.array([[-0.27, 0.45, 0.64, 0.31]])
inputs.shape
inputs
weights = np.array([[0.02, 0.001, -0.03, 0.036], \
[0.04, -0.003, 0.025, 0.009], [0.012, -0.045, 0.28, -0.067]])
weights.shape
weights
weights.T.shape
np.matmul(inputs, weights)
np.matmul(inputs, weights.T)
np.matmul(weights, inputs.T)
inputs.min()
inputs.shape[1]
inputs.mean()
Explanation: Transpose
rearranges an array so the rows become columns
End of explanation |
252 | Given the following text description, write Python code to implement the functionality described.
Description:
Check if three straight lines are concurrent or not
Return true if the three lines are concurrent, else false ; Driver program
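The condition being tested: three lines written as a1*x + b1*y + c1 = 0, a2*x + b2*y + c2 = 0 and a3*x + b3*y + c3 = 0 pass through a single point exactly when the determinant of their coefficients vanishes (assuming no two of the lines are parallel). The function below is the expansion of that determinant along its third row:
$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = a_3(b_1 c_2 - b_2 c_1) + b_3(c_1 a_2 - c_2 a_1) + c_3(a_1 b_2 - a_2 b_1) = 0$$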
| Python Code:
def checkConcurrent(a1, b1, c1, a2, b2, c2, a3, b3, c3):
    return (a3 * (b1 * c2 - b2 * c1) +
            b3 * (c1 * a2 - c2 * a1) +
            c3 * (a1 * b2 - a2 * b1) == 0)

a1, b1, c1 = 2, -3, 5
a2, b2, c2 = 3, 4, -7
a3, b3, c3 = 9, -5, 8

if checkConcurrent(a1, b1, c1, a2, b2, c2, a3, b3, c3):
    print("Yes")
else:
    print("No")
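As an optional cross-check (an illustrative sketch, not part of the original task), the same condition can be evaluated with numpy's determinant; for the sample coefficients above it is zero, so the three lines are concurrent.
import numpy as np
coeffs = np.array([[2, -3, 5],
                   [3, 4, -7],
                   [9, -5, 8]], dtype=float)
# the determinant of the coefficient matrix is (numerically) zero for concurrent lines
print(abs(np.linalg.det(coeffs)) < 1e-9)   # True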
|
253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyOpenCGA basic alignment and coverage usage
[NOTE] The server methods used by pyopencga client are defined in the following swagger URL
Step1: Setting credentials for LogIn
Credentials
Please add the credentials for opencga login into a file in json format and read them from there.
i.e
Step2: Creating ConfigClient for server connection configuration
Step3: LogIn with user credentials | Python Code:
# Initialize PYTHONPATH for pyopencga
import sys
import os
from pprint import pprint
cwd = os.getcwd()
print("current_dir: ...."+cwd[-10:])
base_modules_dir = os.path.dirname(cwd)
print("base_modules_dir: ...."+base_modules_dir[-10:])
sys.path.append(base_modules_dir)
from pyopencga.opencga_config import ConfigClient
from pyopencga.opencga_client import OpenCGAClient
import json
Explanation: pyOpenCGA basic alignment and coverage usage
[NOTE] The server methods used by pyopencga client are defined in the following swagger URL:
- http://bioinfodev.hpc.cam.ac.uk/opencga-test/webservices
[NOTE] Current implemented methods are registered at the following spreadsheet:
- https://docs.google.com/spreadsheets/d/1QpU9yl3UTneqwRqFX_WAqCiCfZBk5eU-4E3K-WVvuoc/edit?usp=sharing
Loading pyOpenCGA
End of explanation
## Reading user config/credentials to connect to server
user_config_json = "./__user_config.json"
with open(user_config_json,"r") as f:
user_credentials = json.loads(f.read())
print('User: {}***'.format(user_credentials["user"][:3]))
user = user_credentials["user"]
passwd = user_credentials["pwd"]
Explanation: Setting credentials for LogIn
Credentials
Please add the credentials for opencga login into a file in json format and read them from there.
i.e:
file: __user_config.json
file_content: {"user":"xxx","pwd":"yyy"}
End of explanation
## Creating ConfigClient
host = 'http://bioinfodev.hpc.cam.ac.uk/opencga-test'
cc = ConfigClient()
config_dict = cc.get_basic_config_dict(host)
print("Config information:\n",config_dict)
Explanation: Creating ConfigClient for server connection configuration
End of explanation
oc = OpenCGAClient(configuration=config_dict,
user=user,
pwd=passwd)
## Getting the session id / token
token = oc.session_id
print("Session token:\n{}...".format(token[:10]))
oc = OpenCGAClient(configuration=config_dict,
session_id=token)
Explanation: LogIn with user credentials
End of explanation |
254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intermediate Topics in Python
Author
Step1: The eagle-eyed amongst you may notice the quotes around each number - but we'll address that in a bit.
Methods and Attributes
Step2: Our txt file missed a student! Let's say she got an 89 on the test. How can we add her score to our list?
Step3: Looks like what we want, but let's check on insert and extend just to be sure -
Step4: It's possible to define a custom set of methods and attributes in order to create new classes of objects. We won't go into much depth about these here, though. (See Lina's lesson on classes in Python.)
Step5: Apparently not. Perhaps a for loop is the way to go? Let's test that out.
Step6: Whoa, that's not right!
Step7: Looks like Python imported the data as a list of strings, save for that one 89% we appended.
We need to find a way to change those strings to integers instead, and then multiply them by 0.4.
Step8: For large operations, list comprehensions are also faster than using list.append. When the latter is a part of a loop, Python looks up the list in memory at every single iteration, which can really slow things down for large lists. List comprehensions, on the other hand, do not require the look up operation. They're also far more visually compact while conveying the same amount of information.
Step9: Notice how the value we appended is gone. This is because we reassigned the list comprehension to scores above. Unlike append, list comprehensions do not alter an object in place - they are an object in their own right. It can sometimes be useful to print out a list comprehension to stdout before assigning it.
For future reference, we could use an assertion to make sure we did things right. We'll talk about that further down.
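One possible check of that kind, as a sketch (the exact condition depends on what you expect the list to hold after the conversion):
assert all(isinstance(num, float) for num in scores), 'expected every score to be numeric after the conversion'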
Step10: Control Flow Tools - Pass, Continue, Break
So you'll notice that in that sample loop before the list comprehension, we used a 'pass' statement. Here it is again
Step11: pass is part of a family of useful operators within Python that allow for more precise control flow operations.
Step12: Assertions ('Sanity Checks')
Let's say the following function was in the middle of a long script that runs input data through a series of operations.
Step13: What happens if something goes wrong, and a pair of strings are somehow given as input?
Step14: That output could be really bad for the downstream parts of our script if they're expecting numerical input!
Step15: Assertions do stop a script in its tracks though. What if we want Python to ignore an error and move forward?
try/except | Python Code:
# let's make sure of our working directory
import os
os.chdir('/Users/Ahmed/Desktop')
with open('scores.txt', 'r') as file:
scores = file.read().split(',')
print(scores)
Explanation: Intermediate Topics in Python
Author: Ahmed Hasan
Made for U of T Coders, to be delivered on 05/04/2017
Contents
Reading/Writing Files
Methods/Attributes
List Comprehensions
Control Flow Tools - pass, continue, break
assert() - if we have time
Try/except - if we have time
What I'm assuming you know
Everything in Madeleine's Intro to Python lesson! I tried to have this lesson pick up right where the intro material left off. This includes:
- the interpreter
- variables
- lists
- indexing/slicing
- if statements
- for loops (and loop syntax in general)
- functions
- very basic NumPy, Pandas, matplotlib
Reading (and Writing) Files
Let's say we have 50 students who wrote a test. This test was out of 40 points, and their results - in percentages - are stored in a text file called scores.txt. We want to import these into a Python list.
End of explanation
len(scores)
Explanation: The eagle-eyed amongst you may notice the quotes around each number - but we'll address that in a bit.
Methods and Attributes
End of explanation
# should we do this?
scoresfixed = scores + [89]
# or should we open the file in Notepad/TextEdit and manually put the 89 in?
# Python contains built-in methods and attributes for each object type
dir(scores)
help(scores.append)
Explanation: Our txt file missed a student! Let's say she got an 89 on the test. How can we add her score to our list?
End of explanation
help(scores.insert)
help(scores.extend)
# append it is!
scores.append(89)
print(scores)
Explanation: Looks like what we want, but let's check on insert and extend just to be sure -
End of explanation
dir(scores)
Explanation: It's possible to define a custom set of methods and attributes in order to create new classes of objects. We won't go into much depth about these here, though. (See: Lina's lesson on classes in Python)
List Comprehensions
Let's say we want to get the original scores, out of 40. Is there a built-in method we could use in order to do the same thing to every item in a list?
End of explanation
for i in range(len(scores)):
scores[i] = scores[i] * 0.4
print(scores)
Explanation: Apparently not. Perhaps a for loop is the way to go? Let's test that out.
End of explanation
print(scores)
Explanation: Whoa, that's not right!
End of explanation
# this is one way?
for num in scores:
if isinstance(num, str):
num = int(num)
num = num * 0.4
else:
pass
# or - we could use a list comprehension!
scores = [int(num)*0.4 for num in scores]
# conditionals can also be included in list comprehensions - this is valid syntax
scores = [int(num)*0.4 for num in scores if isinstance(num, str)]
# else clauses in list comprehensions
scores = [int(num)*0.4 if isinstance(num, str) else num*0.4 for num in scores]
Explanation: Looks like Python imported the data as a list of strings, save for that one 89% we appended.
We need to find a way to change those strings to integers instead, and then multiply them by 0.4.
End of explanation
print(scores)
len(scores)
Explanation: For large operations, list comprehensions are also faster than using list.append. When the latter is a part of a loop, Python looks up the list in memory at every single iteration, which can really slow things down for large lists. List comprehensions, on the other hand, do not require the look up operation. They're also far more visually compact while conveying the same amount of information.
End of explanation
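A rough micro-benchmark along these lines (purely illustrative -- the absolute numbers depend on the machine) makes the gap visible:
import timeit
append_loop = 'out = []\nfor i in range(1000):\n    out.append(i * 0.4)'
comprehension = 'out = [i * 0.4 for i in range(1000)]'
print(timeit.timeit(append_loop, number=2000))    # append inside a for loop
print(timeit.timeit(comprehension, number=2000))  # equivalent list comprehension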
# putting it all together - the right way, all along!
with open('scores.txt','r') as f:
scores = [int(num) for num in f.read().split(',')]
print(scores)
Explanation: Notice how the value we appended is gone. This is because we reassigned the list comprehension to scores above. Unlike append, list comprehensions do not alter an object in place - they are an object in their own right. It can sometimes be useful to print out a list comprehension to stdout before assigning it.
For future reference, we could use an assertion to make sure we did things right. We'll talk about that further down.
End of explanation
for num in scores:
if isinstance(num, str):
num = int(num)
num = num * 0.4
else:
pass
Explanation: Control Flow Tools - Pass, Continue, Break
So you'll notice that in that sample loop before the list comprehension, we used a 'pass' statement. Here it is again:
End of explanation
mylist = [1,2,3,4,5,6,7]
# pass is a placeholder
for num in mylist:
if num%2 == 0:
pass
print(num)
# break will exit the loop
for num in mylist:
if num%2 == 0:
break
print(num)
# continue will immediately jump to the next iteration
for num in mylist:
if num%2 == 0:
continue
print(num)
mylist.append('hello')
print(mylist)
# pass is useful to make certain conditions explicit and take care of outside cases
# while it's not 'necessary' here for the loop to function, it makes the operation clearer
for i in range(len(mylist)):
if isinstance(mylist[i], int):
mylist[i] = mylist[i] * 0.5
elif isinstance(mylist[i], str):
pass
print(mylist)
Explanation: pass is part of a family of useful operators within Python that allow for more precise control flow operations.
End of explanation
def sumdoubles(x,y):
'''Multiplies inputs by 2, and returns their sum.'''
out = x*2 + y*2
return out
sumdoubles(2,3)
Explanation: Assertions ('Sanity Checks')
Let's say the following function was in the middle of a long script that runs input data through a series of operations.
End of explanation
sumdoubles('hello', 'how goes?')
Explanation: What happens if something goes wrong, and a pair of strings are somehow given as input?
End of explanation
# assertions are handy to avoid the propagation of errors
def sumdoubles(x,y):
assert(not isinstance(x, str) and not isinstance(y, str))
out = x*2 + y*2
return out
sumdoubles(2,3)
sumdoubles(4,6.5)
sumdoubles('hey', 'strings are cool')
Explanation: That output could be really bad for the downstream parts of our script if they're expecting numerical input!
End of explanation
newlist = [1,2,3,'a',4,5]
for item in newlist:
print(item * 0.5)
# but let's say we just want to ignore the string, instead of ceasing the operation
for item in newlist:
try:
print(item * 0.5)
except TypeError:
print('ignored', item)
Explanation: Assertions do stop a script in its tracks though. What if we want Python to ignore an error and move forward?
try/except
End of explanation |
255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lmec', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-LMEC
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmopsheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Find frequencies that make only $(\hat{a}^2\hat{b}^2+\hat{a}\hat{b}\hat{d}^2+\hat{d}^4)\hat{c}^\dagger +h.c.$ resonant in the 5th order expansion of $\sin(\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.)$
Import the "Hamiltonian-through-Nonlinearities Engineering" module (it can be installed from PyPI using pip).
Step1: Set the letters you want to use for annihilation operators (4 modes in our case).
Step2: Write down (or somehow generate) a list of the monomials that you want to be resonant.
Step3: The convention for typing in or printing out is
Step4: Now generate the terms that you want to be off resonant
Step5: Generate the list of 3rd and 5th order terms in the expansion of $\sin(\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.)$.
Step6: Filter out of the list
Step7: How many terms are left.
Step8: Finally, solve the constraints | Python Code:
import hamnonlineng as hnle
Explanation: Find frequencies that make only $(\hat{a}^2\hat{b}^2+\hat{a}\hat{b}\hat{d}^2+\hat{d}^4)\hat{c}^\dagger +h.c.$ resonant in the 5th order expansion of $\sin(\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.)$
Import the "Hamiltonian-through-Nonlinearities Engineering" module (it can be installed from PyPI using pip).
End of explanation
letters = 'abcd'
Explanation: Set the letters you want to use for annihilation operators (4 modes in our case).
End of explanation
resonant = [hnle.Monomial(1,'aabbC'), # First argument is the constant real factor in front of the operator
hnle.Monomial(1,'abddC'), # Second argument is the string representing the operators
hnle.Monomial(1,'Cdddd')]
Explanation: Write down (or somehow generate) a list of the monomials that you want to be resonant.
End of explanation
resonant
Explanation: The convention for typing in or printing out is:
- lower 'a' represents $\hat{a}$
- capital 'A' represents $\hat{a}^\dagger$
- the hermitian conjugate is implicit, i.e. Monomial(1,'Aab') is $\hat{a}^\dagger\hat{a}\hat{b}+\hat{a}^\dagger\hat{a}\hat{b}^\dagger$
- the library sorts the expression to make it "canonical", and given that the presence of a hermitian conjugate is implicit, each monomial might print out as its conjugate, i.e. there is no difference between Monomial(1,'a') and Monomial(1,'A')
End of explanation
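As a quick sanity check of this convention (an illustrative snippet that only uses the Monomial constructor already shown above; exact printing may vary between hamnonlineng versions), the two spellings of the same implicit-conjugate monomial can be constructed and printed side by side:
m1 = hnle.Monomial(1,'a')   # stands for a + a-dagger
m2 = hnle.Monomial(1,'A')   # the conjugate spelling of the same monomial
print(m1)
print(m2)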
op_sum = hnle.operator_sum(letters)
op_sum
Explanation: Now generate the terms that you want to be off resonant: start with the sum $\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.$.
End of explanation
sine_exp = hnle.sin_terms(op_sum, 3) + hnle.sin_terms(op_sum, 5)
sine_exp_list = sine_exp.m
Explanation: Generate the list of 3rd and 5th order terms in the expansion of $\sin(\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.)$.
End of explanation
off_resonant = hnle.drop_single_mode(
hnle.drop_definitely_offresonant(
hnle.drop_matching(sine_exp.m, resonant)))
off_resonant = list(off_resonant)
Explanation: Filter out of the list:
- terms that match the terms we want to be resonant
- terms that are only annihilation or only creation operators (definitely off-resonant)
- terms that contain only one single mode
End of explanation
len(off_resonant)
Explanation: How many terms are left.
End of explanation
res = hnle.head_and_count(
hnle.solve_constraints_gecode(resonant, off_resonant, letters, maxfreq=20))
Explanation: Finally, solve the constraints:
End of explanation |
257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, import relevant modules and configure matplotlib to generate plots inline. The submodule scipy.integrate contains a number of routines to compute integrals numerically.
Step1: Quadrature using scipy
In the following example, an explicit integrand (notably $\cos^2(e^x)$) is defined as a lambda function that is passed to the quadrature routine scipy.integrate.quad for evaluation. The quadrature routine adaptively chooses quadrature points based on an error estimator; where the error estimate for the integral over a given subinterval is too large, the subinterval is subdivided recursively and the approximate integral recomputed on each subinterval until the total error estimate is below a prescribed tolerance. Notice that this quadrature routine expects a function $f$ as input that can be evaluated at any quadrature point in the domain. Given the oscillatory nature of the integrand toward the right end of the interval, more quadrature points are required there.
Step2: As another alternative, the routine scipy.integrate.trapz accepts numerical data in arrays as inputs and computes a trapezoidal approximation of a different integral. An analytical integrand is chosen and defined below as a lambda function for comparison. The numerical data is obtained by randomly drawing numbers from a uniform distribution and using those values (scaled and translated) as quadrature points from which to build up a trapezoidal approximation of an integral. | Python Code:
import numpy as np
import matplotlib.pylab as mpl
%matplotlib inline
Explanation: First, import relevant modules and configure matplotlib to generate plots inline. The submodule scipy.integrate contains a number of routines to compute integrals numerically.
End of explanation
fun = lambda x: np.cos(np.exp(x))**2 # define as a one-line "lambda" function
a = 0
b = 3
x = np.linspace(a,b,200)
y = fun(x)
mpl.plot(x,y)
mpl.show()
print fun(b)
from scipy.integrate import quad
(I_val, err) = quad(fun, 0, 3) # Notice return value is a Python *tuple* (here an ordered pair)
print "Approximate value of integral is %15.13g with error estimate %15.10g" % (I_val,err)
Explanation: Quadrature using scipy
In the following example, an explicit integrand (notably $\cos^2(e^x)$) is defined as a lambda function that is passed to the quadrature routine scipy.integrate.quad for evaluation. The quadrature routine adaptively chooses quadrature points based on an error estimator; where the error estimate for the integral over a given subinterval is too large, the subinterval is subdivided recursively and the approximate integral recomputed on each subinterval until the total error estimate is below a prescribed tolerance. Notice that this quadrature routine expects a function $f$ as input that can be evaluated at any quadrature point in the domain. Given the oscillatory nature of the integrand toward the right end of the interval, more quadrature points are required there.
End of explanation
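To see the adaptivity at work, one can split the interval and compare how much effort quad spends on each half (an illustrative aside; passing full_output=1 makes quad also return an info dictionary whose 'neval' entry counts integrand evaluations):
(I_left, err_left, info_left) = quad(fun, 0, 1.5, full_output=1)    # smooth part
(I_right, err_right, info_right) = quad(fun, 1.5, 3, full_output=1)  # oscillatory part
print("evaluations on [0, 1.5]: %d" % info_left['neval'])
print("evaluations on [1.5, 3]: %d" % info_right['neval'])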
(a,b) = (0,5) # Observe use of tuple for multiple assignment statements
N = 5
xdat = np.unique(np.sort(np.random.rand(N) * (b-a) + a)) # random ordinates uniformly sampled from a to b
fun = lambda x: np.sin(x) * np.cos(x**2) + 1
ydat = fun(xdat)
# Generate plot on slightly wider axes
delta = 0.1*(b-a)
x = np.linspace(a-delta,b+delta,301)
y = fun(x)
mpl.plot(x,y,'r-',xdat,ydat,'bo',xdat,0*xdat,'bx')
mpl.plot([x.min(),x.max()],[0.,0.],'k-')
mpl.xlim((x.min(),x.max()))
mpl.ylim((y.min()-delta,y.max()+delta))
mpl.show()
print "length = %s" % len(xdat)
(fun_solution, err_fun) = quad(fun, 0, 5) # quad requires a callable function as an argument
from scipy.integrate import trapz
dat_solution = trapz(ydat, xdat) # trapz requires numerical data
print 'Solution obtained by adaptive quadrature: %s' % str(fun_solution)
print 'Trapezoidal solution obtained using random samples: %s' % str(dat_solution)
print "Relative error: %s" % str(abs(fun_solution-dat_solution)/fun_solution)
Explanation: As another alternative, the routine scipy.integrate.trapz accepts numerical data in arrays as inputs and computes a trapezoidal approximation of a different integral. An analytical integrand is chosen and defined below as a lambda function for comparison. The numerical data is obtained by randomly drawing numbers from a uniform distribution and using those values (scaled and translated) as quadrature points from which to build up a trapezoidal approximation of an integral.
End of explanation |
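As a rough convergence check (an illustrative addition reusing fun, a, b and the adaptive result fun_solution from above), the trapezoidal estimate improves quickly as the number of uniformly spaced samples grows:
for N in (5, 50, 500):
    xg = np.linspace(a, b, N)
    est = trapz(fun(xg), xg)
    print("N = %4d   trapz = %10.6f   rel. error = %.2e" % (N, est, abs(fun_solution - est) / abs(fun_solution)))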
258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises Electric Machinery Fundamentals
Chapter 4
Problem 4-7
Step1: Description
A 100-MVA, 14.4-kV, 0.8-PF-lagging, 50-Hz, two-pole, Y-connected synchronous generator has a per-unit synchronous reactance of 1.1 and a per-unit armature resistance of 0.011.
Step2: (a)
What are its synchronous reactance and armature resistance in ohms?
(b)
What is the magnitude of the internal generated voltage $E_A$ at the rated conditions?
What is its torque angle $\delta$ at these conditions?
(c)
Ignoring losses in this generator
What torque must be applied to its shaft by the prime mover at full load?
SOLUTION
The base phase voltage of this generator is
Step3: Therefore, the base impedance of the generator is
Step4: (b)
The generator impedance in ohms are
Step5: (b)
The rated armature current is
Step6: The power factor is 0.8 lagging, so
Step7: It is very often the case that, especially in larger machines, the armature resistance $R_A$ is simply neglected and one calculates the armature voltage simply as
Step8: Therefore, the magnitude of the internal generated voltage $E_A$ is
Step9: V, and the torque angle $\delta$ is
Step10: degrees.
(c)
Ignoring losses, the input power would equal the output power. Since
Step11: and,
$$n_\text{sync} = \frac{120f_{se}}{P}$$
Step12: the applied torque would be | Python Code:
%pylab notebook
%precision 1
Explanation: Exercises Electric Machinery Fundamentals
Chapter 4
Problem 4-7
End of explanation
Vl = 14.4e3 # [V]
S = 100e6 # [VA]
ra = 0.011 # [pu]
xs = 1.1 # [pu]
PF = 0.8
p = 2
fse = 50 # [Hz]
Explanation: Description
A 100-MVA, 14.4-kV, 0.8-PF-lagging, 50-Hz, two-pole, Y-connected synchronous generator has a per-unit synchronous reactance of 1.1 and a per-unit armature resistance of 0.011.
End of explanation
Vphase_base = Vl / sqrt(3)
print('Vphase_base = {:.0f} V'.format(Vphase_base))
Explanation: (a)
What are its synchronous reactance and armature resistance in ohms?
(b)
What is the magnitude of the internal generated voltage $E_A$ at the rated conditions?
What is its torque angle $\delta$ at these conditions?
(c)
Ignoring losses in this generator
What torque must be applied to its shaft by the prime mover at full load?
SOLUTION
The base phase voltage of this generator is:
End of explanation
Zbase = 3*Vphase_base**2 / S
print('Zbase = {:.3f} Ω'.format(Zbase))
Explanation: Therefore, the base impedance of the generator is:
$$Z_\text{base} = \frac{3V^2_{\phi_\text{base}}}{S_\text{base}}$$
End of explanation
Ra = ra * Zbase
Xs = xs * Zbase
print('''
Ra = {:.4f} Ω Xs = {:.3f} Ω
==============================='''.format(Ra, Xs))
Explanation: (b)
The generator impedance in ohms are:
End of explanation
Ia_amp = S / (sqrt(3) * Vl)
print('Ia_amp = {:.0f} A'.format(Ia_amp))
Explanation: (b)
The rated armature current is:
$$I_A = I_L = \frac{S}{\sqrt{3}V_T}$$
End of explanation
Ia_angle = -arccos(PF)
Ia = Ia_amp * (cos(Ia_angle) + sin(Ia_angle)*1j)
print('Ia = {:.0f} ∠{:.2f}° A'.format(abs(Ia), Ia_angle / pi *180))
Explanation: The power factor is 0.8 lagging, so:
End of explanation
EA = Vphase_base + (Ra + Xs*1j) * Ia
EA_angle = arctan(EA.imag/EA.real)
print('EA = {:.1f} V ∠{:.1f}°'.format(abs(EA), EA_angle/pi*180))
Explanation: It is very often the case that, especially in larger machines, the armature resistance $R_A$ is simply neglected and one calculates the armature voltage simply as:
$$\vec{E}_A = \vec{V}_\phi + jX_S\vec{I}_A$$
But since in this case we were given the armature resistance explicitly we should also use it.
Therefore, the internal generated voltage is
$$\vec{E}_A = \vec{V}_\phi + (R_A + jX_S)\vec{I}_A$$
End of explanation
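Because the per-unit armature resistance is so small, the two expressions give nearly the same answer; a short numerical comparison (an illustrative aside using the quantities already computed above) makes that concrete:
EA_noRa = Vphase_base + (Xs*1j) * Ia   # same calculation but neglecting the armature resistance
print('|EA| with Ra: {:.1f} V, without Ra: {:.1f} V'.format(abs(EA), abs(EA_noRa)))
print('relative difference: {:.3%}'.format(abs(abs(EA) - abs(EA_noRa)) / abs(EA)))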
abs(EA)
Explanation: Therefore, the magnitude of the internal generated voltage $E_A$ is:
End of explanation
EA_angle/pi*180
Explanation: V, and the torque angle $\delta$ is:
End of explanation
Pout = PF * S
print('Pout = {:.1F} MW'.format(Pout/1e6))
Explanation: degrees.
(c)
Ignoring losses, the input power would equal the output power. Since
End of explanation
n_sync = 120*fse / p
print('n_sync = {:.0F} r/min'.format(n_sync))
Explanation: and,
$$n_\text{sync} = \frac{120f_{se}}{P}$$
End of explanation
w_sync = n_sync * (2*pi/60.0)
tau_app = Pout / w_sync
print('''
τ_app = {:.0f} Nm
================='''.format(tau_app))
Explanation: the applied torque would be:
$$\tau_\text{app} = \tau_\text{ind} = \frac{P_\text{out}}{\omega_\text{sync}}$$
End of explanation |
259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conv2D
[convolutional.Conv2D.0] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=True
Step1: [convolutional.Conv2D.1] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=False
Step2: [convolutional.Conv2D.2] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step3: [convolutional.Conv2D.3] 5 4x4 filters on 7x7x3 input, strides=(2,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step4: [convolutional.Conv2D.4] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step5: [convolutional.Conv2D.5] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step6: [convolutional.Conv2D.6] 4 3x3 filters on 6x3x1 input, strides=(3,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step7: [convolutional.Conv2D.7] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=True
Step8: [convolutional.Conv2D.8] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=False
Step9: [convolutional.Conv2D.9] 4 3x3 filters on 7x7x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(3,3), activation='relu', use_bias=True
Step10: [convolutional.Conv2D.10] 3 4x4 filters on 4x8x3 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(2,2), activation='relu', use_bias=True
Step11: [convolutional.Conv2D.11] 4 3x3 filters on 8x8x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(4,4), activation='relu', use_bias=True
Step12: export for Keras.js tests | Python Code:
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='linear', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(100)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: Conv2D
[convolutional.Conv2D.0] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=True
End of explanation
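The expected 'out shape' values reported for each case below follow the usual convolution output-size arithmetic; the small helper sketched here (a reading aid only, not part of the exported test data) reproduces them:
def conv_out_length(in_len, kernel, stride, padding, dilation=1):
    # 'valid' uses the effective (dilated) kernel size; 'same' depends only on the stride.
    effective = dilation * (kernel - 1) + 1
    if padding == 'same':
        return int(np.ceil(in_len / float(stride)))
    return (in_len - effective) // stride + 1
print(conv_out_length(5, 3, 1, 'valid'))   # -> 3, matching the 3x3x4 output of Conv2D.0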
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='linear', use_bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(101)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.1] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=False
End of explanation
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(2,2), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(102)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.2] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (7, 7, 3)
conv = Conv2D(5, (4,4), strides=(2,1), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(103)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.3] 5 4x4 filters on 7x7x3 input, strides=(2,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='same',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(104)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.4] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (4, 4, 2)
conv = Conv2D(4, (3,3), strides=(2,2), padding='same',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(105)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.5] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (6, 3, 1)
conv = Conv2D(4, (3,3), strides=(3,2), padding='same',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(106)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.6] 4 3x3 filters on 6x3x1 input, strides=(3,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(2,2),
activation='linear', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(100)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.7] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=True
End of explanation
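For the dilated cases from here on, the 3x3 kernel taps are spread over a wider footprint, which is why this 5x5 'valid' input collapses to a single spatial position; a quick check with the helper defined earlier (again purely illustrative):
for d in (1, 2, 3, 4):
    print('dilation %d -> effective kernel size %d' % (d, d * (3 - 1) + 1))
print(conv_out_length(5, 3, 1, 'valid', dilation=2))   # -> 1, matching the 1x1x4 output above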
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(2,2),
activation='linear', use_bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(101)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.8] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=False
End of explanation
data_in_shape = (7, 7, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(3,3),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(102)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.9] 4 3x3 filters on 7x7x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(3,3), activation='relu', use_bias=True
End of explanation
data_in_shape = (4, 8, 3)
conv = Conv2D(3, (4,4), strides=(1,1), padding='same',
data_format='channels_last', dilation_rate=(2,2),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(103)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.10] 3 4x4 filters on 4x8x3 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(2,2), activation='relu', use_bias=True
End of explanation
data_in_shape = (8, 8, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='same',
data_format='channels_last', dilation_rate=(4,4),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(104)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.11] 4 3x3 filters on 8x8x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(4,4), activation='relu', use_bias=True
End of explanation
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 1
Imports
Step1: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
YOUR ANSWER HERE
First I stretched the graph way out horizontally to lessen the slope, making the data easier to read.
I then labeled each axis and gave the graph a title, because not doing so is crazy talk.
Then I altered what 'ticks' show up, as only certain values are important to what this graph is trying to show.
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 1
Imports
End of explanation
import os
assert os.path.isfile('yearssn.dat')
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
data = np.loadtxt('yearssn.dat')
year = data[:,0]
ssc = data[:,1]
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
End of explanation
plt.figure(figsize=(50,5))
plt.plot(year, ssc)
plt.grid(True)
plt.yticks([50,100,150,200], [50,100,150,200])
plt.xticks([1700,1750,1800,1850,1900,1950,2000], [1700,1750,1800,1850,1900,1950,2000])
plt.xlabel('Year')
plt.ylabel('Sun Spot Count')
plt.title('Sun Spot Counts per Year 1700-Now')
assert True # leave for grading
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
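A quick way to reason about the figure size (an optional aside) is to look at the steepest year-to-year change in the series; banking that segment towards 45 degrees is what pushes the figure to be very wide and short:
max_slope = np.max(np.abs(np.diff(ssc)))   # largest change in counts from one year to the next
print('steepest year-to-year change: about %.0f counts' % max_slope)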
year
year_1st=year[0:100]
year_2nd=year[100:200]
year_3rd=year[200:300]
year_4th=year[300:]
ssc_1st=ssc[0:100]
ssc_2nd=ssc[100:200]
ssc_3rd=ssc[200:300]
ssc_4th=ssc[300:]
plt.figure(figsize=(30,10))
plt.subplot(4,1,1)
plt.title('Sun Spot Counts per Year')
plt.plot(year_1st, ssc_1st)
plt.tight_layout()
plt.yticks([50,100,150,200], [50,100,150,200])
plt.ylabel('Sun Spot Count')
plt.subplot(4,1,2)
plt.plot(year_2nd, ssc_2nd)
plt.tight_layout()
plt.yticks([50,100,150,200], [50,100,150,200])
plt.ylabel('Sun Spot Count')
plt.subplot(4,1,3)
plt.plot(year_3rd, ssc_3rd)
plt.tight_layout()
plt.yticks([50,100,150,200], [50,100,150,200])
plt.ylabel('Sun Spot Count')
plt.subplot(4,1,4)
plt.plot(year_4th, ssc_4th)
plt.tight_layout()
plt.yticks([50,100,150,200], [50,100,150,200])
plt.xlabel('Year')
plt.ylabel('Sun Spot Count')
assert True # leave for grading
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
YOUR ANSWER HERE
First I stretched the graph way out horizontally to lessen the slope, making the data easier to read.
I then labeled each axis and gave the graph a title, because not doing so is crazy talk.
Then I altered what 'ticks' show up, as only certain values are important to what this graph is trying to show.
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Set 01
1. COUNTING VOWELS
Step1: 2. COUNTING BOBS | Python Code:
s= 'wordsmith'
vowels = {'a','e','i','o','u'}
count = 0
for char in s:
if char in vowels:
count+=1
print "Number of vowels: " + str(count)
Explanation: Problem Set 01
1. COUNTING VOWELS
End of explanation
s = 'azcbobobegghakl'
pattern = 'bob'
count =0
for position in range(0,len(s)):
if s[position:position+3]==pattern:
count+=1
print count
Explanation: 2. COUNTING BOBS
End of explanation |
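The sliding-window loop above is needed because str.count only counts non-overlapping occurrences; a small illustrative check on a string with overlapping matches shows the difference:
t = 'bobob'  # 'bob' appears twice here, but the two occurrences overlap
overlapping = sum(1 for i in range(len(t)) if t[i:i+3] == 'bob')
print "str.count finds: " + str(t.count('bob'))
print "sliding window finds: " + str(overlapping)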
262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use of the wflow OpenStreams framework API to connect a reservoir model
http://ops-wflow.sourceforge.net/1.0RC7/
Step1: Set model run-time parameters
Set the
Step2: Here we make a pit in the middle of the main river. This will be the inflow to the reservoir
Step3: Run for a number of timesteps | Python Code:
# First import the model. Here we use the HBV version
from wflow.wflow_sbm import *
import IPython
from IPython.display import display, clear_output
%pylab inline
#clear_output = IPython.core.display.clear_output
# Here we define a simple fictitious reservoir
reservoirstorage = 15000
def simplereservoir(inputq,storage):
K = 0.087
storage = storage + inputq
outflow = storage * K
storage = storage - outflow
return outflow, storage
Explanation: Use of the wflow OpenStreams framework API to connect a reservoir model
http://ops-wflow.sourceforge.net/1.0RC7/
This IPython notebook demonstrates how to load an openstreams python model, execute it step-by-step and investigate the (intermediate) results. It also shows how to re-route surface water through a reservoir model. The first step is to load the model and framework:
End of explanation
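Before coupling it to the wflow model it helps to see how this linear reservoir behaves on its own; with a constant inflow the outflow drains the initial storage and settles towards the inflow rate (a stand-alone illustration using only the function defined above):
demo_storage = 15000
for step in range(5):
    demo_outflow, demo_storage = simplereservoir(100.0, demo_storage)
    print("step %d: outflow %.1f, storage %.1f" % (step, demo_outflow, demo_storage))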
# define start and stop time of the run
startTime = 1
stopTime = 200
currentTime = 1
# set runid, clonemap and casename. Also define the ini file
runId = "reservoirtest_1"
#configfile="wflow_hbv_mem.ini"
configfile="wflow_sbm.ini"
wflow_cloneMap = 'wflow_subcatch.map'
# the casename points to the complete model setup with both static and dynamic input
caseName="../examples/wflow_rhine_sbm/"
#make a usermodel object
myModel = WflowModel(wflow_cloneMap, caseName,runId,configfile)
# initialise the framework
dynModelFw = wf_DynamicFramework(myModel, stopTime,startTime)
dynModelFw.createRunId(NoOverWrite=False,level=logging.ERROR)
dynModelFw.setQuiet(1)
# Run the initial part of the model (reads parameters and sets initial values)
dynModelFw._runInitial() # Runs initial part
dynModelFw._runResume() # gets the state variables from disk
# Get list of variables supplied by the model
#print dynModelFw.wf_supplyVariableNamesAndRoles()
Explanation: Set model run-time parameters
Set the:
start and stop time
set the runid (this is where the results are stored, relative to the casename)
set the name of the configfile (stored in the case directory)
set the clone map (usually the wflow_subcatch.map)
set the casename. This is where the complete model setup resides
End of explanation
# A pit can be set in the ldd by specifying the direction 5
# (see pcraster.eu for the ldd direction conventions)
ret = dynModelFw.wf_setValueLdd("TopoLdd",5.0,8.40943,49.6682)
report(myModel.TopoLdd,"n_ldd.map")
Explanation: Here we make a pit in the middle of the main river. This will be the inflow to the reservoir
End of explanation
f, ax = plt.subplots(1,3,figsize=(14, 4))
plotar = []
plotarstorage = []
plotaroutflow = []
for ts in range(1,45):
# Add inflow to outflow downstream of the pit
# See the API section of the INI file
# Get Q value at pit, the reservoir inflow
inflowQ = dynModelFw.wf_supplyScalar("SurfaceRunoff",8.40943,49.6682)
# save for plotting
plotar.append(inflowQ)
# Feed to the reservoir model
outflow, reservoirstorage = simplereservoir(inflowQ, reservoirstorage)
# save for plotting
plotarstorage.append(reservoirstorage)
plotaroutflow.append(outflow)
#dynModelFw._userModel().IF = cover(0.0)
dynModelFw.wf_setValue("IF", outflow ,8.40943,49.7085)
# update runoff ONLY NEEDED IF YOU FIDDLE WITH THE KIN_WAVE RESERVOIR
myModel.updateRunOff()
dynModelFw._runDynamic(ts,ts) # runs for this timestep
# Now get some results for display
run = dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff")
uz = dynModelFw.wf_supplyMapAsNumpy("FirstZoneCapacity")
sm = dynModelFw.wf_supplyMapAsNumpy("UStoreDepth")
sm[sm == -999] = np.nan
uz[uz == -999] = np.nan
run[run == -999] = np.nan
ax[0].imshow(log(run))
ax[1].plot(plotarstorage,'k')
ax[1].set_title("Reservoir storage")
ax[2].plot(plotar,'b')
ax[2].plot(plotaroutflow,'r')
ax[2].set_title("Blue inflow, red outflow:" + str(ts))
clear_output()
display(f)
plt.close()
dynModelFw._runSuspend() # saves the state variables
dynModelFw._wf_shutdown()
imshow(dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff"))
Explanation: Run for a number of timesteps
End of explanation |
263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature: Simple Summary Statistics
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Read Data
Original question datasets.
Step4: Preprocessed and tokenized questions.
Step5: Build Features
Step6: Extract character-based features
Step7: Extract token-based features
Step8: Combine features
Step9: Save features | Python Code:
from pygoose import *
Explanation: Feature: Simple Summary Statistics
Extract rudimentary statistical features, such as question lengths (in words and characters), differences and ratios of these lengths.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = 'simple_summaries'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('')
Explanation: Read Data
Original question datasets.
End of explanation
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_test.pickle')
Explanation: Preprocessed and tokenized questions.
End of explanation
def word_difference_ratio(q1_tokens, q2_tokens):
return len(set(q1_tokens) ^ set(q2_tokens)) / (len(set(q1_tokens)) + len(set(q2_tokens)))
def extract_original_question_features(row):
q1 = row[0]
q2 = row[1]
shorter_char_length = min(len(q1), len(q2))
longer_char_length = max(len(q1), len(q2))
return [
np.log(shorter_char_length + 1),
np.log(longer_char_length + 1),
np.log(abs(longer_char_length - shorter_char_length) + 1),
shorter_char_length / longer_char_length,
]
def extract_tokenized_features(pair):
q1 = pair[0]
q2 = pair[1]
shorter_token_length = min(len(q1), len(q2))
longer_token_length = max(len(q1), len(q2))
return [
np.log(shorter_token_length + 1),
np.log(longer_token_length + 1),
np.log(abs(longer_token_length - shorter_token_length) + 1),
shorter_token_length / longer_token_length,
word_difference_ratio(q1, q2),
]
Explanation: Build Features
End of explanation
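A tiny example with made-up token lists (purely illustrative, not part of the generated features) shows what these helpers return for one question pair:
q1_demo = ['best', 'python', 'book']
q2_demo = ['best', 'book', 'learn', 'python', 'quickly']
print('word_difference_ratio:', word_difference_ratio(q1_demo, q2_demo))
print('token features:', extract_tokenized_features((q1_demo, q2_demo)))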
features_original_train = kg.jobs.map_batch_parallel(
df_train.as_matrix(columns=['question1', 'question2']),
item_mapper=extract_original_question_features,
batch_size=1000,
)
features_original_test = kg.jobs.map_batch_parallel(
df_test.as_matrix(columns=['question1', 'question2']),
item_mapper=extract_original_question_features,
batch_size=1000,
)
Explanation: Extract character-based features
End of explanation
features_tokenized_train = kg.jobs.map_batch_parallel(
tokens_train,
item_mapper=extract_tokenized_features,
batch_size=1000,
)
features_tokenized_test = kg.jobs.map_batch_parallel(
tokens_test,
item_mapper=extract_tokenized_features,
batch_size=1000,
)
Explanation: Extract token-based features
End of explanation
X_train = np.hstack([features_original_train, features_tokenized_train])
X_test = np.hstack([features_original_test, features_tokenized_test])
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
Explanation: Combine features
End of explanation
feature_names = [
# Character features.
'shorter_char_len_log',
'longer_char_len_log',
'char_len_diff_log',
'char_len_ratio',
# Token features.
'shorter_token_len_log',
'longer_token_len_log',
'token_len_diff_log',
'token_len_ratio',
'word_diff_ratio',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation |
264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: results
lesson 1 sklearn LogisticRegression
50 training samples
Step6: Let's run it | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = '../notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
# With gradient descent training, even this much (10000) data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000 #10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Results
Lesson 1 sklearn LogisticRegression scores, for reference:
50 training samples: LogisticRegression score: 0.608200
100 training samples: LogisticRegression score: 0.708200
1000 training samples: LogisticRegression score: 0.829200
5000 training samples: LogisticRegression score: 0.846200
TensorFlow results from the runs above, by training subset size:
50: 43.3%
100: 53.1%
1000: 76.8%
5000: 81.6%
10000: 82.0%
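For reference, the sklearn numbers above could be reproduced with something along these lines (a sketch, not the original lesson-1 code; it reuses the flattened arrays from the reformat step and converts the one-hot labels back to class indices):
from sklearn.linear_model import LogisticRegression
for n in (50, 100, 1000, 5000):
    clf = LogisticRegression()
    clf.fit(train_dataset[:n], np.argmax(train_labels[:n], 1))
    print('%d training samples: LogisticRegression score: %f'
          % (n, clf.score(test_dataset, np.argmax(test_labels, 1))))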
Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
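# One way to do that (a sketch, not part of the original assignment): reshuffle once per epoch, e.g.
#   epoch_perm = np.random.permutation(train_labels.shape[0])
#   train_dataset, train_labels = train_dataset[epoch_perm], train_labels[epoch_perm]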
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run it:
End of explanation |
265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluate mock community classification accuracy for nb-extra
The purpose of this notebook is to evaluate taxonomic classification accuracy of mock communities using different classification methods.
Contains an additional section that uses CART analysis.
Prepare the environment
First we'll import various functions that we'll need for generating the report.
Step1: Configure local environment-specific values
This is the only cell that you will need to edit to generate basic reports locally. After editing this cell, you can run all cells in this notebook to generate your analysis report. This will take a few minutes to run, as results are computed at multiple taxonomic levels.
Values in this cell will not need to be changed, with the exception of project_dir, to generate the default results contained within tax-credit. To analyze results separately from the tax-credit precomputed results, other variables in this cell will need to be set.
Step2: Find mock community pre-computed tables, expected tables, and "query" tables
Next we'll use the paths defined above to find all of the tables that will be compared. These include the pre-computed result tables (i.e., the ones that the new methods will be compared to), the expected result tables (i.e., the tables containing the known composition of the mock microbial communities), and the query result tables (i.e., the tables generated with the new method(s) that we want to compare to the pre-computed result tables).
Note
Step3: Restrict analyses to a set of datasets or references
Step4: Compute and summarize precision, recall, and F-measure for mock communities
In this evaluation, we compute and summarize precision, recall, and F-measure of each result (pre-computed and query) based on the known composition of the mock communities. We then summarize the results in two ways
Step5: CART Analysis
In this section we will use Classification and Regression Trees to try to pick good parameters for the naïve Bayes classifier. In each case, we pick the path to the classification leaf that yields the highest expected F-measure. Also, we unconventionally turn off pruning, so that all parameters are specified. This has the effect of picking arbitrary parameters towards the leaves of the decision tree, where it doesn't matter as much which parameters we choose.
This section requires the additional dependencies of rpy2 in Python and rpart in R. If you do not wish to install those dependencies, skip the CART Analysis section.
Step6: Split the Parameter String and Aggregate by Community
Step7: Kruskal-Wallis between-method accuracy comparisons
Kruskal-Wallis FDR-corrected p-values comparing classification methods at each level of taxonomic assignment
Step8: Violin plots of per-level accuracy
Heatmaps show the performance of individual method/parameter combinations at each taxonomic level, in each reference database (i.e., for bacterial and fungal mock communities individually).
Now we will focus on results at species level (for genus level, change to level 5)
Step9: Method Optimization
Which method/parameter configuration performed "best" for a given score? We can rank the top-performing configuration by dataset, method, and taxonomic level.
First, the top-performing method/configuration combination by dataset.
Step10: Now we can determine which parameter configuration performed best for each method. Count best values in each column indicate how many samples a given method achieved within one mean absolute deviation of the best result (which is why they may sum to more than the total number of samples).
Step11: Optimized method performance
And, finally, which method performed best at each individual taxonomic level for each reference dataset (i.e., for across all fungal and bacterial mock communities combined)?
For this analysis, we rank the top-performing method/parameter combination for each method at family through species levels. Methods are ranked by top F-measure, and the average value for each metric is shown (rather than count best as above). F-measure distributions are plotted for each method, and compared using paired t-tests with FDR-corrected P-values. This cell does not need to be altered, unless if you wish to change the metric used for sorting best methods and for plotting. | Python Code:
%matplotlib inline
from os.path import join, exists, expandvars
import pandas as pd
from IPython.display import display, Markdown
import seaborn.xkcd_rgb as colors
from tax_credit.plotting_functions import (pointplot_from_data_frame,
boxplot_from_data_frame,
heatmap_from_data_frame,
per_level_kruskal_wallis,
beta_diversity_pcoa,
average_distance_boxplots,
rank_optimized_method_performance_by_dataset)
from tax_credit.eval_framework import (evaluate_results,
method_by_dataset_a1,
parameter_comparisons,
merge_expected_and_observed_tables,
filter_df)
Explanation: Evaluate mock community classification accuracy for nb-extra
The purpose of this notebook is to evaluate taxonomic classification accuracy of mock communities using different classification methods.
Contains an additional section that uses CART analysis.
Prepare the environment
First we'll import various functions that we'll need for generating the report.
End of explanation
## project_dir should be the directory where you've downloaded (or cloned) the
## tax-credit repository.
project_dir = join('..', '..')
## expected_results_dir contains expected composition data in the structure
## expected_results_dir/<dataset name>/<reference name>/expected/
expected_results_dir = join(project_dir, "data/precomputed-results/", "mock-community")
## mock_results_fp designates the files to which summary results are written.
## If this file exists, it can be read in to generate results plots, instead
## of computing new scores.
mock_results_fp = join(expected_results_dir, 'broad_sweep_results.tsv')
## results_dirs should contain the directory or directories where
## results can be found. By default, this is the same location as expected
## results included with the project. If other results should be included,
## absolute paths to those directories should be added to this list.
results_dirs = [expected_results_dir]
## directory containing mock community data, e.g., feature table without taxonomy
mock_dir = join(project_dir, "data", "mock-community")
## Minimum number of times an OTU must be observed for it to be included in analyses. Edit this
## to analyze the effect of the minimum count on taxonomic results.
min_count = 1
## Define the range of taxonomic levels over which to compute accuracy scores.
## The default given below will compute order (level 2) through species (level 6)
taxonomy_level_range = range(2,7)
dataset_ids = ['mock-' + str(m) for m in (3, 12, 18, 22, 24, '26-ITS1', '26-ITS9')]
Explanation: Configure local environment-specific values
This is the only cell that you will need to edit to generate basic reports locally. After editing this cell, you can run all cells in this notebook to generate your analysis report. This will take a few minutes to run, as results are computed at multiple taxonomic levels.
Values in this cell will not need to be changed, with the exception of project_dir, to generate the default results contained within tax-credit. To analyze results separately from the tax-credit precomputed results, other variables in this cell will need to be set.
End of explanation
mock_results = evaluate_results(results_dirs,
expected_results_dir,
mock_results_fp,
mock_dir,
taxonomy_level_range=range(2,7),
min_count=min_count,
taxa_to_keep=None,
md_key='taxonomy',
subsample=False,
per_seq_precision=True,
exclude=['other'],
method_ids=['nb-extra'],
append=False,
force=False)
mock_results['Reference'].unique()
Explanation: Find mock community pre-computed tables, expected tables, and "query" tables
Next we'll use the paths defined above to find all of the tables that will be compared. These include the pre-computed result tables (i.e., the ones that the new methods will be compared to), the expected result tables (i.e., the tables containing the known composition of the mock microbial communities), and the query result tables (i.e., the tables generated with the new method(s) that we want to compare to the pre-computed result tables).
Note: if you have additional methods to add, set append=True. If you are attempting to recompute pre-computed results, set force=True.
This cell will take a few minutes to run if new results are being added, so hold onto your hat. If you are attempting to re-compute everything, it may take an hour or so, so go take a nap.
End of explanation
#mock_results = filter_df(mock_results, column_name='Reference',
# values=['gg_13_8_otus_amplicon', 'gg_13_8_otus_read', 'gg_13_8_otus_full'],
# exclude=False)
mock_results = mock_results.reset_index(drop=True)
Explanation: Restrict analyses to a set of datasets or references: e.g., exclude taxonomy assignments made for purpose of reference database comparisons. This can be performed as shown below — alternatively, specific reference databases, datasets, methods, or parameters can be chosen by setting dataset_ids, reference_ids, method_ids, and parameter_ids in the evaluate_results command above.
End of explanation
color_pallette={
'nb-extra': 'black'
}
y_vars = ["Precision", "Recall", "F-measure", "Taxon Accuracy Rate", "Taxon Detection Rate"]
pointplot_from_data_frame?
pointplot_from_data_frame(mock_results, "Level", y_vars,
"Reference", "Method", color_pallette)
Explanation: Compute and summarize precision, recall, and F-measure for mock communities
In this evaluation, we compute precision, recall, and F-measure of each result (pre-computed and query) based on the known composition of the mock communities. We then summarize the results in two ways: first with boxplots, and second with a table of the top methods based on their F-measures. Higher scores indicate better accuracy.
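As a toy illustration of the metrics themselves (this is not the tax-credit implementation, which works per sequence via evaluate_results above), precision, recall and F-measure for a single sample could be computed from sets of expected and observed taxa like this:
def toy_prf(expected_taxa, observed_taxa):
    tp = len(expected_taxa & observed_taxa)
    fp = len(observed_taxa - expected_taxa)
    fn = len(expected_taxa - observed_taxa)
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
print(toy_prf({'g__Bacillus', 'g__Listeria'}, {'g__Bacillus', 'g__Clostridium'}))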
As a first step, we will evaluate average method performance at each taxonomic level for each method within each reference dataset type.
Note that, as parameter configurations can cause results to vary widely, average results are not a good representation of the "best" results. See here for results using optimized parameters for each method.
First we will define our color palette and the variables we want to plot. Via seaborn, we can apply the xkcd crowdsourced color names. If that still doesn't match your hue, use hex codes.
End of explanation
mock_results['Reference'].unique()
from itertools import product
from pandas import DataFrame, concat, to_numeric
from numpy import mean
import rpy2
%load_ext rpy2.ipython
%R require(rpart)
Explanation: CART Analysis
In this section we will use Classification and Regression Trees to try to pick good parameters for the naïve Bayes classifier. In each case, we pick the path to the classification leaf that yields the highest expected F-measure. Also, we unconventionally turn off pruning, so that all parameters are specified. This has the effect of picking arbitrary parameters towards the leaves of the decision tree, where it doesn't matter as much which parameters we choose.
This section requires the additional dependencies of rpy2 in Python and rpart in R. If you do not wish to install those dependencies, skip the CART Analysis section.
End of explanation
columns = ['Alpha', 'Class-Prior', 'N-Features', 'Ngram-Range', 'Norm', 'Use-IDF', 'Confidence']
params = DataFrame((s.split(':') for s in mock_results['Parameters']), columns=columns)
keepers = ['Dataset', 'Level', 'Reference']
raw_param_results = concat([mock_results[keepers + ['F-measure']], params], axis=1)
raw_param_results = raw_param_results.apply(to_numeric, errors='ignore')
param_results = raw_param_results.groupby(keepers + columns, as_index=False).mean()
len(param_results)
%%R
recommend_params <- function(data, prior, levels, references)
{
data = data[data[,"Reference"] %in% references,]
data = data[data[,"Class.Prior"] == prior,]
data = data[data[,"Level"] %in% levels,]
fit <- rpart(F.measure ~ Confidence + Use.IDF + Ngram.Range + N.Features + Alpha + Reference + Norm,
data=data,
method="anova",
control=rpart.control(cp=0))
rightmost_leaf <- fit$frame[fit$frame[,"yval"] == max(fit$frame[,"yval"]),]
path.rpart(fit, as.numeric(rownames(rightmost_leaf)))
}
priors = ['uniform', 'prior']
reference_sets = [
['gg_13_8_otus_amplicon', 'gg_13_8_otus_full', 'gg_13_8_otus_read'],
['unite_20.11.2016_clean_amplicon', 'unite_20.11.2016_clean_full',
'unite_20.11.2016_clean_read']
]
level_sets = [[2,3,4,5], [6]]
for prior, levels, references in product(priors, level_sets, reference_sets):
display(Markdown("Prior: `" + prior + '`'))
display(Markdown("References: `" + str(references) + '`'))
display(Markdown("Levels: `" + str(levels) + '`'))
%R -i param_results,prior,levels,references recommend_params(param_results, prior, levels, references)
Explanation: Split the Parameter String and Aggregate by Community
End of explanation
result = per_level_kruskal_wallis(mock_results, y_vars, group_by='Method',
dataset_col='Reference', level_name='Level',
levelrange=range(2,7), alpha=0.05,
pval_correction='fdr_bh')
result
Explanation: Kruskal-Wallis between-method accuracy comparisons
Kruskal-Wallis FDR-corrected p-values comparing classification methods at each level of taxonomic assignment
End of explanation
mock_results_6 = mock_results[mock_results['Level'] == 6]
boxplot_from_data_frame?
boxplot_from_data_frame(mock_results_6, group_by="Method", metric="Precision", color_palette=color_pallette)
boxplot_from_data_frame(mock_results_6, group_by="Method", metric="Recall", color_palette=color_pallette)
boxplot_from_data_frame(mock_results_6, group_by="Method", metric="F-measure", color_palette=color_pallette)
boxplot_from_data_frame(mock_results_6, group_by="Method", metric="Taxon Accuracy Rate", color_palette=color_pallette)
boxplot_from_data_frame(mock_results_6, group_by="Method", metric="Taxon Detection Rate", color_palette=color_pallette)
Explanation: Violin plots of per-level accuracy
Heatmaps show the performance of individual method/parameter combinations at each taxonomic level, in each reference database (i.e., for bacterial and fungal mock communities individually).
Now we will focus on results at species level (for genus level, change to level 5)
End of explanation
for i in range(1, 27):
display(Markdown('## mock-{0}'.format(i)))
best = method_by_dataset_a1(mock_results_6, 'mock-{0}'.format(i))
display(best)
Explanation: Method Optimization
Which method/parameter configuration performed "best" for a given score? We can rank the top-performing configuration by dataset, method, and taxonomic level.
First, the top-performing method/configuration combination by dataset.
End of explanation
for method in mock_results_6['Method'].unique():
top_params = parameter_comparisons(
mock_results_6, method,
metrics=['Precision', 'Recall', 'F-measure',
'Taxon Accuracy Rate', 'Taxon Detection Rate'])
display(Markdown('## {0}'.format(method)))
display(top_params[:10])
uniform_6 = mock_results_6[['uniform' in p for p in mock_results_6['Parameters']]]
for method in uniform_6['Method'].unique():
top_params = parameter_comparisons(
uniform_6, method,
metrics=['Precision', 'Recall', 'F-measure',
'Taxon Accuracy Rate', 'Taxon Detection Rate'])
display(Markdown('## {0}'.format(method)))
display(top_params[:10])
Explanation: Now we can determine which parameter configuration performed best for each method. Count best values in each column indicate how many samples a given method achieved within one mean absolute deviation of the best result (which is why they may sum to more than the total number of samples).
End of explanation
rank_optimized_method_performance_by_dataset(mock_results,
dataset="Reference",
metric="F-measure",
level_range=range(4,7),
display_fields=["Method",
"Parameters",
"Taxon Accuracy Rate",
"Taxon Detection Rate",
"Precision",
"Recall",
"F-measure"],
paired=True,
parametric=True,
color=None,
color_palette=color_pallette)
rank_optimized_method_performance_by_dataset(mock_results,
dataset="Reference",
metric="Taxon Accuracy Rate",
level_range=range(6,7),
display_fields=["Method",
"Parameters",
"Taxon Accuracy Rate",
"Taxon Detection Rate",
"Precision",
"Recall",
"F-measure"],
paired=True,
parametric=True,
color=None,
color_palette=color_pallette)
Explanation: Optimized method performance
And, finally, which method performed best at each individual taxonomic level for each reference dataset (i.e., for across all fungal and bacterial mock communities combined)?
For this analysis, we rank the top-performing method/parameter combination for each method at family through species levels. Methods are ranked by top F-measure, and the average value for each metric is shown (rather than count best as above). F-measure distributions are plotted for each method, and compared using paired t-tests with FDR-corrected P-values. This cell does not need to be altered, unless you wish to change the metric used for sorting best methods and for plotting.
End of explanation |
266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ferrofluid - Part 3
Table of Contents
Susceptibility with fluctuation formulas
Derivation of the fluctuation formula
Simulation
Magnetization curve of a 3D system
Remark
Step1: Now we set up all necessary simulation parameters
Step2: Next we set up the system. As in part I, the orientation of the dipole moments is set directly on the particles, whereas the magnitude of the moments is taken into account when determining the prefactor of the dipolar P3M (for more details see part I).
Hint
Step3: Now we equilibrate for a while
Step4: As we need the magnetization of our system, we use <tt>espressomd.observables.MagneticDipoleMoment</tt> to calculate the total dipole moment of the system which is the magnetization times the volume of the system.
Step5: Now we set the desired number of loops for the sampling
Step6: and sample the first and second moment of the magnetization or total dipole moment, by averaging over all total dipole moments occurring during the simulation
Step7: For the estimator of the initial susceptibility $\chi_\mathrm{init}$ we need the magnitude of one single dipole moment
Step8: Now we can calculate $\chi_\mathrm{init}$ from our simulation data
Step9: and print the result
Step10: Compared with the value $\chi = 0.822 \pm 0.017$ of Ref. <a href='#[1]'>[1]</a> (see table 1) it should be very similar.
Now we want to compare the result with the theoretical expectations.
At first with the simple Langevin susceptibility
Step11: and at second with the more advanced one (see Ref. <a href='#[1]'>[1]</a> eq. (6)) which has a cubic accuracy in $\chi_\mathrm{L}$ and reads
\begin{equation}
\chi = \chi_\mathrm{L} \left( 1 + \frac{\chi_\mathrm{L}}{3} + \frac{\chi_\mathrm{L}^2}{144} \right)
\end{equation}
Step12: Both of them should be smaller than our result, but the second one should be closer to ours. The deviation of the theoretical results from our simulation result can be explained by the fact that in the Langevin model there are no interactions between the particles incorporated at all, and the more advanced (mean-field-type) one of Ref. <a href='#[1]'>[1]</a> does not take occurring cluster formations into account but assumes a homogeneous distribution of the particles. For higher values of the volume fraction $\phi$ and the dipolar interaction parameter $\lambda$ the deviations will increase as the cluster formation will become more pronounced.
Magnetization curve of a 3D system
At the end of this tutorial we now want to sample the magnetization curve of a three dimensional system and compare the results with analytical solutions. Again we will compare with the Langevin function but also with the approximation of Ref. <a href='#[2]'>[2]</a> (see also Ref. <a href='#[1]'>[1]</a> for the right coefficients) which takes the dipole-dipole interaction into account. For this approximation, which is a modified mean-field theory based on the pair correlation function, the Langevin parameter $\alpha$ is replaced by
\begin{equation}
\alpha' = \alpha + \chi_\mathrm{L}~L(\alpha) + \frac{\chi_\mathrm{L}^{2}}{16} L(\alpha) \frac{\mathrm{d} L(\alpha)}{\mathrm{d}\alpha}
\end{equation}
where $\chi_\mathrm{L}$ is the Langevin susceptibility
\begin{equation}
\chi_\mathrm{L} = \frac{N}{V}\frac{\mu_0 \mu^2}{3k_\mathrm{B}T} = 8 \cdot \lambda \cdot \phi
\end{equation}
Analogous to part II we start at zero external magnetic field and increase the external field successively. At every value of the external field we sample the total dipole moment which is proportional to the magnetization as we have a fixed volume.
First we create a list of values of the Langevin parameter $\alpha$. As we already sampled the magnetization at zero external field in the last section, we take this value and continue with the sampling at non-zero external fields
Step13: Now for each value in this list we sample the total dipole moment / magnetization of the system for a while. Keep in mind that only the current orientation of the dipole moments, i.e. the unit vector of the dipole moments, is saved in the particle list, not their magnitude. Thus we have to use $H\cdot \mu$ as the external magnetic field, where $\mu$ is the magnitude of a single magnetic dipole moment.
We will apply the field in x-direction using the class <tt>constraints</tt> of ESPResSo.
As in part II we use the same system for every value of the Langevin parameter $\alpha$. Thus we take advantage of the fact that the system is already pre-equilibrated from the previous run, which saves some equilibration time. For scientific purposes one would use a new system for every value of the Langevin parameter to ensure that the systems are independent and no correlation effects are measured. Also one would perform more than just one simulation for each value of $\alpha$ to increase the precision of the results.
Step14: Now we define the Langevin function and the modified mean-field-approximation of the Langevin parameter of Ref. <a href='#[2]'>[2]</a>
Step15: We also want to plot the linear approximation at $\alpha = 0$ to see for which values of $\alpha$ this approximation holds. We use the initial susceptibility calculated in the first chapter of this part as the gradient. As we want the gradient of $M^*$ with respect to $\alpha$ which fulfills the relation
\begin{equation}
\frac{\partial M^*}{\partial \alpha} = \frac{1}{M_\mathrm{sat}}\frac{\partial M}{\partial \left( \frac{\mu_0\mu}{k_\mathrm{B}T} H\right)} = \frac{k_\mathrm{B}T~V}{\mu_0\mu^2N}\frac{\partial M}{\partial H} = \frac{k_\mathrm{B}T~V}{\mu_0\mu^2N}~\chi
\end{equation}
we have to scale our calculated initial susceptibility $\chi_{init}$ by a factor to get it in our dimensionless units.
Now we plot the resulting curves together with our simulation results and the linear approximation | Python Code:
import espressomd
import espressomd.magnetostatics
espressomd.assert_features(['DIPOLES', 'DP3M', 'LENNARD_JONES'])
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 18})
import numpy as np
import tqdm
Explanation: Ferrofluid - Part 3
Table of Contents
Susceptibility with fluctuation formulas
Derivation of the fluctuation formula
Simulation
Magnetization curve of a 3D system
Remark: The equilibration and sampling times used in this tutorial would not be sufficient for scientific purposes, but they are long enough to get at least a qualitative insight into the behaviour of ferrofluids. They have been shortened so we achieve reasonable computation times for the purpose of a tutorial.
Susceptibility with fluctuation formulas
In this part we want to calculate estimators for the initial susceptibility, i.e. the susceptibility at zero external magnetic field. One could carry out several simulations with different external magnetic field strengths and get the initial susceptibility by fitting a line to the results. We want to go a more elegant way by using fluctuation formulas known from statistical mechanics.
In three dimensions the initial susceptibility $\chi_{init}$ can be calculated with zero field simulations through
\begin{equation}
\chi_\mathrm{init} = \frac{V \cdot \mu_0}{3 \cdot k_\mathrm{B} T} \left( \langle \boldsymbol{M}^2 \rangle - \langle \boldsymbol{M} \rangle^2 \right) = \frac{\mu_0}{3 \cdot k_\mathrm{B} T \cdot V} \left( \langle \boldsymbol{\mu}^2 \rangle - \langle \boldsymbol{\mu} \rangle^2 \right)
\end{equation}
where $\boldsymbol{M}$ is the magnetization vector and $\boldsymbol{\mu}$ is the total magnetic dipole moment of the system. In direction $i$ it reads
\begin{equation}
M_i = \frac{1}{V} \Bigg\langle \sum_{j=1}^N \tilde{\mu}_j^i \Bigg\rangle
\end{equation}
where $\tilde{\mu}_j^i$ is the $j$ th dipole moment in direction $i$.
Derivation of the fluctuation formula
We want to derive the fluctuation formula. We start with the definition of the magnetic susceptibility. In general this reads
\begin{equation}
\chi \equiv \frac{\partial}{\partial H} \langle M_{\boldsymbol{H}} \rangle
\end{equation}
with $\langle M_{\boldsymbol{H}} \rangle$ the ensemble averaged magnetization in direction of a homogeneous external magnetic field $\boldsymbol{H}$.
In thermal equilibrium the ensemble average of the magnetization reads
\begin{equation}
\langle M_{\boldsymbol{H}} \rangle = \frac{1}{V Z_\mathrm{c}} \left \lbrack \sum_{\alpha} \mu_{\boldsymbol{H},\alpha} e^{ -\beta E_{\alpha}(H=0) + \beta\mu_0\mu_{\boldsymbol{H},\alpha}H }\right \rbrack
\end{equation}
with $Z_\mathrm{c}$ the canonical partition function, $E_{\alpha}(H=0)$ the energy without an external magnetic field $\boldsymbol{H}$, $\beta$ the inverse thermal energy $\frac{1}{k_\mathrm{B}T}$, $\mu_{\boldsymbol{H},\alpha}$ the total magnetic dipole moment of the system in direction of the external magnetic field $\boldsymbol{H}$ in microstate $\alpha$ and $V$ the system volume.
Now we insert the magnetization $\langle M_{\boldsymbol{H}} \rangle$ in the definition of the magnetic susceptibility $\chi$ and let the derivative operate on the ensemble average. We get the fluctuation formula
\begin{equation}
\chi = \frac{\beta\mu_0}{V} \left \lbrack \frac{1}{Z_\mathrm{c}}\sum_{\alpha} \mu_{\alpha}^2~ e^{ -\beta E_{\alpha}(H=0) + \beta\mu_0\mu_{\boldsymbol{H},\alpha}H } - \frac{1}{Z_\mathrm{c}}\sum_{\alpha} \mu_{\alpha}~ e^{ -\beta E_{\alpha}(H=0) + \beta\mu_0\mu_{\boldsymbol{H},\alpha}H }~~ \frac{1}{Z_\mathrm{c}}\sum_{\alpha'}\mu_{\alpha'}~ e^{ -\beta E_{\alpha'}(H=0) + \beta\mu_0\mu_{\boldsymbol{H},\alpha}H }\right \rbrack = \frac{\beta\mu_0}{V} \left \lbrack \langle \mu_{\boldsymbol{H}}^2 \rangle - \langle \mu_{\boldsymbol{H}} \rangle^2 \right \rbrack = \frac{\beta\mu_0}{V} \left(\Delta \mu_{\boldsymbol{H}}\right)^2
\end{equation}
At zero external magnetic field ($H = 0$) there is no distinct direction for the system, so we can take the fluctuations $\Delta \mu$ in all directions and divide it by the dimension. Thus we can use more data points of our simulation for the average and get a more precise estimator for the susceptibility. Thus finally the fluctuation formula for the initial susceptibility in three dimensions reads
\begin{equation}
\chi_\mathrm{init} = \frac{\beta\mu_0}{3V} \left \lbrack \langle \boldsymbol{\mu}^2 \rangle - \langle \boldsymbol{\mu} \rangle^2 \right \rbrack = \frac{V\beta\mu_0}{3} \left \lbrack \langle \boldsymbol{M}^2 \rangle - \langle \boldsymbol{M} \rangle^2 \right \rbrack
\end{equation}
where $\boldsymbol{\mu}$ and $\boldsymbol{M}$ are defined above.
Simulation
In this part we want to consider a three dimensional ferrofluid system and compare our result for the initial susceptibility $\chi_\mathrm{init}$ with that of Ref. <a href='#[1]'>[1]</a>.
First we import all necessary packages and check for the required ESPResSo features
End of explanation
lj_sigma = 1
lj_epsilon = 1
lj_cut = 2**(1. / 6.) * lj_sigma
# magnetic field constant
mu_0 = 1.
# Particles
N = 1000
# Volume fraction
# phi = rho * 4. / 3. * np.pi * ( lj_sigma / 2 )**3.
phi = 0.0262
# Dipolar interaction parameter lambda = mu_0 m^2 /(4 pi sigma^3 kT)
dip_lambda = 3.
# Temperature
kT = 1.0
# Friction coefficient
gamma = 1.0
# Time step
dt = 0.02
# box size 3d
box_size = (N * np.pi * 4. / 3. * (lj_sigma / 2.)**3. / phi)**(1. / 3.)
Explanation: Now we set up all necessary simulation parameters
End of explanation
system = espressomd.System(box_l=(box_size, box_size, box_size))
system.time_step = dt
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(epsilon=lj_epsilon, sigma=lj_sigma, cutoff=lj_cut, shift="auto")
# Random dipole moments
np.random.seed(seed=1)
dip_phi = 2 * np.pi * np.random.random((N, 1))
dip_cos_theta = 2 * np.random.random((N, 1)) - 1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta * np.sin(dip_phi),
dip_sin_theta * np.cos(dip_phi),
dip_cos_theta))
# Random positions in system volume
pos = box_size * np.random.random((N, 3))
# Add particles
particles = system.part.add(pos=pos, rotation=N * [(True, True, True)], dip=dip)
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.05)
while system.analysis.energy()["total"] > 5 * kT * N:
system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=kT, gamma=gamma, seed=1)
# tune verlet list skin
system.cell_system.skin = 0.8
# Setup dipolar P3M
accuracy = 5E-4
system.actors.add(espressomd.magnetostatics.DipolarP3M(accuracy=accuracy, prefactor=dip_lambda * lj_sigma**3 * kT))
Explanation: Next we set up the system. As in part I, the orientation of the dipole moments is set directly on the particles, whereas the magnitude of the moments is taken into account when determining the prefactor of the dipolar P3M (for more details see part I).
Hint:
It should be noted that we seed both the Langevin thermostat and the random number generator of numpy. Latter means that the initial configuration of our system is the same every time this script is executed. As the time evolution of the system depends not solely on the Langevin thermostat but also on the numeric accuracy and DP3M (the tuned parameters are slightly different every time) it is only partly predefined. You can change the seeds to simulate with a different initial configuration and a guaranteed different time evolution.
End of explanation
equil_rounds = 10
equil_steps = 1000
for i in tqdm.trange(equil_rounds):
system.integrator.run(equil_steps)
Explanation: Now we equilibrate for a while
End of explanation
import espressomd.observables
dipm_tot_calc = espressomd.observables.MagneticDipoleMoment(ids=particles.id)
Explanation: As we need the magnetization of our system, we use <tt>espressomd.observables.MagneticDipoleMoment</tt> to calculate the total dipole moment of the system which is the magnetization times the volume of the system.
End of explanation
# Sampling
loops = 2000
Explanation: Now we set the desired number of loops for the sampling
End of explanation
# initialize array for hold the sampled dipole moments
dipms = np.full((loops, 3), np.nan)
# sample dipole moment
for i in tqdm.trange(loops):
system.integrator.run(10)
dipms[i, :] = dipm_tot_calc.calculate()
# calculate average first and second moment of total dipole moment
dipm_tot = np.mean(dipms, axis=0)
dipm_tot_2 = np.mean(dipms**2, axis=0)
Explanation: and sample the first and second moment of the magnetization or total dipole moment, by averaging over all total dipole moments occurring during the simulation
End of explanation
# dipole moment
dipm = np.sqrt(dip_lambda * 4 * np.pi * lj_sigma**3. * kT / mu_0)
print(f'dipm = {dipm:.4f}')
Explanation: For the estimator of the initial susceptibility $\chi_\mathrm{init}$ we need the magnitude of one single dipole moment
End of explanation
# susceptibility in 3d system
chi = mu_0 / (system.volume() * 3. * kT) * (np.sum(dipm_tot_2 * dipm**2.) - np.sum(np.square(dipm_tot * dipm)))
Explanation: Now we can calculate $\chi_\mathrm{init}$ from our simulation data
End of explanation
print(f'chi = {chi:.4f}')
Explanation: and print the result
End of explanation
chi_L = 8. * dip_lambda * phi
print(f'chi_L = {chi_L:.4f}')
Explanation: Compared with the value $\chi = 0.822 \pm 0.017$ of Ref. <a href='#[1]'>[1]</a> (see table 1) it should be very similar.
Now we want to compare the result with the theoretical expectations.
At first with the simple Langevin susceptibility
End of explanation
chi_I = chi_L * (1 + chi_L / 3. + chi_L**2. / 144.)
print(f'chi_I = {chi_I:.4f}')
Explanation: and at second with the more advanced one (see Ref. <a href='#[1]'>[1]</a> eq. (6)) which has a cubic accuracy in $\chi_\mathrm{L}$ and reads
\begin{equation}
\chi = \chi_\mathrm{L} \left( 1 + \frac{\chi_\mathrm{L}}{3} + \frac{\chi_\mathrm{L}^2}{144} \right)
\end{equation}
End of explanation
alphas = np.array([0, 0.25, 0.5, 1, 2, 3, 4, 8])
Explanation: Both of them should be smaller than our result, but the second one should be closer to ours. The deviation of the theoretical results from our simulation result can be explained by the fact that in the Langevin model there are no interactions between the particles incorporated at all, and the more advanced (mean-field-type) one of Ref. <a href='#[1]'>[1]</a> does not take occurring cluster formations into account but assumes a homogeneous distribution of the particles. For higher values of the volume fraction $\phi$ and the dipolar interaction parameter $\lambda$ the deviations will increase as the cluster formation will become more pronounced.
Magnetization curve of a 3D system
At the end of this tutorial we now want to sample the magnetization curve of a three dimensional system and compare the results with analytical solutions. Again we will compare with the Langevin function but also with the approximation of Ref. <a href='#[2]'>[2]</a> (see also Ref. <a href='#[1]'>[1]</a> for the right coefficients) which takes the dipole-dipole interaction into account. For this approximation, which is a modified mean-field theory based on the pair correlation function, the Langevin parameter $\alpha$ is replaced by
\begin{equation}
\alpha' = \alpha + \chi_\mathrm{L}~L(\alpha) + \frac{\chi_\mathrm{L}^{2}}{16} L(\alpha) \frac{\mathrm{d} L(\alpha)}{\mathrm{d}\alpha}
\end{equation}
where $\chi_\mathrm{L}$ is the Langevin susceptibility
\begin{equation}
\chi_\mathrm{L} = \frac{N}{V}\frac{\mu_0 \mu^2}{3k_\mathrm{B}T} = 8 \cdot \lambda \cdot \phi
\end{equation}
Analogous to part II we start at zero external magnetic field and increase the external field successively. At every value of the external field we sample the total dipole moment which is proportional to the magnetization as we have a fixed volume.
First we create a list of values of the Langevin parameter $\alpha$. As we already sampled the magnetization at zero external field in the last section, we take this value and continue with the sampling at non-zero external fields
End of explanation
# remove all constraints
system.constraints.clear()
# array for magnetizations in field direction
magnetizations = np.full_like(alphas, np.nan)
# use result for alpha=0 from previous chapter
magnetizations[0] = np.average(dipm_tot)
# number of loops for sampling
loops_m = 500
for ndx, alpha in enumerate(pbar := tqdm.tqdm(alphas)):
pbar.set_description(f"Sampling for α={alpha:.2f}")
if alpha == 0:
continue
# set magnetic field constraint
H_dipm = (alpha * kT)
H_field = [H_dipm, 0, 0]
H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
system.constraints.add(H_constraint)
# equilibration
for i in range(equil_rounds):
system.integrator.run(equil_steps)
# sampling
magn_temp = np.full(loops_m, np.nan)
for i in range(loops_m):
system.integrator.run(20)
magn_temp[i] = dipm_tot_calc.calculate()[0]
# save average magnetization
magnetizations[ndx] = np.mean(magn_temp)
# remove constraint
system.constraints.clear()
Explanation: Now for each value in this list we sample the total dipole moment / magnetization of the system for a while. Keep in mind that only the current orientation of the dipole moments, i.e. the unit vector of the dipole moments, is saved in the particle list, not their magnitude. Thus we have to use $H\cdot \mu$ as the external magnetic field, where $\mu$ is the magnitude of a single magnetic dipole moment.
We will apply the field in x-direction using the class <tt>constraints</tt> of ESPResSo.
As in part II we use the same system for every value of the Langevin parameter $\alpha$. Thus we take advantage of the fact that the system is already pre-equilibrated from the previous run, which saves some equilibration time. For scientific purposes one would use a new system for every value of the Langevin parameter to ensure that the systems are independent and no correlation effects are measured. Also one would perform more than just one simulation for each value of $\alpha$ to increase the precision of the results.
End of explanation
# Langevin function
def L(y):
return np.cosh(y) / np.sinh(y) - 1 / y
# second order mean-field-model from Ref. [2]
def alpha_mean_field(alpha, dip_lambda, phi):
chi = 8. * dip_lambda * phi
return alpha + chi * L(alpha) + chi**2. / 16. * L(alpha) * (1. / alpha**2. - 1. / np.sinh(alpha)**2.)
Explanation: Now we define the Langevin function and the modified mean-field-approximation of the Langevin parameter of Ref. <a href='#[2]'>[2]</a>
End of explanation
y = np.arange(0.01, 10, 0.1)
initial_susceptibility = system.volume() * kT * chi / (N * mu_0 * dipm**2)
plt.figure(figsize=(10, 10))
plt.ylim(0, 1.)
plt.xlabel(r'$\alpha$', fontsize=20)
plt.ylabel(r'$M^*$', fontsize=20)
plt.plot(y, L(y), label='Langevin function')
plt.plot(y, L(alpha_mean_field(y, dip_lambda, phi)),
label='modified mean-field-theory')
plt.plot(alphas, magnetizations / N, 'o', label='simulation results')
plt.plot(y, initial_susceptibility * y,
label=r'linear approximation at $\alpha = 0$')
plt.legend(fontsize=20)
plt.show()
Explanation: We also want to plot the linear approximation at $\alpha = 0$ to see for which values of $\alpha$ this approximation holds. We use the initial susceptibility calculated in the first chapter of this part as the gradient. As we want the gradient of $M^*$ with respect to $\alpha$ which fulfills the relation
\begin{equation}
\frac{\partial M^*}{\partial \alpha} = \frac{1}{M_\mathrm{sat}}\frac{\partial M}{\partial \left( \frac{\mu_0\mu}{k_\mathrm{B}T} H\right)} = \frac{k_\mathrm{B}T~V}{\mu_0\mu^2N}\frac{\partial M}{\partial H} = \frac{k_\mathrm{B}T~V}{\mu_0\mu^2N}~\chi
\end{equation}
we have to scale our calculated initial susceptibility $\chi_{init}$ by a factor to get it in our dimensionless units.
Now we plot the resulting curves together with our simulation results and the linear approximation
End of explanation |
267 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Drop a specific row in pandas
| Python Code::
dataFrame = dataFrame.drop(row)
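# A minimal usage sketch (assuming 'row' is an index label of dataFrame):
#   import pandas as pd
#   df = pd.DataFrame({'a': [1, 2, 3]}, index=['x', 'y', 'z'])
#   df = df.drop('y')           # drop the row labelled 'y'
#   df = df.drop(df.index[0])   # or drop by position through the index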
|
268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'lc' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Dataset Parameters
Let's add a lightcurve dataset to the Bundle (see also the lc API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
Step3: times
Step4: fluxes
Step5: sigmas
Step6: compute_times / compute_phases
See the Compute Times & Phases tutorial.
Step7: ld_mode
See the Limb Darkening tutorial
Step8: ld_func
ld_func will only be available if ld_mode is not 'interp', so let's set it to 'lookup'. See the limb darkening tutorial for more details.
Step9: ld_coeffs_source
ld_coeffs_source will only be available if ld_mode is 'lookup'. See the limb darkening tutorial for more details.
Step10: ld_coeffs
ld_coeffs will only be available if ld_mode is set to 'manual'. See the limb darkening tutorial for more details.
Step11: passband
See the Atmospheres & Passbands tutorial
Step12: intens_weighting
See the Intensity Weighting tutorial
Step13: pblum_mode
See the Passband Luminosity tutorial
Step14: pblum_component
pblum_component is only available if pblum_mode is set to 'component-coupled'. See the passband luminosity tutorial for more details.
Step15: pblum_dataset
pblum_dataset is only available if pblum_mode is set to 'dataset-coupled'. In this case we'll get a warning because there is only one dataset. See the passband luminosity tutorial for more details.
Step16: pblum
pblum is only available if pblum_mode is set to 'decoupled' (in which case there is a pblum entry per-star) or 'component-coupled' (in which case there is only an entry for the star chosen by pblum_component). See the passband luminosity tutorial for more details.
Step17: l3_mode
See the "Third" Light tutorial
Step18: l3
l3 is only available if l3_mode is set to 'flux'. See the "Third" Light tutorial for more details.
Step19: l3_frac
l3_frac is only available if l3_mode is set to 'fraction'. See the "Third" Light tutorial for more details.
Step20: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to computing fluxes and the LC dataset.
Other compute options are covered elsewhere
Step21: irrad_method
Step22: For more details on irradiation, see the Irradiation tutorial
boosting_method
Step23: For more details on boosting, see the Beaming and Boosting example script
atm
Step24: For more details on atmospheres, see the Atmospheres & Passbands tutorial
Synthetics
Step25: Plotting
By default, LC datasets plot as flux vs time.
Step26: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
Step27: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
Step28: Mesh Fields
By adding a mesh dataset and setting the columns parameter, light-curve (i.e. passband-dependent) per-element quantities can be exposed and plotted.
Let's add a single mesh at the first time of the light-curve and re-call run_compute
Step29: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
Step30: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
Step31: Now let's look at each of the available fields.
pblum
For more details, see the tutorial on Passband Luminosities
Step32: pblum_ext is the extrinsic passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
abs_normal_intensities
Step33: abs_normal_intensities are the absolute normal intensities per-element.
normal_intensities
Step34: normal_intensities are the relative normal intensities per-element.
abs_intensities
Step35: abs_intensities are the projected absolute intensities (towards the observer) per-element.
intensities
Step36: intensities are the projected relative intensities (towards the observer) per-element.
boost_factors | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: 'lc' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.add_dataset('lc')
print(b.get_dataset(kind='lc', check_visible=False))
Explanation: Dataset Parameters
Let's add a lightcurve dataset to the Bundle (see also the lc API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
End of explanation
print(b.get_parameter(qualifier='times'))
Explanation: times
End of explanation
print(b.get_parameter(qualifier='fluxes'))
Explanation: fluxes
End of explanation
print(b.get_parameter(qualifier='sigmas'))
Explanation: sigmas
End of explanation
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='phases_t0'))
Explanation: compute_times / compute_phases
See the Compute Times & Phases tutorial.
End of explanation
print(b.get_parameter(qualifier='ld_mode', component='primary'))
Explanation: ld_mode
See the Limb Darkening tutorial
End of explanation
b.set_value('ld_mode', component='primary', value='lookup')
print(b.get_parameter(qualifier='ld_func', component='primary'))
Explanation: ld_func
ld_func will only be available if ld_mode is not 'interp', so let's set it to 'lookup'. See the limb darkening tutorial for more details.
End of explanation
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary'))
Explanation: ld_coeffs_source
ld_coeffs_source will only be available if ld_mode is 'lookup'. See the limb darkening tutorial for more details.
End of explanation
b.set_value('ld_mode', component='primary', value='manual')
print(b.get_parameter(qualifier='ld_coeffs', component='primary'))
Explanation: ld_coeffs
ld_coeffs will only be available if ld_mode is set to 'manual'. See the limb darkening tutorial for more details.
End of explanation
print(b.get_parameter(qualifier='passband'))
Explanation: passband
See the Atmospheres & Passbands tutorial
End of explanation
print(b.get_parameter(qualifier='intens_weighting'))
Explanation: intens_weighting
See the Intensity Weighting tutorial
End of explanation
print(b.get_parameter(qualifier='pblum_mode'))
Explanation: pblum_mode
See the Passband Luminosity tutorial
End of explanation
b.set_value('pblum_mode', value='component-coupled')
print(b.get_parameter(qualifier='pblum_component'))
Explanation: pblum_component
pblum_component is only available if pblum_mode is set to 'component-coupled'. See the passband luminosity tutorial for more details.
End of explanation
b.set_value('pblum_mode', value='dataset-coupled')
print(b.get_parameter(qualifier='pblum_dataset'))
Explanation: pblum_dataset
pblum_dataset is only available if pblum_mode is set to 'dataset-coupled'. In this case we'll get a warning because there is only one dataset. See the passband luminosity tutorial for more details.
End of explanation
b.set_value('pblum_mode', value='decoupled')
print(b.get_parameter(qualifier='pblum', component='primary'))
Explanation: pblum
pblum is only available if pblum_mode is set to 'decoupled' (in which case there is a pblum entry per-star) or 'component-coupled' (in which case there is only an entry for the star chosen by pblum_component). See the passband luminosity tutorial for more details.
End of explanation
print(b.get_parameter(qualifier='l3_mode'))
Explanation: l3_mode
See the "Third" Light tutorial
End of explanation
b.set_value('l3_mode', value='flux')
print(b.get_parameter(qualifier='l3'))
Explanation: l3
l3 is only available if l3_mode is set to 'flux'. See the "Third" Light tutorial for more details.
End of explanation
b.set_value('l3_mode', value='fraction')
print(b.get_parameter(qualifier='l3_frac'))
Explanation: l3_frac
l3_frac is only available if l3_mode is set to 'fraction'. See the "Third" Light tutorial for more details.
End of explanation
print(b.get_compute())
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to computing fluxes and the LC dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
* parameters related to meshing, eclipse detection, and subdivision are explained in the section on the mesh dataset
End of explanation
print(b.get_parameter(qualifier='irrad_method'))
Explanation: irrad_method
End of explanation
print(b.get_parameter(qualifier='boosting_method'))
Explanation: For more details on irradiation, see the Irradiation tutorial
boosting_method
End of explanation
print(b.get_parameter(qualifier='atm', component='primary'))
Explanation: For more details on boosting, see the Beaming and Boosting example script
atm
End of explanation
b.set_value('times', phoebe.linspace(0,1,101))
b.run_compute()
print(b.filter(context='model').twigs)
print(b.get_parameter(qualifier='times', kind='lc', context='model'))
print(b.get_parameter(qualifier='fluxes', kind='lc', context='model'))
Explanation: For more details on atmospheres, see the Atmospheres & Passbands tutorial
Synthetics
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: Plotting
By default, LC datasets plot as flux vs time.
End of explanation
afig, mplfig = b.plot(x='phases', show=True)
Explanation: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
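If the phase values themselves are needed rather than just a phased plot, something along these lines should work (a sketch, assuming the model dataset is named 'lc01' as above):
times = b.get_value(qualifier='times', dataset='lc01', context='model')
print(b.to_phase(times)[:5])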
End of explanation
print(b.filter(qualifier='period').components)
afig, mplfig = b.plot(x='phases:binary', show=True)
Explanation: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
End of explanation
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
b.set_value('columns', value=['intensities@lc01',
'abs_intensities@lc01',
'normal_intensities@lc01',
'abs_normal_intensities@lc01',
'pblum_ext@lc01',
'boost_factors@lc01'])
b.run_compute()
print(b.get_model().datasets)
Explanation: Mesh Fields
By adding a mesh dataset and setting the columns parameter, light-curve (i.e. passband-dependent) per-element quantities can be exposed and plotted.
Let's add a single mesh at the first time of the light-curve and re-call run_compute
End of explanation
print(b.filter(dataset='lc01', kind='mesh', context='model').twigs)
Explanation: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
End of explanation
afig, mplfig = b.filter(kind='mesh').plot(fc='intensities', ec='None', show=True)
Explanation: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
End of explanation
print(b.get_parameter(qualifier='pblum_ext',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: Now let's look at each of the available fields.
pblum
For more details, see the tutorial on Passband Luminosities
End of explanation
print(b.get_parameter(qualifier='abs_normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: pblum_ext is the extrinsic passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
abs_normal_intensities
End of explanation
print(b.get_parameter(qualifier='normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: abs_normal_intensities are the absolute normal intensities per-element.
normal_intensities
End of explanation
print(b.get_parameter(qualifier='abs_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: normal_intensities are the relative normal intensities per-element.
abs_intensities
End of explanation
print(b.get_parameter(qualifier='intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: abs_intensities are the projected absolute intensities (towards the observer) per-element.
intensities
End of explanation
print(b.get_parameter(qualifier='boost_factors',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: intensities are the projected relative intensities (towards the observer) per-element.
boost_factors
End of explanation |
269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sequence alignment based on the FM-index
1. What is sequence alignment?
Arranging two or more sequences according to certain rules in order to determine how similar, and hence how homologous, they are.
Let's look at an example!
First, let's see what the data looks like.
We read the file Reads.fa, a FASTA-format file holding two reads. read1 and read2 are the names of the two reads, and the line below each name is the corresponding sequence.
Step1: Now let's align the two reads and see what the result looks like
Step2: That is not very easy to read, so let's view it another way
Step3: A brief explanation of this result
Two optimal alignments appear here, both with a Score of 44.
Within each optimal alignment, vertical bars show which characters are aligned with each other.
'-' denotes a gap. For example, in the first optimal alignment, "AC-A" of read1 is aligned with "ACCA" of read2; at position 2 (positions start from 0) read2 has one extra character 'C' compared with read1.
When read2 has one extra character relative to read1 at a position, we call this an insertion; the opposite case is called a deletion.
If two aligned characters are identical we call it a __match__, otherwise a __mismatch__.
To summarize
insertion: '_' => 'A'
deletion: 'A' => '_'
match: 'A' => 'A'
mismatch: 'A' => 'C'
Step4: 2. BWT, SA and the FM-index
Let's start with what the SA is
Given a string S, its sequence looks like this.
Step5: For ease of computation we append a '$' to S to mark the end of the string.
Step6: The __SA (suffix array)__ is an array that stores, in lexicographic order, the starting positions of all suffixes of S.
Continuing with the example, we use the array Pos to hold the starting positions of all suffixes of S.
Step7: Sort the Pos array so that the corresponding suffixes are in lexicographic order.
Step8: Now look at the suffixes that these SA positions correspond to.
Step9: Put them side by side for comparison.
Step10: Next, let's see what the BWT is
The __BWT__ is a permutation of S in which BWT[i] = S[SA[i]-1]
Step11: Put them together again and have a look.
Step12: What do we notice?
BWT[i] is the character at position SA[i]-1 of S, while Suffix[i] is the suffix of S starting at position SA[i]; in other words, BWT[i] is the character that immediately precedes Suffix[i].
To see this more clearly, this time we use the BWM matrix, which lists all rotations of S in lexicographic order.
Step13: Comparing BWT and BWM, we find that BWT is simply the last column of BWM. (Think about how BWM can be generated directly from SA.)
Now for an interesting question
Question: if we only know the BWT string, can we recover the original string S from it?
The answer is yes.
We first use the BWT to recover the first column of BWM;
simply sorting the BWT gives the first column of BWM. (Think about why.)
Step14: As the last column of BWM, how does the BWT relate to firstCol?
We represent BWM in a different way: each rotation is written as the positions of its characters in S. | Python Code:
from Bio import SeqIO, pairwise2
handle = open("../data/Reads.fa", "r")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()
for record in records:
print record.id
print record.seq
Explanation: Sequence alignment based on the FM-index
1. What is sequence alignment?
Arranging two or more sequences according to certain rules in order to determine how similar, and hence how homologous, they are.
Let's look at an example!
First, let's see what the data looks like.
We read the file Reads.fa, a FASTA-format file holding two reads. read1 and read2 are the names of the two reads, and the line below each name is the corresponding sequence.
End of explanation
seq1 = str(records[0].seq)
seq2 = str(records[1].seq)
alignments = pairwise2.align.globalxx(seq1, seq2)
print alignments
Explanation: Now let's align the two reads and see what the result looks like
End of explanation
from Bio.pairwise2 import format_alignment
for aln in alignments : print(format_alignment(*aln))
Explanation: That is not very easy to read, so let's view it another way
End of explanation
edit = [(i, j) for i, j in zip(alignments[0][0], alignments[0][1]) if i != j]
for e in edit: print e[0], '->', e[1]
Explanation: A brief explanation of this result
Two optimal alignments appear here, both with a Score of 44.
Within each optimal alignment, vertical bars show which characters are aligned with each other.
'-' denotes a gap. For example, in the first optimal alignment, "AC-A" of read1 is aligned with "ACCA" of read2; at position 2 (positions start from 0) read2 has one extra character 'C' compared with read1.
When read2 has one extra character relative to read1 at a position, we call this an insertion; the opposite case is called a deletion.
If two aligned characters are identical we call it a __match__, otherwise a __mismatch__.
To summarize
insertion: '_' => 'A'
deletion: 'A' => '_'
match: 'A' => 'A'
mismatch: 'A' => 'C'
Now let's find all the mismatches, insertions and deletions in the first optimal alignment
End of explanation
S = 'ATCGAAGTG'
Explanation: 2. BWT, SA and the FM-index
Let's start with what the SA is
Given a string S, its sequence looks like this.
End of explanation
S += '$'
Explanation: For ease of computation we append a '$' to S to mark the end of the string.
End of explanation
Pos = range(len(S))
print Pos
Explanation: The __SA (suffix array)__ is an array that stores, in lexicographic order, the starting positions of all suffixes of S.
Continuing with the example, we use the array Pos to hold the starting positions of all suffixes of S.
End of explanation
SA = sorted(Pos, key=lambda x:S[x:])
print SA
Explanation: Sort the Pos array so that the corresponding suffixes are in lexicographic order.
End of explanation
for p in SA: print S[p:]
Explanation: Now look at the suffixes that these SA positions correspond to.
End of explanation
print 'Index\tSA\tSuffix'
for i, p in enumerate(SA): print '{0}\t{1}\t{2}'.format(i, p, S[p:])
Explanation: Put them side by side for comparison.
End of explanation
BWT = ''.join([S[x-1]for x in SA])
print BWT
Explanation: Next, let's see what the BWT is
The __BWT__ is a permutation of S in which BWT[i] = S[SA[i]-1]
End of explanation
print 'Index\tSA\tBWT\tSuffix'
for i, p in enumerate(SA): print '{0}\t{1}\t{2}\t{3}'.format(i, p, S[p-1],S[p:])
Explanation: Put them together again and have a look.
End of explanation
print 'Index\tSA\tBWT\tBWM'
for i, p in enumerate(SA): print '{0}\t{1}\t{2}\t{3}'.format(i, p, S[p-1],(S+S)[p:p+len(S)])
Explanation: What do we notice?
BWT[i] is the character at position SA[i]-1 of S, while Suffix[i] is the suffix of S starting at position SA[i]; in other words, BWT[i] is the character that immediately precedes Suffix[i].
To see this more clearly, this time we use the BWM matrix, which lists all rotations of S in lexicographic order.
End of explanation
firstCol = ''.join(sorted(BWT))
print firstCol
Explanation: Comparing BWT and BWM, we find that BWT is simply the last column of BWM. (Think about how BWM can be generated directly from SA.)
Now for an interesting question
Question: if we only know the BWT string, can we recover the original string S from it?
The answer is yes.
We first use the BWT to recover the first column of BWM;
simply sorting the BWT gives the first column of BWM. (Think about why.)
End of explanation
print 'Index\tSA\tBWT\tBWM\'\tBWM'
for i, p in enumerate(SA):
print '{0}\t{1}\t{2}\t{3}\t{4}'.format(i, p, S[p-1],''.join(map(str, range(len(S))[p:]+range(len(S))[:p])),(S+S)[p:p+len(S)])
Explanation: As the last column of BWM, how does the BWT relate to firstCol?
We represent BWM in a different way: each rotation is written as the positions of its characters in S.
End of explanation |
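As a postscript to the question above, here is a minimal sketch of recovering S from the BWT alone using the LF-mapping; the helper invert_bwt and the occurrence-rank pairing are my own illustration and are not part of the original notebook.
def invert_bwt(bwt):
    # Tag every character of the last column (BWT) with its occurrence rank,
    # so the k-th 'A' in the last column can be matched to the k-th 'A' in the
    # first column (the LF-mapping property).
    counts = {}
    ranked_last = []
    for ch in bwt:
        counts[ch] = counts.get(ch, 0) + 1
        ranked_last.append((ch, counts[ch]))
    ranked_first = sorted(ranked_last)           # the first column of BWM
    lf = [ranked_first.index(pair) for pair in ranked_last]
    chars = []
    row = 0                                      # row 0 of BWM starts with '$'
    for _ in range(len(bwt) - 1):
        chars.append(bwt[row])                   # character preceding this rotation
        row = lf[row]
    return ''.join(reversed(chars))

print(invert_bwt(BWT))                           # should print S without the '$'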
270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering
Step1: Fit a simple KMeans cluster model in iris dataset
Step2: Q: What went wrong?
Step3: Q: What went wrong?
Step4: Always interpret results with caution!
Clustering as Data Compression
Step5: Overview of clustering methods in sklearn
Exercise | Python Code:
import numpy as np
from sklearn.datasets import load_iris, load_digits
from sklearn.metrics import f1_score
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
iris = load_iris()
X = iris.data
y = iris.target
print(X.shape)
pca = PCA(n_components=2)
X = pca.fit_transform(X)
Explanation: Clustering: Unsupervised Grouping of Data
End of explanation
km = KMeans()
km.fit(X)
clusters = km.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=clusters, alpha=0.5)
plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],
c=np.arange(km.n_clusters), marker='x', s=150, linewidth=3)
Explanation: Fit a simple KMeans cluster model in iris dataset
End of explanation
km = KMeans(n_clusters=3)
km.fit(X)
clusters = km.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=clusters, alpha=0.5)
plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],
c=np.arange(km.n_clusters), marker='x', s=150, linewidth=3)
print("Clustering F1 Score: %f" % f1_score(y, clusters))
Explanation: Q: What went wrong?
End of explanation
print(y)
print(clusters)
c_mapped = clusters.copy()
c_mapped[clusters == 1] = 0
c_mapped[clusters == 2] = 1
c_mapped[clusters == 0] = 2
print("Clustering F1 Score: %f" % f1_score(y, c_mapped))
Explanation: Q: What went wrong?
End of explanation
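Rather than remapping the cluster ids by hand as above, the labels can be aligned automatically; below is a possible sketch (helper name is illustrative, not from the original notebook) that uses the confusion matrix plus scipy's linear_sum_assignment to pick the best cluster-to-label mapping.
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix

def align_cluster_labels(y_true, y_pred):
    # Choose the cluster-id -> class-label mapping that maximizes agreement.
    cm = confusion_matrix(y_true, y_pred)
    row_ind, col_ind = linear_sum_assignment(-cm)      # maximize matched counts
    mapping = {cluster: label for label, cluster in zip(row_ind, col_ind)}
    return np.array([mapping[c] for c in y_pred])

aligned = align_cluster_labels(y, clusters)
print("Clustering F1 Score: %f" % f1_score(y, aligned, average="macro"))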
from scipy.misc import face
racoon = face(gray=True)
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].imshow(racoon, cmap=plt.cm.gray)
ax[0].set_xticks([])
ax[0].set_yticks([])
_ = ax[1].hist(racoon.reshape(-1, 1), bins=256,
normed=True, color='.5', edgecolor='.5')
plt.tight_layout()
X = racoon.reshape(-1, 1)
km = KMeans(n_clusters=5)
km.fit(X)
values = km.cluster_centers_.ravel()
labels = km.labels_
rac_compressed = np.choose(labels, values)
rac_compressed.shape = racoon.shape
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].imshow(rac_compressed, cmap=plt.cm.gray)
ax[0].set_xticks([])
ax[0].set_yticks([])
_ = ax[1].hist(rac_compressed.reshape(-1, 1), bins=256,
normed=True, color='.5', edgecolor='.5')
plt.tight_layout()
Explanation: Always interpret results with caution!
Clustering as Data Compression: Vector Quantization
End of explanation
digits = load_digits()
X = digits.data
y = digits.target
# enter code here
Explanation: Overview of clustering methods in sklearn
Exercise: Apply KMeans clustering on MNIST digits dataset and figure out which cluster belongs to which digit
Hint: Try to visualize the average of all images that belong to one cluster
End of explanation |
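One possible way to tackle the exercise (a sketch only, not an official solution): fit KMeans with 10 clusters, look at each cluster centre as an 8x8 image, and check which digit dominates each cluster.
km = KMeans(n_clusters=10, random_state=0)
cluster_ids = km.fit_predict(X)

# Each cluster centre lives in the same 64-dimensional space as the images,
# so it can be reshaped to 8x8 and displayed to guess which digit it represents.
fig, axes = plt.subplots(2, 5, figsize=(8, 4))
for ax, centre in zip(axes.ravel(), km.cluster_centers_):
    ax.imshow(centre.reshape(8, 8), cmap=plt.cm.gray)
    ax.set_xticks([])
    ax.set_yticks([])

# Cross-check against the true labels: the most common digit in each cluster.
for c in range(10):
    members = y[cluster_ids == c]
    print("cluster %d mostly contains digit %d" % (c, np.bincount(members).argmax()))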
271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
After this assignment you will be able to
Step1: Next, lets load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the word_to_vec_map.
Step3: You've loaded
Step5: Expected Output
Step6: Run the cell below to test your code, this may take 1-2 minutes.
Step7: Expected Output
Step8: Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
Step9: As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
Step11: Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to Bolukbasi et al., 2016. Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts
Step13: Expected Output | Python Code:
import numpy as np
from w2v_utils import *
Explanation: Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
After this assignment you will be able to:
Load pre-trained word vectors, and measure similarity using cosine similarity
Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
End of explanation
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
Explanation: Next, lets load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the word_to_vec_map.
End of explanation
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u, v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.linalg.norm(u)
# Compute the L2 norm of v (≈1 line)
norm_v = np.linalg.norm(v)
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot / (norm_u * norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
Explanation: You've loaded:
- words: set of words in the vocabulary.
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job of capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> Figure 1: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
Exercise: Implement the function cosine_similarity() to evaluate similarity between word vectors.
Reminder: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
End of explanation
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b - e_a, word_to_vec_map[w] - e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
Explanation: Expected Output:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"a is to b as c is to ____"</font>. An example is <font color='brown'> 'man is to woman as king is to queen' </font>. In detail, we are trying to find a word d, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
Exercise: Complete the code below to be able to perform word analogies!
End of explanation
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
Explanation: Run the cell below to test your code, this may take 1-2 minutes.
End of explanation
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
Explanation: Expected Output:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
Cosine similarity is a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.)
For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
Even though you have finished the graded portions, we recommend you take a look too at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Lets first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word woman, and $e_{man}$ corresponds to the word vector corresponding to the word man. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
End of explanation
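As the parenthetical note above suggests, a possibly more stable bias direction can be built by averaging several definitional pairs; the short sketch below is my own illustration and not part of the graded assignment.
pairs = [("woman", "man"), ("mother", "father"), ("girl", "boy")]
g_avg = np.mean([word_to_vec_map[a] - word_to_vec_map[b] for a, b in pairs], axis=0)
print(cosine_similarity(g_avg, g))   # typically close to 1: both encode "gender"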
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
Explanation: Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
End of explanation
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
Explanation: As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
End of explanation
def neutralize(word, g, word_to_vec_map):
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula give above. (≈ 1 line)
e_biascomponent = (np.dot(e, g)/(np.linalg.norm(g)**2)) * g
# Neutralize e by substracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
Explanation: Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to Bolukbasi et al., 2016. Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> Figure 2: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
Exercise: Implement neutralize() to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
!-->
End of explanation
def equalize(pair, bias_axis, word_to_vec_map):
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair
e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2)/2
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = (np.dot(mu, bias_axis)/(np.linalg.norm(bias_axis)**2)) * bias_axis
mu_orth = mu - mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = (np.dot(e_w1, bias_axis)/(np.linalg.norm(bias_axis)**2)) * bias_axis
e_w2B = (np.dot(e_w2, bias_axis)/(np.linalg.norm(bias_axis)**2)) * bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = (np.abs(1 - np.linalg.norm(mu_orth)**2)**0.5) * ((e_w1B - mu_B)/ np.linalg.norm(e_w1 - mu_orth - mu_B))
corrected_e_w2B = (np.abs(1 - np.linalg.norm(mu_orth)**2)**0.5) * ((e_w2B - mu_B)/ np.linalg.norm(e_w2 - mu_orth - mu_B))
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
Explanation: Expected Output: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:** :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:** :
</td>
<td>
-3.26732746085e-17
</tr>
</table>
3.2 - Equalization algorithm for gender-specific words
Next, lets see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized words are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} \text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} \text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {|(e_{w1} - \mu_{\perp}) - \mu_B|} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {|(e_{w2} - \mu_{\perp}) - \mu_B|} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
Exercise: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
End of explanation |
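A quick, illustrative sanity check of the two debiasing steps (not part of the assignment): the neutralized word should have essentially zero projection on g, and the two equalized words should be (almost) equally similar to it.
e_recept = neutralize("receptionist", g, word_to_vec_map)
e_man_eq, e_woman_eq = equalize(("man", "woman"), g, word_to_vec_map)
print("neutralized receptionist vs g:  ", cosine_similarity(e_recept, g))
print("similarity to equalized 'man':  ", cosine_similarity(e_recept, e_man_eq))
print("similarity to equalized 'woman':", cosine_similarity(e_recept, e_woman_eq))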
272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive Image Processing with Numba and Bokeh
This demo shows off how interactive image processing can be done in the notebook, using Numba for numerics, Bokeh for plotting, and Ipython interactors for widgets. The demo runs entirely inside the Ipython notebook, with no Bokeh server required.
Numba must be installed in order to run this demo. To run, click on Cell->Run All in the top menu, then scroll down to individual examples and play around with their controls.
Step1: Gaussian Blur
This first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. Numba is used to compile the python blur kernel, which is invoked when the user modifies the slider.
Note
Step2: 3x3 Image Kernels
Many image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how numba can be used to compile kernels for arbitrary 3x3 kernels, and then provides several predefined kernels for the user to experiment with.
The UI presents the image to process (along with a dropdown to select a different image) as well as a dropdown that lets the user select which kernel to apply. Additionally, there are sliders that permit adjustment of the bias and scale of the final greyscale image.
Note
Step4: Wavelet Decomposition
This last example demonstrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see different levels of decomposition of the image. | Python Code:
from timeit import default_timer as timer
from bokeh.plotting import figure, show, output_notebook
from bokeh.models import GlyphRenderer, LinearColorMapper
from bokeh.io import push_notebook
from numba import jit, njit
from ipywidgets import interact
import numpy as np
import scipy.misc
output_notebook()
Explanation: Interactive Image Processing with Numba and Bokeh
This demo shows off how interactive image processing can be done in the notebook, using Numba for numerics, Bokeh for plotting, and Ipython interactors for widgets. The demo runs entirely inside the Ipython notebook, with no Bokeh server required.
Numba must be installed in order to run this demo. To run, click on Cell->Run All in the top menu, then scroll down to individual examples and play around with their controls.
End of explanation
# smaller image
img_blur = (scipy.misc.ascent()[::-1,:]/255.0)[:250, :250].copy(order='C')
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
width, height = img_blur.shape
p_blur = figure(x_range=(0, width), y_range=(0, height))
r_blur = p_blur.image(image=[img_blur], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name='blur')
@njit
def blur(outimg, img, amt):
iw, ih = img.shape
for i in range(amt, iw-amt):
for j in range(amt, ih-amt):
px = 0.
for w in range(-amt//2, amt//2):
for h in range(-amt//2, amt//2):
px += img[i+w, j+h]
outimg[i, j]= px/(amt*amt)
def update(i=0):
level = 2*i + 1
out = img_blur.copy()
ts = timer()
blur(out, img_blur, level)
te = timer()
print('blur takes:', te - ts)
renderer = p_blur.select(dict(name="blur", type=GlyphRenderer))
r_blur.data_source.data['image'] = [out]
push_notebook(handle=t_blur)
t_blur = show(p_blur, notebook_handle=True)
interact(update, i=(0, 10))
Explanation: Gaussian Blur
This first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. Numba is used to compile the python blur kernel, which is invoked when the user modifies the slider.
Note: This simple example does not handle the edge case, so the edge of the image will remain unblurred as the slider is increased.
End of explanation
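The note above mentions that the border is left unblurred; one way to cover it (a sketch under my own assumptions, not part of the original demo) is to clamp the neighbourhood indices so border pixels average over the neighbours that do exist.
@njit
def blur_clamped(outimg, img, amt):
    iw, ih = img.shape
    half = amt // 2
    for i in range(iw):
        for j in range(ih):
            px = 0.
            n = 0
            for w in range(-half, half + 1):
                for h in range(-half, half + 1):
                    ii = min(max(i + w, 0), iw - 1)    # clamp to the image border
                    jj = min(max(j + h, 0), ih - 1)
                    px += img[ii, jj]
                    n += 1
            outimg[i, j] = px / n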
@jit
def getitem(img, x, y):
w, h = img.shape
if x >= w:
x = w - 1 - (x - w)
if y >= h:
y = h - 1 - (y - h)
return img[x, y]
def filter_factory(kernel):
ksum = np.sum(kernel)
if ksum == 0:
ksum = 1
k9 = kernel / ksum
@jit
def kernel_apply(img, out, x, y):
tmp = 0
for i in range(3):
for j in range(3):
tmp += img[x+i-1, y+j-1] * k9[i, j]
out[x, y] = tmp
@jit
def kernel_apply_edge(img, out, x, y):
tmp = 0
for i in range(3):
for j in range(3):
tmp += getitem(img, x+i-1, y+j-1) * k9[i, j]
out[x, y] = tmp
@jit
def kernel_k9(img, out):
# Loop through all internals
for x in range(1, img.shape[0] -1):
for y in range(1, img.shape[1] -1):
kernel_apply(img, out, x, y)
# Loop through all the edges
for x in range(img.shape[0]):
kernel_apply_edge(img, out, x, 0)
kernel_apply_edge(img, out, x, img.shape[1] - 1)
for y in range(img.shape[1]):
kernel_apply_edge(img, out, 0, y)
kernel_apply_edge(img, out, img.shape[0] - 1, y)
return kernel_k9
average = np.array([
[1, 1, 1],
[1, 1, 1],
[1, 1, 1],
], dtype=np.float32)
sharpen = np.array([
[-1, -1, -1],
[-1, 12, -1],
[-1, -1, -1],
], dtype=np.float32)
edge = np.array([
[ 0, -1, 0],
[-1, 4, -1],
[ 0, -1, 0],
], dtype=np.float32)
edge_h = np.array([
[ 0, 0, 0],
[-1, 2, -1],
[ 0, 0, 0],
], dtype=np.float32)
edge_v = np.array([
[0, -1, 0],
[0, 2, 0],
[0, -1, 0],
], dtype=np.float32)
gradient_h = np.array([
[-1, -1, -1],
[ 0, 0, 0],
[ 1, 1, 1],
], dtype=np.float32)
gradient_v = np.array([
[-1, 0, 1],
[-1, 0, 1],
[-1, 0, 1],
], dtype=np.float32)
sobol_h = np.array([
[ 1, 2, 1],
[ 0, 0, 0],
[-1, -2, -1],
], dtype=np.float32)
sobol_v = np.array([
[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1],
], dtype=np.float32)
emboss = np.array([
[-2, -1, 0],
[-1, 1, 1],
[ 0, 1, 2],
], dtype=np.float32)
kernels = {
"average" : filter_factory(average),
"sharpen" : filter_factory(sharpen),
"edge (both)" : filter_factory(edge),
"edge (horizontal)" : filter_factory(edge_h),
"edge (vertical)" : filter_factory(edge_v),
"gradient (horizontal)" : filter_factory(gradient_h),
"gradient (vertical)" : filter_factory(gradient_v),
"sobol (horizontal)" : filter_factory(sobol_h),
"sobol (vertical)" : filter_factory(sobol_v),
"emboss" : filter_factory(emboss),
}
images = {
"ascent" : np.copy(scipy.misc.ascent().astype(np.float32)[::-1, :]),
"face" : np.copy(scipy.misc.face(gray=True).astype(np.float32)[::-1, :]),
}
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
cm = LinearColorMapper(palette=palette, low=0, high=256)
width, height = images['ascent'].shape
p_kernel = figure(x_range=(0, width), y_range=(0, height))
r_kernel = p_kernel.image(image=[images['ascent']], x=[0], y=[0], dw=[width], dh=[height], color_mapper=cm, name="kernel")
def update(image="ascent", kernel_name="none", scale=100, bias=0):
global _last_kname
global _last_out
img_kernel = images.get(image)
kernel = kernels.get(kernel_name, None)
if kernel == None:
out = np.copy(img_kernel)
else:
out = np.zeros_like(img_kernel)
ts = timer()
kernel(img_kernel, out)
te = timer()
print('kernel takes:', te - ts)
out *= scale / 100.0
out += bias
r_kernel.data_source.data['image'] = [out]
push_notebook(handle=t_kernel)
t_kernel = show(p_kernel, notebook_handle=True)
knames = ["none"] + sorted(kernels.keys())
interact(update, image=["ascent" ,"face"], kernel_name=knames, scale=(10, 100, 10), bias=(0, 255))
Explanation: 3x3 Image Kernels
Many image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how numba can be used to compile kernels for arbitrary 3x3 kernels, and then provides several predefined kernels for the user to experiment with.
The UI presents the image to process (along with a dropdown to select a different image) as well as a dropdown that lets the user select which kernel to apply. Additionally, there are sliders that permit adjustment of the bias and scale of the final greyscale image.
Note: Right now, adjusting the scale and bias are not as efficient as possible, because the update function always also applies the kernel (even if it has not changed). A better implementation might have a class that keeps track of the current kernel and output image so that bias and scale can be applied by themselves.
End of explanation
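Following up on the note above, re-running the 3x3 kernel on every slider move can be avoided by caching the last filtered image; the class below is only a rough sketch of that idea (the name KernelViewer is mine, everything it touches is defined earlier in this demo).
class KernelViewer(object):
    # Cache the filtered image so that scale/bias changes skip the kernel pass.
    def __init__(self, images, kernels):
        self.images = images
        self.kernels = kernels
        self.last_key = None
        self.filtered = None

    def update(self, image="ascent", kernel_name="none", scale=100, bias=0):
        key = (image, kernel_name)
        if key != self.last_key:
            img = self.images[image]
            kernel = self.kernels.get(kernel_name)
            if kernel is None:
                self.filtered = np.copy(img)
            else:
                self.filtered = np.zeros_like(img)
                kernel(img, self.filtered)
            self.last_key = key
        out = self.filtered * (scale / 100.0) + bias     # cheap, no re-filtering
        r_kernel.data_source.data['image'] = [out]
        push_notebook(handle=t_kernel)

viewer = KernelViewer(images, kernels)
interact(viewer.update, image=["ascent", "face"], kernel_name=knames, scale=(10, 100, 10), bias=(0, 255))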
@njit
def wavelet_decomposition(img, tmp):
Perform inplace wavelet decomposition on `img` with `tmp` as
a temporary buffer.
This is a very simple wavelet for demonstration
w, h = img.shape
halfwidth, halfheight = w//2, h//2
lefthalf, righthalf = tmp[:halfwidth, :], tmp[halfwidth:, :]
# Along first dimension
for x in range(halfwidth):
for y in range(h):
lefthalf[x, y] = (img[2 * x, y] + img[2 * x + 1, y]) / 2
righthalf[x, y] = img[2 * x, y] - img[2 * x + 1, y]
# Swap buffer
img, tmp = tmp, img
tophalf, bottomhalf = tmp[:, :halfheight], tmp[:, halfheight:]
# Along second dimension
for y in range(halfheight):
for x in range(w):
tophalf[x, y] = (img[x, 2 * y] + img[x, 2 * y + 1]) / 2
bottomhalf[x, y] = img[x, 2 * y] - img[x, 2 * y + 1]
return halfwidth, halfheight
img_wavelet = np.copy(scipy.misc.face(gray=True)[::-1, :])
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
width, height = img_wavelet.shape
p_wavelet = figure(x_range=(0, width), y_range=(0, height))
r_wavelet = p_wavelet.image(image=[img_wavelet], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name="wavelet")
def update(level=0):
out = np.copy(img_wavelet)
tmp = np.zeros_like(img_wavelet)
ts = timer()
hw, hh = img_wavelet.shape
while level > 0 and hw > 1 and hh > 1:
hw, hh = wavelet_decomposition(out[:hw, :hh], tmp[:hw, :hh])
level -= 1
te = timer()
print('wavelet takes:', te - ts)
r_wavelet.data_source.data['image'] = [out]
push_notebook(handle=t_wavelet)
t_wavelet = show(p_wavelet, notebook_handle=True)
interact(update, level=(0, 7))
Explanation: Wavelet Decomposition
This last example demonstrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see different levels of decomposition of the image.
End of explanation |
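To confirm that the decomposition loses no information, a single level can be inverted; the reconstruction below is my own sketch that simply mirrors the averaging/differencing of the forward transform (cast to float first so the halves are stored exactly).
@njit
def wavelet_reconstruction(dec):
    # Invert one level of the simple average/difference transform above.
    w, h = dec.shape
    hw, hh = w // 2, h // 2
    mid = np.empty_like(dec)
    # Undo the second (column) pass: average + difference -> original pair.
    for x in range(w):
        for y in range(hh):
            a, d = dec[x, y], dec[x, hh + y]
            mid[x, 2 * y] = a + d / 2
            mid[x, 2 * y + 1] = a - d / 2
    out = np.empty_like(dec)
    # Undo the first (row) pass.
    for x in range(hw):
        for y in range(h):
            a, d = mid[x, y], mid[hw + x, y]
            out[2 * x, y] = a + d / 2
            out[2 * x + 1, y] = a - d / 2
    return out

dec = img_wavelet.astype(np.float64)
buf = np.zeros_like(dec)
wavelet_decomposition(dec, buf)                  # one level, in place
rec = wavelet_reconstruction(dec)
print("max reconstruction error:", np.abs(rec - img_wavelet).max())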
273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exporting a BigQuery ML Model for Online Prediction
Learning Objectives
Train and deploy a logistic regression model - also applies to DNN classifier, DNN regressor, k-means, linear regression, and matrix factorization models.
Train and deploy a Boosted Tree classifier model - also applies to Boosted Tree regressor model.
Train and deploy an AutoML classifier model - also applies to AutoML regressor model.
Introduction
In this notebook, you will learn how to export a BigQuery ML model and then deploy the model on AI Platform. You will use the iris table from the BigQuery public datasets and work through the three end-to-end scenarios.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Step1: Check that the Google BigQuery library is installed and if not, install it.
Step2: Note
Step3: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
Step4: Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called bqml_tutorial. We'll do the same for a GCS bucket for our project too.
Step5: Train and deploy a logistic regression model
Train the model
Train a logistic regression model that predicts iris type using the BigQuery ML CREATE MODEL statement. This training job should take approximately 1 minute to complete.
Step6: Export the model
Export the model to a Cloud Storage bucket using the bq command-line tool. For additional ways to export models, see Exporting BigQuery ML models. This extract job should take less than 1 minute to complete.
Step7: Local deployment and serving
You can deploy exported TensorFlow models using the TensorFlow Serving Docker container. The following steps require you to install Docker.
Download the exported model files to a temporary directory
Step8: Create a version subdirectory
This step sets a version number (1 in this case) for the model.
Step9: Pull the docker image
Step10: Run the Docker container
Step11: Run the prediction
Step12: Online deployment and serving
This section uses the gcloud command-line tool to deploy and run predictions against the exported model. For more details about deploying a model to AI Platform for online/batch predictions, see Deploying models.
Note
Step13: Create a model version
Set the environment variables
Step14: Create the version
Step15: This step might take a few minutes to complete. You should see the message Creating version (this might take a few minutes).......
Get information about your new version.
Step16: Online prediction
The details about running online predictions against a deployed model are available at Getting online predictions
Create a newline-delimited JSON file for inputs, for example instances.json file with the following content.
Step17: Setup env variables for predict
Step18: Run predict
Step19: Train and deploy a Boosted Tree classifier model
Train the model
Train a Boosted Tree classifier model that predicts iris type using the BigQuery ML CREATE MODEL statement. This training job should take approximately 7 minutes to complete.
Step20: Export the model
Export the model to a Cloud Storage bucket using the bq command-line tool. For additional ways to export models, see Exporting BigQuery ML models.
Step21: Local deployment and serving
In the exported files, there is a main.py file for local run.
Download the exported model files to a local directory
Step22: Extract predictor.py
Step23: Install XGBoost library
Install the XGBoost library - version 0.82 or later.
Run the prediction
Step24: Online deployment and serving
This section uses the gcloud command-line tool to deploy and run predictions against the exported model.
For more details about deploying a model to AI Platform for online/batch predictions using custom routines, see Deploying models.
Note
Step25: Create a model version
Set the environment variables
Step26: Create the version
Step27: This step might take a few minutes to complete. You should see the message Creating version (this might take a few minutes).......
Get information about your new version.
Step28: Online prediction
For more details about running online predictions against a deployed model, see Requesting predictions.
Create a newline-delimited JSON file for inputs. For example, instances.json file with the following content. Ignore if already created.
Step29: Setup env variables for predict
Step30: Run predict
Step31: Train and deploy an AutoML classifier model
Train the model
Train an AutoML classifier model that predicts iris type using the BigQuery ML CREATE MODEL statement. AutoML models need at least 1000 rows of input data. Because ml_datasets.iris only has 150 rows, we duplicate the data 10 times. This training job should take around 2 hours to complete.
Step32: Export the model
Export the model to a Cloud Storage bucket using the bq command-line tool. For additional ways to export models, see Exporting BigQuery ML models.
Step33: Local deployment and serving
For details about building AutoML containers, see Exporting models. The following steps require you to install Docker.
Copy exported model files to a local directory
Step34: Pull AutoML Docker image
Step35: Start Docker container
Step36: Run the prediction
Create a newline-delimited JSON file for inputs. For example, input.json file with the following contents
Step37: Make the predict call | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
Explanation: Exporting a BigQuery ML Model for Online Prediction
Learning Objectives
Train and deploy a logistic regression model - also applies to DNN classifier, DNN regressor, k-means, linear regression, and matrix factorization models.
Train and deploy a Boosted Tree classifier model - also applies to Boosted Tree regressor model.
Train and deploy an AutoML classifier model - also applies to AutoML regressor model.
Introduction
In this notebook, you will learn how to export a BigQuery ML model and then deploy the model on AI Platform. You will use the iris table from the BigQuery public datasets and work through the three end-to-end scenarios.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
End of explanation
!pip install --user google-cloud-bigquery==1.25.0
Explanation: Check that the Google BigQuery library is installed and if not, install it.
End of explanation
import os
from google.cloud import bigquery
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Import necessary libraries.
End of explanation
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR BUCKET NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
Explanation: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
%%bash
## Create a BigQuery dataset bqml_tutorial
bq --location=US mk --dataset \
--description "bqml_tutorial" \
$PROJECT:bqml_tutorial
echo "Here are your current datasets:"
bq ls
Explanation: Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called bqml_tutorial. We'll do the same for a GCS bucket for our project too.
End of explanation
%%bash
bq query --use_legacy_sql=false \
'CREATE MODEL `bqml_tutorial.iris_model`
OPTIONS (model_type="logistic_reg",
max_iterations=10, input_label_cols=["species"])
AS SELECT
*
FROM
`bigquery-public-data.ml_datasets.iris`;'
Explanation: Train and deploy a logistic regression model
Train the model
Train a logistic regression model that predicts iris type using the BigQuery ML CREATE MODEL statement. This training job should take approximately 1 minute to complete.
End of explanation
%%bash
bq extract -m bqml_tutorial.iris_model gs://$BUCKET/iris_model
Explanation: Export the model
Export the model to a Cloud Storage bucket using the bq command-line tool. For additional ways to export models, see Exporting BigQuery ML models. This extract job should take less than 1 minute to complete.
End of explanation
%%bash
mkdir tmp_dir
gsutil cp -r gs://$BUCKET/iris_model tmp_dir
Explanation: Local deployment and serving
You can deploy exported TensorFlow models using the TensorFlow Serving Docker container. The following steps require you to install Docker.
Download the exported model files to a temporary directory
End of explanation
%%bash
mkdir -p serving_dir/iris_model/1
cp -r tmp_dir/iris_model/* serving_dir/iris_model/1
rm -r tmp_dir
Explanation: Create a version subdirectory
This step sets a version number (1 in this case) for the model.
End of explanation
%%bash
docker pull tensorflow/serving
Explanation: Pull the docker image
End of explanation
%%bash
docker run -p 8500:8500 --network="host" --mount type=bind,source=`pwd`/serving_dir/iris_model,target=/models/iris_model -e MODEL_NAME=iris_model -t tensorflow/serving &
Explanation: Run the Docker container
End of explanation
%%bash
curl -d '{"instances": [{"sepal_length":5.0, "sepal_width":2.0, "petal_length":3.5, "petal_width":1.0}]}' -X POST http://localhost:8501/v1/models/iris_model:predict
Explanation: Run the prediction
End of explanation
MODEL_NAME="IRIS_MODEL"
gcloud ai-platform models create $MODEL_NAME
Explanation: Online deployment and serving
This section uses the gcloud command-line tool to deploy and run predictions against the exported model. For more details about deploying a model to AI Platform for online/batch predictions, see Deploying models.
Note: Execute the following commands in the Cloud Shell of Cloud Platform Console till the Run predict command. Click Activate Cloud Shell icon to open the cloud shell and click Continue.
Create a model resource
End of explanation
# Replace the BUCKET_NAME with your bucket name.
MODEL_DIR="gs://<BUCKET_NAME>/iris_model"
VERSION_NAME="v1"
FRAMEWORK="TENSORFLOW"
Explanation: Create a model version
Set the environment variables
End of explanation
gcloud ai-platform versions create $VERSION_NAME --model=$MODEL_NAME --origin=$MODEL_DIR --runtime-version=2.1 --framework=$FRAMEWORK
Explanation: Create the version
End of explanation
gcloud ai-platform versions describe $VERSION_NAME --model $MODEL_NAME
Explanation: This step might take a few minutes to complete. You should see the message Creating version (this might take a few minutes).......
Get information about your new version.
End of explanation
{"sepal_length":5.0, "sepal_width":2.0, "petal_length":3.5, "petal_width":1.0}
{"sepal_length":5.3, "sepal_width":3.7, "petal_length":1.5, "petal_width":0.2}
Explanation: Online prediction
The details about running online predictions against a deployed model are available at Getting online predictions
Create a newline-delimited JSON file for inputs, for example instances.json file with the following content.
End of explanation
INPUT_DATA_FILE="instances.json"
Explanation: Setup env variables for predict
End of explanation
gcloud ai-platform predict --model $MODEL_NAME --version $VERSION_NAME --json-instances $INPUT_DATA_FILE
Explanation: Run predict
End of explanation
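The same online prediction can also be requested from Python rather than the gcloud CLI; the sketch below assumes the google-api-python-client package is installed and uses placeholder model/version names that you would replace with your own.
from googleapiclient import discovery

service = discovery.build("ml", "v1")
name = "projects/{}/models/{}/versions/{}".format(PROJECT, "IRIS_MODEL", "v1")
body = {"instances": [
    {"sepal_length": 5.0, "sepal_width": 2.0, "petal_length": 3.5, "petal_width": 1.0}
]}
response = service.projects().predict(name=name, body=body).execute()
print(response)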
%%bash
bq query --use_legacy_sql=false \
'CREATE MODEL `bqml_tutorial.boosted_tree_iris_model`
OPTIONS (model_type="boosted_tree_classifier",
max_iterations=10, input_label_cols=["species"])
AS SELECT
*
FROM
`bigquery-public-data.ml_datasets.iris`;'
Explanation: Train and deploy a Boosted Tree classifier model
Train the model
Train a Boosted Tree classifier model that predicts iris type using the BigQuery ML CREATE MODEL statement. This training job should take approximately 7 minutes to complete.
End of explanation
%%bash
bq extract --destination_format ML_XGBOOST_BOOSTER -m bqml_tutorial.boosted_tree_iris_model gs://$BUCKET/boosted_tree_iris_model
Explanation: Export the model
Export the model to a Cloud Storage bucket using the bq command-line tool. For additional ways to export models, see Exporting BigQuery ML models.
End of explanation
%%bash
mkdir serving_dir
gsutil cp -r gs://$BUCKET/boosted_tree_iris_model serving_dir
Explanation: Local deployment and serving
In the exported files, there is a main.py file for local run.
Download the exported model files to a local directory
End of explanation
%%bash
tar -xvf serving_dir/boosted_tree_iris_model/xgboost_predictor-0.1.tar.gz -C serving_dir/boosted_tree_iris_model/
Explanation: Extract predictor.py
End of explanation
%%bash
pip3 install xgboost
%%bash
cd serving_dir/boosted_tree_iris_model/
python main.py '[{"sepal_length":5.0, "sepal_width":2.0, "petal_length":3.5, "petal_width":1.0}]'
Explanation: Install XGBoost library
Install the XGBoost library - version 0.82 or later.
Run the prediction
End of explanation
MODEL_NAME="BOOSTED_TREE_IRIS_MODEL"
gcloud ai-platform models create $MODEL_NAME
Explanation: Online deployment and serving
This section uses the gcloud command-line tool to deploy and run predictions against the exported model.
For more details about deploying a model to AI Platform for online/batch predictions using custom routines, see Deploying models.
Note: Execute the following commands in the Cloud Shell of Cloud Platform Console till the Run predict command.
Create a model resource
End of explanation
# Replace the BUCKET_NAME with your bucket name.
MODEL_DIR="gs://<BUCKET_NAME>/boosted_tree_iris_model"
VERSION_NAME="v1"
Explanation: Create a model version
Set the environment variables
End of explanation
gcloud beta ai-platform versions create $VERSION_NAME --model=$MODEL_NAME --origin=$MODEL_DIR --package-uris=${MODEL_DIR}/xgboost_predictor-0.1.tar.gz --prediction-class=predictor.Predictor --runtime-version=2.1
Explanation: Create the version
End of explanation
gcloud ai-platform versions describe $VERSION_NAME --model $MODEL_NAME
Explanation: This step might take a few minutes to complete. You should see the message Creating version (this might take a few minutes).......
Get information about your new version.
End of explanation
{"sepal_length":5.0, "sepal_width":2.0, "petal_length":3.5, "petal_width":1.0}
{"sepal_length":5.3, "sepal_width":3.7, "petal_length":1.5, "petal_width":0.2}
Explanation: Online prediction
For more details about running online predictions against a deployed model, see Requesting predictions.
Create a newline-delimited JSON file for inputs. For example, instances.json file with the following content. Ignore if already created.
End of explanation
INPUT_DATA_FILE="instances.json"
Explanation: Setup env variables for predict
End of explanation
gcloud ai-platform predict --model $MODEL_NAME --version $VERSION_NAME --json-instances $INPUT_DATA_FILE
Explanation: Run predict
End of explanation
%%bash
bq query --use_legacy_sql=false \
'CREATE MODEL `bqml_tutorial.automl_iris_model`
OPTIONS (model_type="automl_classifier",
budget_hours=1, input_label_cols=["species"])
AS SELECT
* EXCEPT(multiplier)
FROM
`bigquery-public-data.ml_datasets.iris`, unnest(GENERATE_ARRAY(1, 10)) as multiplier;'
Explanation: Train and deploy an AutoML classifier model
Train the model
Train an AutoML classifier model that predicts iris type using the BigQuery ML CREATE MODEL statement. AutoML models need at least 1000 rows of input data. Because ml_datasets.iris only has 150 rows, we duplicate the data 10 times. This training job should take around 2 hours to complete.
End of explanation
%%bash
bq extract -m bqml_tutorial.automl_iris_model gs://$BUCKET/automl_iris_model
Explanation: Export the model
Export the model to a Cloud Storage bucket using the bq command-line tool. For additional ways to export models, see Exporting BigQuery ML models.
End of explanation
%%bash
mkdir automl_serving_dir
gsutil cp -r gs://$BUCKET/automl_iris_model/* automl_serving_dir/
Explanation: Local deployment and serving
For details about building AutoML containers, see Exporting models. The following steps require you to install Docker.
Copy exported model files to a local directory
End of explanation
%%bash
docker pull gcr.io/cloud-automl-tables-public/model_server
Explanation: Pull AutoML Docker image
End of explanation
%%bash
docker run -v `pwd`/automl_serving_dir:/models/default/0000001 -p 8080:8080 -it gcr.io/cloud-automl-tables-public/model_server
Explanation: Start Docker container
End of explanation
{"instances": [{"sepal_length":5.0, "sepal_width":2.0, "petal_length":3.5, "petal_width":1.0},
{"sepal_length":5.3, "sepal_width":3.7, "petal_length":1.5, "petal_width":0.2}]}
Explanation: Run the prediction
Create a newline-delimited JSON file for inputs. For example, input.json file with the following contents:
End of explanation
%%bash
curl -X POST --data @input.json http://localhost:8080/predict
Explanation: Make the predict call
End of explanation |
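The same request to the local AutoML container can be issued from Python with the requests library; a small illustrative sketch:
import requests

payload = {"instances": [
    {"sepal_length": 5.0, "sepal_width": 2.0, "petal_length": 3.5, "petal_width": 1.0},
    {"sepal_length": 5.3, "sepal_width": 3.7, "petal_length": 1.5, "petal_width": 0.2}
]}
resp = requests.post("http://localhost:8080/predict", json=payload)
print(resp.json())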
274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: Import Packages
Step2: Section 1
Step3: Last week we learned about the perils of the Wald interval for a proportion. If we calculated it here it would show
Step4: The original authors did do corrections and reported a range of 2.5% to 4.2%.
Key Terminology and Notation
Step5: Derivations & Notation
$\require{color}$
|Name(s)|Notation<br/>(from publication)|Calculation<br/>(From Confusion Matrix)|Name in<br/>Code Below|
|---|---|---|---|
Step6: Exercise
This assumes the accuracy applies equally to both positive and negative results. What if it were different? Examine a few scenarios with different accuracy rates for positive and negative results and compare to the precision found here (16.1%).
Section 2b
Step7: Now, suppose the specificity is 0.995 and sensitivity is 0.80. Estimate prevalence
Step8: Exercise
Examine how sensitive the resulting estimate of prevalence is to the chosen specificity and sensitivity. Make a graph illustrating the relationship.
Section 2c
Step9: PyMC3 models can be visualized as GraphViz objects. Here is our simple PGM
Step10: Now we can do MCMC sampling for our PGM
Step11: Review the chains for each of our variables
Step12: Recreate the plot from the article. Note that they did not show this model; they showed the one in 2e below.
Step13: What is the 95% confidence interval for Prevalence?
First, sample from the posterior
Step14: Now, request the 95% HDI for prevalence.
HDI is the highest density interval. In other words, it is the narrowest interval that contains 95% of the estimates.
Step15: Review the posterior densities, including those for sample_y = number of positive results.
Step16: Section 2e
Step17: PyMC3 models can be visualized as GraphViz objects. Here is our simple PGM
Step18: The visualization of the models can also include the parameters
Step19: Now we can do MCMC sampling for our PGM
Step20: Review the chains for each of our variables
Step21: Recreate the plot from the article (Figure 1). Ours looks very similar.
Step22: What is the 95% confidence interval for Prevalence?
First, sample from the posterior
Step23: Now, request the 95% HDI for prevalence.
HDI is the highest density interval. In other words, it is the narrowest interval that contains 95% of the estimates.
Step24: The authors also found the 95% interval for $\pi$ (prevalence) to be (0, 1.8%)!
Review the posterior densities, including those for sample_y = number of positive results. | Python Code:
#@title Install arviz and update pymc3
!pip install arviz -q
!pip install pymc3 -U -q
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Bayesian analysis of tests with unknown specificity and sensitivity
This Colab is based on this reference Article:
- GitHub Link
- https://www.medrxiv.org/content/10.1101/2020.05.22.20108944v2.full
- Bayesian analysis of tests with unknown specificity and sensitivity
- Andrew Gelman, Bob Carpenter
- medRxiv 2020.05.22.20108944;
- doi: https://doi.org/10.1101/2020.05.22.20108944
- Now published in Journal of the Royal Statistical Society: Series C (Applied Statistics)
- doi: 10.1111/rssc.12435
Setup
Install & Update
ArviZ: Exploratory analysis of Bayesian models
PyMC3: Probabilistic programming package
Restart Runtime when done
End of explanation
import random, math
import scipy.stats as stats
import pymc3 as pm
import arviz as az
import matplotlib.pyplot as plt
Explanation: Import Packages
End of explanation
sample = 3330
positive = 50
prevalence = positive / sample
print("Prevalence without accounting for uncertainty is {:.1%}".format(prevalence))
Explanation: Section 1: Background
Open the article and follow along
When testing for a rare disease, prevalence estimates can be highly sensitive to uncertainty in the specificity and sensitivity of the test.
Think about a test for the presence of COVID-19. If it is completely accurate, then prevalence can be estimated by testing a sample of people and then calculating the rate of positive results. Unfortunately, tests are not completely accurate.
What are specificity and sensitivity?
Think of specificity as the true negative rate: the number of negative test results as a fraction of the number of truly negative cases. The denominator includes those who received false positive results.
Think of sensitivity as the true positive rate: the number of positive test results as a fraction of the number of truly positive cases. The denominator includes those who received false negative results.
Correcting for false positives and false negatives is a probability problem!
Basis for this colab
In April, 2020, 3330 residents of Santa Clara County, California were recruited and tested for COVID-19. Results include 50 positive results.
Section 1: Prevalence while ignoring uncertainty
End of explanation
confidence_level = 0.95
z = stats.norm.ppf(1-(1-confidence_level)/2)
standard_error = z * math.sqrt(prevalence * (1 - prevalence) / sample)
prevalenceL = prevalence - standard_error
prevalenceU = prevalence + standard_error
print("Prevelence is estimated as {:.1%} with a 95% CI of ({:.1%},{:.1%})".format(prevalence,prevalenceL,prevalenceU))
Explanation: Last week we learned about the perils of the Wald interval for a proportion. If we calculated it here it would show:
End of explanation
#@title Confusion Matrix
%%html
<font size="3">
<table align="center" style="margin: 0px auto;">
<tbody align="center">
<tr>
<td></td>
<td></td>
<td colspan=2><center>Actual</center></td>
</tr>
<tr>
<td></td>
<td></td>
<td>Positive</td>
<td>Negative</td>
</tr>
<tr>
<td rowspan=2>Predicted</td>
<td>Positive</td>
<td>True Positive (<span style="color:white;background-color:green;">TP</span>)</td>
<td>False Positive (<span style="color:white;background-color:red;">FP</span>)</td>
</tr>
<tr>
<td>Negative</td>
<td>False Negative (<span style="color:black;background-color:yellow;">FN</span>)</td>
<td>True Negative (<span style="color:black;background-color:lightgreen;">TN</span>)</td>
</tr>
<thead><tr style="background-color:lightgrey"><td colspan=4><center>Confusion Matrix</td></tr></thead>
<tfoot><tr style="background-color:lightgrey"><td colspan=4><center><a href="https://en.wikipedia.org/wiki/Confusion_matrix">https://en.wikipedia.org/wiki/Confusion_matrix</a></td></tr></tfoot>
</tbody>
</table>
</font>
Explanation: The original authors did do corrections and reported a range of 2.5% to 4.2%.
Key Terminology and Notation:
End of explanation
sample = 1000
known_accuracy = 0.95
known_prevalence = 0.01
# The entire sample is tested.
# Everyone is actually either positive or negative.
actual_positive = known_prevalence * sample
actual_negative = sample - actual_positive
# Test Results
true_positive = actual_positive * known_accuracy
false_positive = actual_negative * (1-known_accuracy)
precision = true_positive / (true_positive+false_positive)
print("For a sample of {} with prevalence {:.1%} and test accuracy {:.1%}\nthe expected number of true positives is {:.1f} and false positives is {:.1f}".format(sample,known_prevalence,known_accuracy,true_positive,false_positive))
print("Positives that are true (Precision): ","{:.1%}".format(precision))
Explanation: Derivations & Notation
$\require{color}$
|Name(s)|Notation<br/>(from publication)|Calculation<br/>(From Confusion Matrix)|Name in<br/>Code Below|
|---|:---:|:---:|:---:|
|True Positive Rate<br/>Recall<br/>Sensitivity|$\delta$|$\frac{\textcolor{white}{\colorbox{green}{TP}}}{\textcolor{white}{\colorbox{green}{TP}} + \textcolor{black}{\colorbox{yellow}{FN}}}$|sensitivity|
|True Negative Rate<br/>Specificity|$\gamma$|$\frac{\textcolor{black}{\colorbox{lightgreen}{TN}}}{\textcolor{white}{\colorbox{red}{FP}} + \textcolor{black}{\colorbox{lightgreen}{TN}}}$|specificity|
|Prevalence|$\pi$|$\frac{\textcolor{white}{\colorbox{green}{TP}} + \textcolor{black}{\colorbox{yellow}{FN}}}{\textcolor{black}{\colorbox{none}{Total}}}$|prevalence|
|Accuracy||$\frac{\textcolor{white}{\colorbox{green}{TP}} + \textcolor{black}{\colorbox{lightgreen}{TN}}}{\textcolor{black}{\colorbox{none}{Total}}}$|accuracy|
|Precision||$\frac{\textcolor{white}{\colorbox{green}{TP}}}{\textcolor{white}{\colorbox{green}{TP}} + \textcolor{white}{\colorbox{red}{FP}}}$|precision|
Section 2: Modeling a test with uncertain sensitivity and specificity
Section 2a: Known accuracy and prevalence
Paragraph 1
This is a counterintuitive result for many people. A person tests positive for a disease. If the test has an accuracy of 95% and the prevalence is known to be 1%, then what is the probability the person actually has the tested disease? Is it 95%?
In the code below, we test 1000 people. The accuracy and prevalence are fixed. We find that the actual probability of being positive after testing positive is only 16.1%.
End of explanation
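To make the notation in the table concrete, here is a small worked sketch that computes each quantity from a hypothetical confusion matrix; the counts are illustrative and are not taken from the study.
# Worked example of the table above with hypothetical counts (not from the study).
TP, FP, FN, TN = 40, 30, 10, 920
total = TP + FP + FN + TN

sensitivity = TP / (TP + FN)      # delta: true positive rate
specificity = TN / (FP + TN)      # gamma: true negative rate
prevalence = (TP + FN) / total    # pi
accuracy = (TP + TN) / total
precision = TP / (TP + FP)

print("sensitivity {:.1%}, specificity {:.1%}, prevalence {:.1%}, accuracy {:.1%}, precision {:.1%}".format(
    sensitivity, specificity, prevalence, accuracy, precision))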
sample = 3330
positive = 50
positiveRate = positive/sample
confidence_level = 0.95
z = stats.norm.ppf(1-(1-confidence_level)/2)
standard_error = math.sqrt(positiveRate * (1 - positiveRate) / sample)
positiveRateL = positiveRate - z * standard_error
positiveRateU = positiveRate + z * standard_error
print("The positive rate is estimated as {:.1%} with a standard error of {:.1} and 95% CI of ({:.1%},{:.1%})".format(positiveRate,standard_error,positiveRateL,positiveRateU))
Explanation: Exercise
This assumes that the accuracy applies equally to both positive and negative results. What if it were different? Examine a few scenarios with different accuracy rates for positive and negative results and compare to the precision found here (16.1%).
Section 2b: Estimate Prevalence with Positive results
Paragraph 2
Extend our thinking here. Let's estimate the prevalence using the rate of positive tests. The authors give us formulas and derivations to help with this:
$p = (1 - \gamma)(1 - \pi) + \delta\pi$
$\textrm{PositiveRate} = (1 - \textrm{Specificity})(1 - \textrm{Prevalence}) + \textrm{Sensitivity}\cdot \textrm{Prevalence}$
$\pi = \frac{(p + \gamma - 1)}{(\delta + \gamma - 1)}$
$\textrm{Prevalence} = \frac{(\textrm{PositiveRate} + \textrm{Specificity} - 1)}{(\textrm{Sensitivity} + \textrm{Specificity} - 1)}$
End of explanation
# Suppose = Good Fixed Guess
specificity = 0.995
sensitivity = 0.80
prevalence = (positiveRate+specificity-1)/(sensitivity+specificity-1)
prevalence_standard_error = standard_error/(sensitivity+specificity-1)
prevalenceL = prevalence - z * prevalence_standard_error
prevalenceU = prevalence + z * prevalence_standard_error
print("Estimated prevalence of {:.1%} with standard error {:.1} and 95% CI of ({:.1%},{:.1%})".format(
prevalence, prevalence_standard_error,prevalenceL,prevalenceU))
Explanation: Now, suppose the specificity is 0.995 and sensitivity is 0.80. Estimate prevalence:
End of explanation
# Observed Values
sample_y = 50
sample_n = 3330
specificity_mean = 0.995
specificity_sigma = .1
sensitivity_mean = 0.80
sensitivity_sigma = .1
with pm.Model() as normal_priors:
# priors
prevalence = pm.Uniform("prevalence", lower=0.0, upper=1.0)
boundedNormal = pm.Bound(pm.Normal, lower=0.0, upper=1.0)
specificity = boundedNormal("specificity", mu=specificity_mean, sigma=specificity_sigma)
sensitivity = boundedNormal("sensitivity", mu=sensitivity_mean, sigma=sensitivity_sigma)
# variables
sample_p = ((1-specificity)*(1-prevalence)+sensitivity*prevalence)
# model
sample_y = pm.Binomial("sample_y", n=sample_n, p=sample_p, observed=sample_y)
Explanation: Exercise
Examine how sensitive the resulting estimate of prevalence is to the chosen specificity and sensitivity. Make a graph illustrating the relationship.
Section 2c: Difficulties with the classical approach
Paragraph 3
Issues:
- If positiveRate is less than 1-specificity, then prevalence < 0
- Uncertainty in specificity and sensitivity is hard to propagate through the expression for prevalence
Exercise
Reconsider the code in 2b with positive = 5 instead of 50. Does the estimate of prevalence make sense?
Section 2d: Bayesian Analysis with Normal Priors
Paragraphs 4-5
What if we could provide specificity and sensitivity with distributions rather than fixed values?
For this we use PyMC3. Think of this as algebra on random variables. A good way of thinking about the flow is:
- Building blocks - parameters and distributions
- Model Specification - joint distributions
- Inference - approximating integrals via sampling
We want to model $y \sim \textrm{binomial}(n,p)$ with:
- $p = (1 - \gamma)(1 - \pi) + \delta\pi$
- $\textrm{PositiveRate} = (1 - \textrm{Specificity})(1 - \textrm{Prevalence}) + \textrm{Sensitivity}\cdot \textrm{Prevalence}$
- $\gamma \sim \textrm{normal}(\mu_\gamma,\sigma_\gamma)$
- specificity
- $\delta \sim \textrm{normal}(\mu_\delta,\sigma_\delta)$
- sensitivity
Also, we want to constrain $\pi$ (prevalence), $\gamma$ (specificity), and $\delta$ (sensitivity) to lie between 0 and 1.
The following PyMC3 code block does this. Tip: it can be helpful to read the block from bottom to top.
End of explanation
graph = pm.model_graph.model_to_graphviz(model=normal_priors, formatting="plain")
graph.attr(rankdir='LR', size='8,16')
graph
Explanation: PyMC3 models can be visualized as GraphViz objects. Here is our simple PGM:
End of explanation
with normal_priors:
traceNormal = pm.sample(draws=2000, tune=10000, chains=1, target_accept=0.9)
Explanation: Now we can do MCMC sampling for our PGM:
End of explanation
pm.traceplot(traceNormal)
Explanation: Review the chains for each of our variables:
End of explanation
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(12,5))
ax1.scatter(traceNormal["specificity"], traceNormal["prevalence"], alpha=0.6)
ax1.set_ylabel('Prevalence, $\pi$ ', fontsize=15)
ax1.set_xlabel('Specificity, $\gamma$', fontsize=15)
ax2.hist(traceNormal["prevalence"], bins=12, edgecolor='black', linewidth=1.2)
ax2.set_xlabel('Prevalence, $\pi$ ', fontsize=15)
Explanation: Recreate the plot from the article. Note that they did not show this model; they showed the one in 2e below.
End of explanation
with normal_priors:
ppc = pm.sample_posterior_predictive(
traceNormal,
var_names=['prevalence', 'specificity', 'sensitivity', 'sample_y'])
Explanation: What is the 95% confidence interval for Prevalence?
First, sample from the posterior:
End of explanation
prevalence_hdi95 = pm.stats.hdi(ppc['prevalence'],hdi_prob=0.95)
print("Prevalence has 95% HDI of ({:0.2%},{:0.2%})".format(prevalence_hdi95[0],
prevalence_hdi95[1]))
Explanation: Now, request the 95% HDI for prevalence.
HDI is the highest density interval. In other words, it is the narrowest interval that contains 95% of the estimates.
End of explanation
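For intuition about what the HDI is, it can also be approximated directly from the posterior samples by scanning every interval that contains 95% of the sorted draws and keeping the narrowest one. The sketch below only illustrates the definition; pm.stats.hdi remains the recommended call.
import numpy as np

def hdi_brute_force(samples, hdi_prob=0.95):
    # Narrowest interval that contains hdi_prob of the sorted samples.
    sorted_samples = np.sort(np.asarray(samples).ravel())
    n = len(sorted_samples)
    n_included = int(np.floor(hdi_prob * n))
    widths = sorted_samples[n_included:] - sorted_samples[:n - n_included]
    best = np.argmin(widths)
    return sorted_samples[best], sorted_samples[best + n_included]

lo, hi = hdi_brute_force(ppc['prevalence'])
print("Brute-force 95% HDI for prevalence: ({:0.2%},{:0.2%})".format(lo, hi))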
pm.plot_posterior(ppc);
Explanation: Review the posterior densities, including those for sample_y = number of positive results.
End of explanation
# Observed Values
sample_y = 50
sample_n = 3330
specificity_y = 399
specificity_n = 401
sensitivity_y = 103
sensitivity_n = 122
with pm.Model() as binomial_priors:
# priors
prevalence = pm.Uniform("prevalence", lower=0.0, upper=1.0)
specificity = pm.Uniform("specificity", lower=0.0, upper=1.0)
sensitivity = pm.Uniform("sensitivity", lower=0.0, upper=1.0)
# variables
sample_p = ((1-specificity)*(1-prevalence)+sensitivity*prevalence)
# model
m_sample_y = pm.Binomial("sample_y", n=sample_n, p=sample_p, observed=sample_y)
m_specificity_y = pm.Binomial("specificity_y", n=specificity_n, p=specificity, observed=specificity_y)
m_sensitivity_y = pm.Binomial("sensitivity_y", n=sensitivity_n, p=sensitivity, observed=sensitivity_y)
Explanation: Section 2e: Bayesian Analysis with Data from Previous Trials
Paragraphs 6-8
In the case of the authors' reference, they had prior information on specificity and sensitivity from previous trials on known negative and known positive subjects.
Again, we want to model $y \sim \textrm{binomial}(n,p)$. This time with:
- $p = (1 - \gamma)(1 - \pi) + \delta\pi$
- $\textrm{PositiveRate} = (1 - \textrm{Specificity})(1 - \textrm{Prevalence}) + \textrm{Sensitivity}\cdot \textrm{Prevalence}$
- $y_\gamma \sim \textrm{binomial}(n_\gamma,\gamma)$
- specificity
- $y_\delta \sim \textrm{binomial}(n_\delta,\delta)$
- sensitivity
Also, we want to constrain $\pi$ (prevalence), $\gamma$ (specificity), and $\delta$ (sensitivity) to lie between 0 and 1. In this instance we do that with a uniform distribution.
The following PyMC3 code block does this. Tip: it can be helpful to read the block from bottom to top.
End of explanation
graph = pm.model_graph.model_to_graphviz(model=binomial_priors, formatting="plain")
graph.attr(rankdir='LR', size='8,16')
graph
Explanation: PyMC3 models can be visualized as GraphViz objects. Here is our simple PGM:
End of explanation
graph = pm.model_graph.model_to_graphviz(model=binomial_priors, formatting="plain_with_params")
graph.attr(rankdir='LR', size='8,16')
graph
Explanation: The visualization of the models can also include the parameters:
End of explanation
with binomial_priors:
traceBinomial = pm.sample(draws=2000, tune=10000, chains=1, target_accept=0.9)
Explanation: Now we can do MCMC sampling for our PGM:
End of explanation
pm.traceplot(traceBinomial);
Explanation: Review the chains for each of our variables:
End of explanation
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(12,5))
ax1.scatter(traceBinomial["specificity"], traceBinomial["prevalence"], alpha=0.6)
ax1.set_ylabel('Prevalence, $\pi$ ', fontsize=15)
ax1.set_xlabel('Specificity, $\gamma$', fontsize=15)
ax2.hist(traceBinomial["prevalence"], bins=12, edgecolor='black', linewidth=1.2)
ax2.set_xlabel('Prevalence, $\pi$ ', fontsize=15)
Explanation: Recreate the plot from the article (Figure 1). Ours looks very similar.
End of explanation
with binomial_priors:
ppc = pm.sample_posterior_predictive(
traceBinomial,
var_names=['prevalence','specificity','sensitivity','sample_y'])
Explanation: What is the 95% confidence interval for Prevalence?
First, sample from the posterior:
End of explanation
prevalence_hdi95 = pm.stats.hdi(ppc['prevalence'],hdi_prob=0.95)
print("Prevalence has 95% HDI of ({:0.1%},{:0.1%})".format(prevalence_hdi95[0],
prevalence_hdi95[1]))
Explanation: Now, request the 95% HDI for prevalence.
HDI is the highest density interval. In other words, it is the narrowest interval that contains 95% of the estimates.
End of explanation
pm.plot_posterior(ppc);
Explanation: The authors also found the 95% interval for $\pi$ (prevalence) to be (0, 1.8%)!
Review the posterior densities, including those for sample_y = number of positive results.
End of explanation |
275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2>Comparative analysis of TSH receptor antibody assay methods</h2>
<h3>Routine method
Step1: <h4>Import of the data file</h4>
Step2: Shared environment variables
Step3: <h3>Add two columns with pos/neg based on the cut-off</h3>
Step4: <h4>Compute the frequency table</h4>
<font color='red'> module used: scipy.stats </font> http
Step5: <h4>Chi-square test</h4>
Step6: <h4>Fisher's exact test</h4>
Step7: <h3>The correct test for this case is McNemar's test
Step8: -
<h2>Regression analysis</h2>
Step9: We now run the regression study with three different models
<font color='red'>statsmodels and scipy modules</font>
Step10: Orthogonal Distance Regression (Deming regression)
Step11: <h4>Bias</h4>
Step12: Create columns with Positive/Negative/Equivocal values based on the cut-offs specified by the manufacturers
Step13: Creo una colonna che assume valore positivo solo nel caso in cui i due metodi abbiano dato valore opposto N con P o vieceversa) | Python Code:
%matplotlib inline
# import the libraries
import pandas as pd
import os
from __future__ import print_function,division
import numpy as np
import seaborn as sns
os.environ["NLS_LANG"] = "ITALIAN_ITALY.UTF8"
Explanation: <h2>Comparative analysis of TSH receptor antibody assay methods</h2>
<h3>Routine method:</h3>
<ul>
<li>Brahms Trak Human, LIA method</li>
<li><small>Siemens XPi TSI Assay, chemiluminescence on Immulite 2000</small></li>
</ul>
<h3>Comparison method: Thermo Fisher anti-TSH-R ELiA on ImmunoCAP 250</h3>
Data analysis performed with the Continuum Analytics suite https://www.continuum.io/
based on the following Python modules:
<ul>
<li>Pandas: data handling and basic analysis</li>
<li>Matplotlib: basic plots</li>
<li>Seaborn: advanced plots</li>
<li>Statsmodels and scipy: advanced analyses</li>
</ul>
** all software used is open source**
End of explanation
# import the data file
path=r"D:\d\05 Lavscien\autoimmunita\corr_thibya\compar_thibya_brahms.csv"
database=pd.read_csv(path,sep=';',usecols=[1, 2, 3,4,5])# columns to use
database['valore_cap']=database['valore_cap'].apply(lambda x: round(x,2))
database.drop(['codificato','accettazione'],axis=1,inplace=True)
database.tail(6)
Explanation: <h4>Import of the data file</h4>
End of explanation
# shared environment variables
cutoff_cap=2.9 # between 2.9 and 3.3 is equivocal
#cutoff_cap=3.3
cutoff_rout=1 # Brahms: 1-1.5 equivocal
METODO_ROUTINE="Brahms Trak Human LIA"
#METODO_ROUTINE="Siemens Immulite 2000 Chemil."
CAP="Thermo Fisher ELIA anti-TSH-R Cap250 "
Explanation: Shared environment variables
End of explanation
database['cap_PN']=(database['valore_cap']>=cutoff_cap)
database['rut_PN']=(database['valore_rut']>=cutoff_rout)
database['cap_PN'].replace([True,False],['Pos','Neg'],inplace=True)
database['rut_PN'].replace([True,False],['Pos','Neg'],inplace=True)
database.head()
database.describe()
Explanation: <h3>Add two columns with pos/neg based on the cut-off</h3>
End of explanation
# scipy modules
from scipy.stats import chi2_contingency, fisher_exact
pd.crosstab(database.cap_PN,database.rut_PN)
ax=pd.crosstab(database.cap_PN,database.rut_PN).plot(kind='bar',stacked=True, )
ax.legend(['Neg','Pos'])
ax.set_xlabel(CAP)
Explanation: <h4>Compute the frequency table</h4>
<font color='red'> module used: scipy.stats </font> http://docs.scipy.org/doc/scipy/reference/stats.html
End of explanation
# test chi square
chi2, pvalue, dof, ex = chi2_contingency(pd.crosstab(database.cap_PN,database.rut_PN))
print ('p-value: {}'.format(pvalue))
Explanation: <h4>Chi-square test</h4>
End of explanation
# Fisher's exact test
oddsratio, pvalue =fisher_exact(pd.crosstab(database.cap_PN,database.rut_PN))
print ('p-value: {}'.format(pvalue))
Explanation: <h4>Fisher's exact test</h4>
End of explanation
from statsmodels.sandbox.stats.runs import mcnemar
stat,p=mcnemar(pd.crosstab(database.cap_PN,database.rut_PN))
print("valore di p:{}".format(p))
Explanation: <h3>The correct test for this case is McNemar's test:</h3>
a non-parametric test for paired data with binary nominal responses
<h4>Exact McNemar test (for the dependence of the variables)</h4>
<font color='red'> module used: statsmodels </font> http://statsmodels.sourceforge.net/stable/index.html
End of explanation
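For intuition, McNemar's statistic depends only on the discordant pairs, i.e. the samples classified positive by one method and negative by the other. The sketch below shows the chi-square approximation with continuity correction; it assumes both Pos and Neg appear in each margin of the table, and the exact version used above remains preferable when the discordant counts are small.
from scipy.stats import chi2

# McNemar's test uses only the discordant cells of the paired 2x2 table.
table = pd.crosstab(database.cap_PN, database.rut_PN)
b = table.loc['Neg', 'Pos']   # comparison negative, routine positive
c = table.loc['Pos', 'Neg']   # comparison positive, routine negative

statistic = (abs(b - c) - 1) ** 2 / (b + c)   # chi-square approximation with continuity correction
p_value = chi2.sf(statistic, df=1)
print("discordant pairs: {} and {}, chi2 = {:.3f}, p-value: {:.4f}".format(b, c, statistic, p_value))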
# scatter plot
import matplotlib.pyplot as plt
fig = plt.figure()
fig.suptitle('Scatterplot', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
ax.set_xlabel(METODO_ROUTINE)
ax.set_ylabel(CAP)
ax.plot(database.valore_rut,database.valore_cap,'o')
plt.show()
Explanation: -
<h2>Regression analysis</h2>
End of explanation
# with statsmodels: ordinary least squares regression
##res_ols = sm.OLS(y, statsmodels.tools.add_constant(X)).fit()  # for the older version
import statsmodels.api as sm
#sm.OLS(Y,X)
X = sm.add_constant(database.valore_rut )
modello_minquad=sm.OLS(database.valore_cap,X)
regressione_minquad=modello_minquad.fit()
regressione_minquad.summary()
# with statsmodels: robust regression (Robust Linear Model)
X = sm.add_constant(database.valore_rut)
modello=sm.RLM(database.valore_cap,X)
regressione_robusta=modello.fit()
regressione_robusta.summary()
# import the seaborn library for better plotting
sns.set(color_codes=True)
ax = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="g",robust=True)
ax = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="b")
ax.set_title('OLS + RLM linear regression')
ax.set_xlabel(METODO_ROUTINE)
ax.set_ylabel(CAP)
ax.set(ylim=(0, None))
ax.set(xlim=(0, None))
sns.set(color_codes=True)
ax2 = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="g",robust=True)
ax2 = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="b")
ax2.set_title('OLS + RLM linear regression')
ax2.set_xlabel(METODO_ROUTINE)
ax2.set_ylabel(CAP)
ax2.set(ylim=(0, 20))
ax2.set(xlim=(0, 8))
ax=sns.jointplot(x=database.valore_rut,y=database.valore_cap, kind="reg");
ax.set_axis_labels(METODO_ROUTINE,CAP)
Explanation: We now run the regression study with three different models
<font color='red'>statsmodels and scipy modules</font>
End of explanation
# ODR regression (orthogonal distance regression, Deming)
import scipy.odr as odr
# fitting model
def funzione(B,x):
return B[0]*x+B[1]
linear= odr.Model(funzione)
variabili=odr.Data(database.valore_rut,database.valore_cap)
regressione_ortogonale=odr.ODR(variabili,linear,beta0=[1., 2.])
output=regressione_ortogonale.run()
#print (odr.Model)
output.pprint()
print("coefficente angolare: {ang}, Intercetta: {int}".format(ang=output.beta[0],int=output.beta[1]))
Explanation: Orthogonal Distance Regression (Deming regression)
End of explanation
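When the two methods are assumed to have equal error variances (an assumption, not something estimated from these data), the Deming slope also has a closed form that can be compared against the ODR output above. A minimal sketch:
import numpy as np

# Closed-form Deming regression assuming an error-variance ratio lam = 1 (orthogonal case).
x_m = database.valore_rut.values
y_m = database.valore_cap.values
lam = 1.0

sxx = np.var(x_m, ddof=1)
syy = np.var(y_m, ddof=1)
sxy = np.cov(x_m, y_m)[0, 1]

slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
intercept = y_m.mean() - slope * x_m.mean()
print("Deming (lam=1): slope = {:.4f}, intercept = {:.4f}".format(slope, intercept))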
database_b=database
database_b['bias']=database['valore_rut']-database['valore_cap']
database_b.head(5)
sns.distplot(database_b.bias)
database.describe()
Explanation: <h4>Bias</h4>
End of explanation
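A common companion view of the bias (sketched here as an addition, using the usual mean difference plus or minus 1.96 standard deviations as agreement limits) plots the difference between the methods against their mean:
import matplotlib.pyplot as plt

# Mean bias and 95% limits of agreement (mean difference +/- 1.96 SD), plotted against the method average.
mean_pair = (database_b.valore_rut + database_b.valore_cap) / 2
bias_mean = database_b.bias.mean()
bias_sd = database_b.bias.std()
loa_low, loa_high = bias_mean - 1.96 * bias_sd, bias_mean + 1.96 * bias_sd

fig, ax = plt.subplots(figsize=(8, 5))
ax.scatter(mean_pair, database_b.bias, alpha=0.6)
ax.axhline(bias_mean, color='red', linestyle='-')
ax.axhline(loa_low, color='red', linestyle='--')
ax.axhline(loa_high, color='red', linestyle='--')
ax.set_xlabel('Mean of the two methods')
ax.set_ylabel('Difference (routine - comparison)')
print("Mean bias: {:.2f}, 95% limits of agreement: ({:.2f},{:.2f})".format(bias_mean, loa_low, loa_high))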
def discret_cap(x):
if x<2.9:
return 'N'
elif x>=3.3:
return 'P'
else:
return 'D'
def discret_bra(x):
if x<1:
return 'N'
elif x>=1.5:
return 'P'
else:
return 'D'
database['cap_PND']=database['valore_cap'].apply(discret_cap)
database['rut_PND']=database['valore_rut'].apply(discret_bra)
database.head(12)
pd.crosstab(database.cap_PND,database.rut_PND)
ax=pd.crosstab(database.cap_PND,database.rut_PND).plot(kind='bar',stacked=True, )
ax.legend(['Bra Dub','Bra Neg','Bra Pos'])
ax.set_xlabel(CAP)
Explanation: Create columns with Positive/Negative/Equivocal values based on the cut-offs specified by the manufacturers
End of explanation
def no_match(x):
if (x['cap_PND']==x['rut_PND']or x['cap_PND']=='D' or x['rut_PND']=='D'):
return 0
else:
return 1
#df.apply(lambda row: my_test(row['a'], row['c']), axis=1)
database['mismatch']=database.apply(no_match,axis=1)
#database['valore_cap'].apply(discret_cap)
per_disc=round(100*database['mismatch'].sum()/database['mismatch'].count(),2)
database.head(20)
print("classificazioni deiverse: {} su un totale di {}".format(database['mismatch'].sum(),database['mismatch'].count()))
print(" pecentuale di discordanza e dello {}%".format(per_disc))
Explanation: Create a column that is positive only when the two methods gave opposite results (N with P or vice versa)
End of explanation |
276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
Simple water-table solution with recharge
This problem is an unconfined system with a uniform recharge rate, a horizontal bottom, and flow between constant-head boundaries in column 1 and 100. MODFLOW models cannot match the analytical solution exactly because they do not allow recharge to constant-head cells. Constant-head cells in column 1 and 100 were made very thin (0.1 m) in the direction of flow to minimize the effect of recharge applied to them. The analytical solution for this problem can be written as
Step1: Function to calculate the analytical solution at specified points in an aquifer
Step2: Model data required to create the model files and calculate the analytical solution
Step3: Create a flopy object to create and run the MODFLOW-NWT datasets for this problem
Step4: Read the simulated MODFLOW-NWT model results
Step5: Plot the MODFLOW-NWT results and compare to the analytical solution | Python Code:
%matplotlib inline
from __future__ import print_function
import sys
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy
import flopy.utils as fputl
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mfnwt'
if platform.system() == 'Windows':
exe_name = 'MODFLOW-NWT.exe'
mfexe = exe_name
modelpth = os.path.join('data')
modelname = 'watertable'
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
Explanation: FloPy
Simple water-table solution with recharge
This problem is an unconfined system with a uniform recharge rate, a horizontal bottom, and flow between constant-head boundaries in column 1 and 100. MODFLOW models cannot match the analytical solution exactly because they do not allow recharge to constant-head cells. Constant-head cells in column 1 and 100 were made very thin (0.1 m) in the direction of flow to minimize the effect of recharge applied to them. The analytical solution for this problem can be written as:
$h = \sqrt{b_{1}^{2} - \frac{x}{L} (b_{1}^{2} - b_{2}^{2}) + (\frac{R x}{K}(L-x))} + z_{bottom}$
where $R$ is the recharge rate, $K$ is the hydraulic conductivity in the horizontal direction, $b_1$ is the specified saturated thickness at the left boundary, $b_2$ is the specified saturated thickness at the right boundary, $x$ is the distance from the left boundary, $L$ is the length of the model domain, and $z_{bottom}$ is the elevation of the bottom of the aquifer.
The model consists of a grid of 100 columns, 1 row, and 1 layer; a bottom altitude of 0 m; constant heads of 20 and 11 m in columns 1 and 100, respectively; a recharge rate of 0.001 m/d; and a horizontal hydraulic conductivity of 50 m/d. The discretization is 0.1 m in the row direction for the constant-head cells (columns 1 and 100) and 50 m for all other cells.
End of explanation
def analyticalWaterTableSolution(h1, h2, z, R, K, L, x):
h = np.zeros((x.shape[0]), np.float)
#dx = x[1] - x[0]
#x -= dx
b1 = h1 - z
b2 = h2 - z
h = np.sqrt(b1**2 - (x/L)*(b1**2 - b2**2) + (R * x / K) * (L - x)) + z
return h
Explanation: Function to calculate the analytical solution at specified points in an aquifer
End of explanation
# model dimensions
nlay, nrow, ncol = 1, 1, 100
# cell spacing
delr = 50.
delc = 1.
# domain length
L = 5000.
# boundary heads
h1 = 20.
h2 = 11.
# ibound
ibound = np.ones((nlay, nrow, ncol), dtype=np.int)
# starting heads
strt = np.zeros((nlay, nrow, ncol), dtype=np.float)
strt[0, 0, 0] = h1
strt[0, 0, -1] = h2
# top of the aquifer
top = 25.
# bottom of the aquifer
botm = 0.
# hydraulic conductivity
hk = 50.
# location of cell centroids
x = np.arange(0.0, L, delr) + (delr / 2.)
# location of cell edges
xa = np.arange(0, L+delr, delr)
# recharge rate
rchrate = 0.001
# calculate the head at the cell centroids using the analytical solution function
hac = analyticalWaterTableSolution(h1, h2, botm, rchrate, hk, L, x)
# calculate the head at the cell edges using the analytical solution function
ha = analyticalWaterTableSolution(h1, h2, botm, rchrate, hk, L, xa)
# ghbs
# ghb conductance
b1, b2 = 0.5*(h1+hac[0]), 0.5*(h2+hac[-1])
c1, c2 = hk*b1*delc/(0.5*delr), hk*b2*delc/(0.5*delr)
# dtype
ghb_dtype = flopy.modflow.ModflowGhb.get_default_dtype()
print(ghb_dtype)
# build ghb recarray
stress_period_data = np.zeros((2), dtype=ghb_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
# fill ghb recarray
stress_period_data[0] = (0, 0, 0, h1, c1)
stress_period_data[1] = (0, 0, ncol-1, h2, c2)
Explanation: Model data required to create the model files and calculate the analytical solution
End of explanation
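As a quick sanity check (a sketch, not part of the original workflow), the analytical head at the cell edges should reproduce the specified boundary heads at x = 0 and x = L:
# The analytical solution should return the boundary heads at the two ends of the domain.
print('h at x=0: {:.4f} m (specified h1 = {:.1f} m)'.format(ha[0], h1))
print('h at x=L: {:.4f} m (specified h2 = {:.1f} m)'.format(ha[-1], h2))
assert np.allclose([ha[0], ha[-1]], [h1, h2])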
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth, version='mfnwt')
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol,
delr=delr, delc=delc,
top=top, botm=botm,
perlen=1, nstp=1, steady=True)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt)
lpf = flopy.modflow.ModflowUpw(mf, hk=hk, laytyp=1)
ghb = flopy.modflow.ModflowGhb(mf, stress_period_data=stress_period_data)
rch = flopy.modflow.ModflowRch(mf, rech=rchrate, nrchop=1)
oc = flopy.modflow.ModflowOc(mf)
nwt = flopy.modflow.ModflowNwt(mf, linmeth=2, iprnwt=1, options='COMPLEX')
mf.write_input()
# remove existing heads results, if necessary
try:
os.remove(os.path.join(modelpth, '{0}.hds'.format(modelname)))
except:
pass
# run existing model
mf.run_model()
Explanation: Create a flopy object to create and run the MODFLOW-NWT datasets for this problem
End of explanation
# Create the headfile object
headfile = os.path.join(modelpth, '{0}.hds'.format(modelname))
headobj = fputl.HeadFile(headfile, precision='single')
times = headobj.get_times()
head = headobj.get_data(totim=times[-1])
Explanation: Read the simulated MODFLOW-NWT model results
End of explanation
fig = plt.figure(figsize=(16,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0.25, hspace=0.25)
ax = fig.add_subplot(1, 3, 1)
ax.plot(xa, ha, linewidth=8, color='0.5', label='analytical solution')
ax.plot(x, head[0, 0, :], color='red', label='MODFLOW-NWT')
leg = ax.legend(loc='lower left')
leg.draw_frame(False)
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Head, in m')
ax = fig.add_subplot(1, 3, 2)
ax.plot(x, head[0, 0, :] - hac, linewidth=1, color='blue')
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Error, in m')
ax = fig.add_subplot(1, 3, 3)
ax.plot(x, 100.*(head[0, 0, :] - hac)/hac, linewidth=1, color='blue')
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Percent Error');
Explanation: Plot the MODFLOW-NWT results and compare to the analytical solution
End of explanation |
277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Binomial and negative binomial distributions
Today's post is prompted by this question from Reddit
Step1: Solution
There are two ways to solve this problem. One is to relate the desired distribution to the binomial distribution.
If the probability of success on every trial is p, the probability of getting the kth success on the nth trial is
PMF(n; k, p) = BinomialPMF(k-1; n-1, p) p
That is, the probability of getting k-1 successes in n-1 trials, times the probability of getting the kth success on the nth trial.
Here's a function that computes it
Step2: And here's an example using the parameters in the question.
Step3: We can solve the same problem using the negative binomial distribution, but it requires some translation from the parameters of the problem to the conventional parameters of the binomial distribution.
The negative binomial PMF is the probability of getting r non-terminal events before the kth terminal event. (I am using "terminal event" instead of "success" and "non-terminal" event instead of "failure" because in the context of the negative binomial distribution, the use of "success" and "failure" is often reversed.)
If n is the total number of events, n = k + r, so
r = n - k
If the probability of a terminal event on every trial is p, the probability of getting the kth terminal event on the nth trial is
PMF(n; k, p) = NegativeBinomialPMF(n-k; k, p) p
That is, the probability of n-k non-terminal events on the way to getting the kth terminal event.
Here's a function that computes it
Step4: Here's the same example
Step5: And confirmation that the results are the same within floating point error.
Step6: Using the PMF, we can compute the mean and standard deviation
Step7: To compute percentiles, we can convert to a CDF (which computes the cumulative sum of the PMF)
Step8: And here are the 10th and 90th percentiles. | Python Code:
from __future__ import print_function, division
import thinkplot
from thinkstats2 import Pmf, Cdf
from scipy import stats
from scipy import special
%matplotlib inline
Explanation: Binomial and negative binomial distributions
Today's post is prompted by this question from Reddit:
How do I calculate the distribution of the number of selections (with replacement)
I need to make before obtaining k? For example, let's say I am picking marbles from
a bag with replacement. There is a 10% chance of green and 90% of black. I want k=5 green
marbles. What is the distribution number of times I need to take a marble before getting 5?
I believe this is a geometric distribution. I see how to calculate the cumulative
probability given n picks, but I would like to generalize it so that for any value of k
(number of marbles I want), I can tell you the mean, 10% and 90% probability for the
number of times I need to pick from it.
Another way of saying this is, how many times do I need to pull on a slot machine
before it pays out given that each pull is independent?
Note: I've changed the notation in the question to be consistent with convention.
End of explanation
def MakePmfUsingBinom(k, p, high=100):
pmf = Pmf()
for n in range(1, high):
pmf[n] = stats.binom.pmf(k-1, n-1, p) * p
return pmf
Explanation: Solution
There are two ways to solve this problem. One is to relate the desired distribution to the binomial distribution.
If the probability of success on every trial is p, the probability of getting the kth success on the nth trial is
PMF(n; k, p) = BinomialPMF(k-1; n-1, p) p
That is, the probability of getting k-1 successes in n-1 trials, times the probability of getting the kth success on the nth trial.
Here's a function that computes it:
End of explanation
pmf = MakePmfUsingBinom(5, 0.1, 200)
thinkplot.Pdf(pmf)
Explanation: And here's an example using the parameters in the question.
End of explanation
def MakePmfUsingNbinom(k, p, high=100):
pmf = Pmf()
for n in range(1, high):
r = n-k
pmf[n] = stats.nbinom.pmf(r, k, p)
return pmf
Explanation: We can solve the same problem using the negative binomial distribution, but it requires some translation from the parameters of the problem to the conventional parameters of the binomial distribution.
The negative binomial PMF is the probability of getting r non-terminal events before the kth terminal event. (I am using "terminal event" instead of "success" and "non-terminal" event instead of "failure" because in the context of the negative binomial distribution, the use of "success" and "failure" is often reversed.)
If n is the total number of events, n = k + r, so
r = n - k
If the probability of a terminal event on every trial is p, the probability of getting the kth terminal event on the nth trial is
PMF(n; k, p) = NegativeBinomialPMF(n-k; k, p) p
That is, the probability of n-k non-terminal events on the way to getting the kth terminal event.
Here's a function that computes it:
End of explanation
pmf2 = MakePmfUsingNbinom(5, 0.1, 200)
thinkplot.Pdf(pmf2)
Explanation: Here's the same example:
End of explanation
diffs = [abs(pmf[n] - pmf2[n]) for n in pmf]
max(diffs)
Explanation: And confirmation that the results are the same within floating point error.
End of explanation
pmf.Mean(), pmf.Std()
Explanation: Using the PMF, we can compute the mean and standard deviation:
End of explanation
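These values can be checked against the closed-form moments of the number of trials needed for the kth success: the mean is k/p and the standard deviation is sqrt(k(1-p))/p. The small discrepancy with the PMF above comes from truncating it at 200 trials.
import numpy as np

# Closed-form mean and standard deviation of the number of trials to the kth success.
k, p = 5, 0.1
analytic_mean = k / p
analytic_std = np.sqrt(k * (1 - p)) / p
print(analytic_mean, analytic_std)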
cdf = Cdf(pmf)
scale = thinkplot.Cdf(cdf)
Explanation: To compute percentiles, we can convert to a CDF (which computes the cumulative sum of the PMF)
End of explanation
cdf.Percentile(10), cdf.Percentile(90)
Explanation: And here are the 10th and 90th percentiles.
End of explanation |
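As a final sanity check (a simulation sketch, not part of the original post), drawing trials at random and counting how many picks it takes to reach the 5th success should reproduce roughly the same mean and percentiles:
import numpy as np

def simulate_trials_until_kth_success(k=5, p=0.1, iters=100000, seed=17):
    # Each of the k successes takes a geometric number of trials; their sum is the total.
    rng = np.random.RandomState(seed)
    return rng.geometric(p, size=(iters, k)).sum(axis=1)

trials = simulate_trials_until_kth_success()
print('simulated mean', trials.mean())
print('simulated 10th and 90th percentiles', np.percentile(trials, [10, 90]))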
278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Classification
In this project, we'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. We'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. We'll build convolutional, max pooling, dropout, and fully connected layers. At the end, we'll get to see our neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step1: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step3: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step5: One-hot encode
Just like the previous code cell, we'll be implementing a function for preprocessing. This time, we'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Step6: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but we don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step7: Check Point
This is our first checkpoint. If we ever decide to come back to this notebook or have to restart the notebook, we can start from here. The preprocessed data has been saved to disk.
Step11: Build the network
For the neural network, we'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step13: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, we should implement the function conv2d_maxpool to apply convolution then max pooling.
Step15: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size).
Step17: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).
Step19: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).
Note
Step21: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step23: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step25: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step26: Hyperparameters
Tune the following parameters
Step27: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step28: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step30: Checkpoint
The model has been saved to disk.
Test Model
Test the model against the test dataset. This will be our final accuracy. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, we'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. We'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. We'll build convolutional, max pooling, dropout, and fully connected layers. At the end, we'll get to see our neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 3
sample_id = 2
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
#Implement Function
result_normalize = x/255
#print(result_normalize[0])
return result_normalize
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
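Dividing by 255 works here because CIFAR-10 pixel values already span 0 to 255. As an aside (a sketch, not required by the project), a more general min-max form rescales by the observed range, which matches x/255 whenever both 0 and 255 occur in the batch:
import numpy as np

def normalize_minmax(x):
    # General min-max scaling to [0, 1]; equivalent to x / 255 for 8-bit images
    # whose batch actually contains both 0 and 255.
    x = np.asarray(x, dtype=np.float32)
    return (x - x.min()) / (x.max() - x.min())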
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
#Implement Function
#print(x)
result_one_hot_encode = np.eye(10)[x]
#print(result_one_hot_encode)
return result_one_hot_encode
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, we'll be implementing a function for preprocessing. This time, we'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
End of explanation
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but we don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is our first checkpoint. If we ever decide to come back to this notebook or have to restart the notebook, we can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
#Implement Function
print(image_shape)
return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
#Implement Function
#print(n_classes)
return tf.placeholder(tf.float32, [None, n_classes], name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, we'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
#Implement Function
print(conv_ksize)
print(x_tensor.shape)
print(conv_num_outputs)
weights = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[1], int(x_tensor.shape[3]), conv_num_outputs ),\
mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
print(conv_strides)
conv_layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding = 'SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
print(pool_ksize)
print(pool_strides)
conv_layer = tf.nn.max_pool(conv_layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1], \
strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME' )
return conv_layer
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, we should implement the function conv2d_maxpool to apply convolution then max pooling.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
#Implement Function
print(x_tensor.shape)
flattened_x_tensor = tf.reshape(x_tensor, [-1, int(x_tensor.shape[1]) * int(x_tensor.shape[2]) * int(x_tensor.shape[3]) ])
print(flattened_x_tensor.shape)
return flattened_x_tensor
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size).
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# Implement Function
print(x_tensor.shape)
print(num_outputs)
weights = tf.Variable(tf.truncated_normal( (int(x_tensor.shape[1]), num_outputs), mean=0, stddev=0.1 ) )
bias = tf.Variable(tf.zeros(num_outputs))
layer = tf.add(tf.matmul(x_tensor, weights), bias)
print(layer.shape)
return layer
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
#Implement Function
print(x_tensor.shape)
print(num_outputs)
weights = tf.Variable(tf.truncated_normal((int(x_tensor.shape[1]), num_outputs), mean=0, stddev=0.1 ) )
bias = tf.Variable(tf.zeros(num_outputs))
layer = tf.add(tf.matmul(x_tensor, weights), bias )
print(layer.shape)
return layer
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv1 = conv2d_maxpool(x, 64, (5,5), (1,1), (3,3), (2,2))
#conv1 = conv2d_maxpool(x, 64, (8,8), (1,1), (3,3), (2,2))
#conv1 = conv2d_maxpool(x, 64, (5,5), (1,1), (3,3), (2,2))
#print('dsdsa')
#print(conv1.shape)
conv2 = conv2d_maxpool(conv1, 128, (3,3), (1,1), (2,2), (2,2))
#conv2 = conv2d_maxpool(conv1, 32, (4,4), (1,1), (3,3), (2,2))
#conv2 = conv2d_maxpool(conv1, 32, (3,3), (1,1), (3,3), (2,2))
conv3 = conv2d_maxpool(conv2, 256, (2,2), (1,1), (2,2), (2,2))
#conv3 = conv2d_maxpool(conv2, 16, (2,2), (1,1), (2,2), (2,2))
#conv3 = conv2d_maxpool(conv2, 16, (2,2), (1,1), (2,2), (2,2) )
# Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
fc1 = flatten(conv3)
# Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc1 = fully_conn(fc1, 1024)
fc1 = fully_conn(fc1, 512)
fc1 = fully_conn(fc1, 256)
#fc1 = fully_conn(fc1, 256)
#c1 = fully_conn(fc1, 256)
fc1 = tf.nn.relu(fc1)
# Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
fc1 = tf.nn.dropout(fc1, keep_prob)
outputvar = output(fc1, 10)
# return output
return outputvar
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply Convolution and Max Pool layers
Apply a Flatten Layer
Apply Fully Connected Layers
Apply an Output Layer
Return the output and apply dropout
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# Implement Function
#print(optimizer)
#print(feature_batch.size)
#cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=feature_batch, labels=label_batch))
#optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
session.run(optimizer, feed_dict={x: feature_batch, y:label_batch, keep_prob:keep_probability})
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# Implement Function
loss = session.run(cost, feed_dict={x:feature_batch, y:label_batch, keep_prob:1.})
valid_acc = session.run(accuracy, feed_dict={x:valid_features, y:valid_labels, keep_prob:1.})
print ('Loss: {:>10.4f} Validation Accuracy: {:.6f} '.format(loss,valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# Tune Parameters
epochs = 40
batch_size = 128
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for.
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test the model against the test dataset. This will be our final accuracy.
End of explanation |
279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Filters
Step1: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
Step2: Baxter-King approximate band-pass filter
Step3: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
Step4: Christiano-Fitzgerald approximate band-pass filter | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
print(index)
dta.index = index
del dta['year']
del dta['quarter']
print(sm.datasets.macrodata.NOTE)
print(dta.head(10))
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
dta.realgdp.plot(ax=ax);
legend = ax.legend(loc = 'upper left');
legend.prop.set_size(20);
Explanation: Time Series Filters
End of explanation
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)
gdp_decomp = dta[['realgdp']].copy()
gdp_decomp["cycle"] = gdp_cycle
gdp_decomp["trend"] = gdp_trend
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16);
legend = ax.get_legend()
legend.prop.set_size(20);
Explanation: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
End of explanation
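A small aside (not part of the original notebook): hpfilter's smoothing parameter defaults to lambda = 1600, the conventional value for quarterly data such as this series; for other frequencies you would pass the value explicitly through the lamb argument. A minimal, hedged sketch reusing the dta frame from above:
# the quarterly default written out explicitly; 129600 is a commonly quoted choice for monthly data
cycle_q, trend_q = sm.tsa.filters.hpfilter(dta.realgdp, lamb=1600)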
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl","unemp"]])
Explanation: Baxter-King approximate band-pass filter: Inflation and Unemployment
Explore the hypothesis that inflation and unemployment are counter-cyclical.
The Baxter-King filter is intended to explicitly deal with the periodicity of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average
$$y_{t}^{*}=\sum_{k=-K}^{k=K}a_ky_{t-k}$$
where $a_{-k}=a_k$ and $\sum_{k=-k}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).
For completeness, the filter weights are determined as follows
$$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
$$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$
$$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
where $\theta$ is a normalizing constant such that the weights sum to zero.
$$\theta=\frac{-\sum_{j=-K}^{K}b_{j}}{2K+1}$$
$$\omega_{1}=\frac{2\pi}{P_{H}}$$
$$\omega_{2}=\frac{2\pi}{P_{L}}$$
$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.
End of explanation
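The weights above can also be computed by hand. The sketch below is illustrative only (it is not part of the original notebook) and simply transcribes the $B_j$ and $\theta$ expressions for the default $P_L=6$, $P_H=32$ and $K=12$:
import numpy as np
K, P_L, P_H = 12, 6, 32
w1, w2 = 2 * np.pi / P_H, 2 * np.pi / P_L
j = np.arange(1, K + 1)
B = np.r_[(w2 - w1) / np.pi, (np.sin(w2 * j) - np.sin(w1 * j)) / (np.pi * j)]
theta = -(B[0] + 2 * B[1:].sum()) / (2 * K + 1)   # chosen so that the 2K+1 weights sum to zero
a = B + theta                                     # a[0] is the central weight, a[1:] apply to leads and lags
print(a)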
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(111)
bk_cycles.plot(ax=ax, style=['r--', 'b-']);
Explanation: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
End of explanation
print(sm.tsa.stattools.adfuller(dta['unemp'])[:3])
print(sm.tsa.stattools.adfuller(dta['infl'])[:3])
cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl","unemp"]])
print(cf_cycles.head(10))
fig = plt.figure(figsize=(14,10))
ax = fig.add_subplot(111)
cf_cycles.plot(ax=ax, style=['r--','b-']);
Explanation: Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment
The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the
calculations of the weights in
$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$
for $t=3,4,...,T-2$, where
$$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$
$$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{u}},b=\frac{2\pi}{P_{L}}$$
$\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.
The CF filter is appropriate for series that may follow a random walk.
End of explanation |
280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Analysis 1 - data manipulation in Pandas
A description of the basic functions for data analysis in Pandas.
Version and notebook info
Step1: Information about the Python modules used
Step2: List of sources
Step3: Basic work with the data
Display the first n records of a DataFrame.
Step4: Display the last n records of a DataFrame.
Step5: Display some statistical information for each column of the DataFrame.
Step6: Save the data from a DataFrame to a .csv file
Step7: Load data from a .csv file
Step8: Information about the index and columns of the DataFrame
Step9: ## Selecting specific data from a DataFrame
### Indexing
Basic selection of data from a DataFrame can be done using indexing.
Step10: Selection by label (label-based) and by position (positional)
To get data by label, pandas uses the loc function. E.g., we pass 2017 or 2016-11-01 as the argument
Step11: To get data by position, pandas uses the iloc function. E.g., we pass 20 or 43 as the argument
Step12: More in the detailed documentation Indexing and Selecting Data.
Adjusting the sampling of the time series
A random sample of the data
A sample of random rows can be obtained with the sample function. See the DataFrame.sample documentation.
Step13: Getting a monthly sample from daily data
The resample function allows flexible conversion of the data frequency, like the asfreq function, and more. See the resample documentation and the asfreq documentation.
Step14: Computing the volatility of EOD data
We can do arithmetic with DataFrame columns without any problems. To get the volatility of each daily record, I simply subtract the low column from the high column and put the result into the ATR column.
Step15: Deleting a column
Columns can be deleted using the del keyword. | Python Code:
import datetime
MY_VERSION = 1,0
print('Notebook version:', '.'.join(map(str, MY_VERSION)))
print('Last updated:', datetime.datetime.now())
Explanation: Time Series Analysis 1 - data manipulation in Pandas
A description of the basic functions for data analysis in Pandas.
Version and notebook info
End of explanation
import sys
import datetime
import pandas as pd
import pandas_datareader as pdr
import pandas_datareader.data as pdr_web
import quandl as ql
# Load Quandl API key
import json
with open('quandl_key.json','r') as f:
quandl_api_key = json.load(f)
ql.ApiConfig.api_key = quandl_api_key['API-key']
print('Python version:')
print(sys.version)
print('---')
print('Pandas:', pd.__version__)
print('pandas-datareader:', pdr.__version__)
print('Quandl version:', ql.version.VERSION)
Explanation: Information about the Python modules used
End of explanation
start_date = datetime.datetime(2015, 1, 1)
end_date = datetime.datetime.now()
ES = ql.get("CHRIS/CME_ES1", start_date=start_date, end_date=end_date)
ES.head()
SPY = pdr_web.DataReader("NYSEARCA:SPY", 'google', start=start_date, end=end_date)
SPY.head()
Explanation: List of sources:
Pandas - data manipulation and analysis
pandas-datareader
A list of all web data sources in pandas-datareader
Python For Finance: Algorithmic Trading
Quandl
ETF trhy - finančník
Series and DataFrame
The pandas library uses its own Series and DataFrame types to store and process data.
A Series is a 1D labeled data structure holding values of a single type, while a DataFrame is a 2D labeled data structure whose columns can hold different types. The individual columns of a DataFrame are of type Series. More information is in the DataFrame and Series documentation.
Data for the analysis
End of explanation
n = 10
#ES.head()
ES.head(n)
Explanation: Basic work with the data
Display the first n records of a DataFrame.
End of explanation
n = 10
#ES.tail()
ES.tail(n)
Explanation: Display the last n records of a DataFrame.
End of explanation
ES.describe()
Explanation: Display some statistical information for each column of the DataFrame.
End of explanation
ES.to_csv('data/es.csv')
Explanation: Save the data from a DataFrame to a .csv file
End of explanation
#data = pd.read_csv('data/es.csv')
data = pd.read_csv('data/es.csv', header=0, index_col='Date', parse_dates=True)
data.head(3)
Explanation: Load data from a .csv file
End of explanation
data.index
data.columns
Explanation: Information about the index and columns of the DataFrame
End of explanation
# select the last 10 records of the Last column; the result is of type Series
vyber = data['Last'][-10:]
vyber
Explanation: ## Selecting specific data from a DataFrame
### Indexing
Basic selection of data from a DataFrame can be done using indexing.
End of explanation
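Besides plain indexing, loc and iloc, rows can also be picked with a boolean mask. The following is only an illustrative sketch (not part of the original notebook), reusing the data DataFrame loaded above:
# boolean mask: all days whose Last price is above the overall mean
mask = data['Last'] > data['Last'].mean()
data[mask].head()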
data.loc['2016-11-01']
vyber = data.loc['2017']
print(vyber.head(5))
print(vyber.tail(5))
Explanation: Selection by label (label-based) and by position (positional)
To get data by label, pandas uses the loc function. E.g., we pass 2017 or 2016-11-01 as the argument:
End of explanation
# display row 20
print(data.iloc[20])
# display rows 0,1,2,3,4 and columns 0,1,2,3
data.iloc[[0,1,2,3,4], [0,1,2,3]]
Explanation: To get data by position, pandas uses the iloc function. E.g., we pass 20 or 43 as the argument:
End of explanation
# a sample of 20 rows
sample = data.sample(20)
sample
Explanation: More in the detailed documentation Indexing and Selecting Data.
Adjusting the sampling of the time series
A random sample of the data
A sample of random rows can be obtained with the sample function. See the DataFrame.sample documentation.
End of explanation
prumer = data.resample('M').mean()
prumer.head()
mesicni = data.asfreq("M", method="bfill")
mesicni.head()
Explanation: Getting a monthly sample from daily data
The resample function allows flexible conversion of the data frequency, like the asfreq function, and more. See the resample documentation and the asfreq documentation.
End of explanation
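resample can also apply a different aggregation to each column. An illustrative sketch (not from the original notebook), assuming the usual Open/High/Low/Last columns of the Quandl futures data:
monthly_bars = data.resample('M').agg({'Open': 'first', 'High': 'max', 'Low': 'min', 'Last': 'last'})
monthly_bars.head()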
data['ATR_1'] = data.High - data.Low
data.head()
Explanation: Computing the volatility of EOD data
We can do arithmetic with DataFrame columns without any problems. To get the volatility of each daily record, I simply subtract the low column from the high column and put the result into the ATR column.
End of explanation
del data['ATR_1']
data.head()
Explanation: Deleting a column
Columns can be deleted using the del keyword.
End of explanation |
281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Networks
This is a Jupyter notebook.
Lectures about Python, useful both for beginners and experts, can be found at http
Step1: Now we plot the two classes as a scatter plot!
Step2: Now we want to classify this synthetic data using the perceptron model, which will be trained using this data, and then we will test using the same data (this is called a self-classification test). To proceed further we first need to train our perceptron model using the theory above.
Here the dimension of the weight vector $\textbf{w}$ is 3 (as we just need to estimate a line). So we initialize the parameters as ones.
Step3: Now how do we solve for the parameters? Easy, we apply simple gradient descent on the objective function (the function of the parameters to be estimated, which is to be minimized). So we take the derivative of equation (2) and we get
$$ \textbf{w}^{(l+1)} = \textbf{w}^{(l)} + \eta \sum \limits_{n \in \mathcal{M}} \phi (\textbf{x}_n) t_n $$
So now we start coding the actual parameter estimation part.
Step4: We can see that this perceptron model classifies the data very well | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
# now we genrate the data
N = 30
x = np.zeros(N, dtype=np.float64)
y = np.zeros(N, dtype=np.float64)
for k in range(N):
x[k], y[k] = [np.random.uniform(-1,1) for i in range(2)]
a = np.random.uniform(-1,1)
b = np.random.uniform(-1,1)
c = np.random.uniform(-1,1)
label = np.ones(N) # stores the labels for two classes, 1 for C1 and -1 for C2
xa = []
ya = []
xb = []
yb = []
N1 = 0
N2 = 0
# the random line divides the points into two classes of size N1 and N2
for k in range(N):
temp = a*x[k] + b*y[k] + c
if temp > 0:
xa.append(x[k])
ya.append(y[k])
N1 += 1
else:
label[k] = -1
xb.append(x[k])
yb.append(y[k])
N2 += 1
Explanation: Neural Networks
This is a Jupyter notebook.
Lectures about Python, useful both for beginners and experts, can be found at http://scipy-lectures.github.io.
Open the notebook by (1) copying this file into a directory, (2) in that directory typing
jupyter-notebook
and (3) selecting the notebook.
Written By: Riddhish Bhalodia
In this exercise, we will learn about different neural network concepts. There are a few prerequisites in probability and machine learning.
The Perceptron Algorithm
It is an example of a linear discriminant model and is used for two-class classification / separation. In this model the input vector x is transformed using a fixed non-linear transformation. So, starting from the generalized model of linear regression, we have
$$ y(\textbf{x}) = \textbf{w}^T\phi(\textbf{x})$$
Now in perceptron all we do is pass this linear regression model through a non-linear activation function as follows
$$y(\textbf{x}) = f(\textbf{w}^T\phi(\textbf{x})) \quad \quad \quad (1)$$
Here, $f(.)$ is given by
$$
f(a) = \left\{
\begin{array}{ll}
-1 & \quad a < 0 \\
1 & \quad a \geq 0
\end{array}
\right.
$$
Now, as we have two classes $\mathcal{C}_1$ and $\mathcal{C}_2$ so we define a <u>target variable t</u> which takes the values +1 and -1 for $\mathcal{C}_1$ and $\mathcal{C}_2$ respectively. Now we need to determine the parameters w, for that we need to define an error function which we have to minimize.
A natural choice for the error function is total number of misclassified patterns, however this causes some problems in the learning algorithm. Hence we propose an alternate error function called the perceptron criterion given by
$$ E_p(\textbf{w}) = - \sum \limits _{n \in \mathcal{M}} \textbf{w}^T \phi (\textbf{x}_n) t_n \quad \quad \quad (2)$$
Here, $\mathcal{M}$ denotes the set of all the misclassified patterns; the reasoning behind this functional can be found in Christopher M. Bishop's book.
Trivial example
Here we will simulate a trivial example with 2D data in the space [-1,1] x [-1,1], and we will assume that $\phi(x_n) = x_n \quad \forall x_n$. We first need to generate the data
End of explanation
plt.scatter(xa, ya, color = 'b')
plt.hold(True)
plt.scatter(xb, yb, color = 'r')
plt.title('Scatter plot of the data, N = 30')
Explanation: Now we plot the two classes as a scatter plot!
End of explanation
w = np.ones(3, dtype=np.float64) # the weights
iter_max = 100 # maximum number of iterations
error = 100.0 # randomly initilize the classification error
it = 0 # variable to store the iteration number
eta = 0.02 # the step size (try varying this)
classified_labels = np.ones(N)
Explanation: Now we want to classify this synthetic data using the perceptron model, which will be trained using this data, and then we will test using the same data (this is called a self-classification test). To proceed further we first need to train our perceptron model using the theory above.
Here the dimension of the weight vector $\textbf{w}$ is 3 (as we just need to estimate a line). So we initialize the parameters as ones.
End of explanation
while (error != 0 and it < iter_max):
print(it)
# Update Rules
temp_vec = np.zeros(3, dtype=np.float64)
for i in range(N):
if label[i] != classified_labels[i]:
temp_vec += eta * np.array([x[i], y[i], 1]) * label[i]
w += temp_vec
# recompute the classification
for i in range(N):
temp = w[0]*x[i] + w[1]*y[i] + w[2]
if temp > 0:
classified_labels[i] = 1
else:
classified_labels[i] = -1
# compute the misclassification error
error = 0
for i in range(N):
temp = w[0]*x[i] + w[1]*y[i] + w[2]
if label[i] != classified_labels[i]:
error += - label[i] * temp
it +=1
x = np.linspace(-1,1,100)
y = -(w[0] * x + w[2]) / w[1]
plt.scatter(xa, ya, color = 'b')
plt.hold(True)
plt.scatter(xb, yb, color = 'r')
plt.plot(x,y, color='k')
plt.title('Perceptron classified data (the line)')
Explanation: Now how do we solve for the parameters? Easy, we apply simple gradient descent on the objective function (the function of the parameters to be estimated, which is to be minimized). So we take the derivative of equation (2) and we get
$$ \textbf{w}^{(l+1)} = \textbf{w}^{(l)} + \eta \sum \limits_{n \in \mathcal{M}} \phi (\textbf{x}_n) t_n $$
So now we start coding the actual parameter estimation part.
End of explanation
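As an aside (not part of the original exercise), the same update can be written without the inner Python loop by summing the contributions of all misclassified points at once; xd and yd below are stand-in names for the original data arrays, since x and y are reused for plotting above:
# vectorized form of one perceptron update step (illustrative sketch)
phi = np.column_stack([xd, yd, np.ones(N)])      # feature matrix with a bias column
mis = (label != classified_labels)               # mask of misclassified points
w += eta * (phi[mis] * label[mis][:, None]).sum(axis=0)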
x = np.linspace(-1,1,100)
y = -(w[0] * x + w[2]) / w[1]
plt.plot(x,y,color='b')
x = np.linspace(-1,1,100)
y = -(a * x + c) / b
plt.hold(True)
plt.plot(x,y,color='r')
plt.legend(['predicted', 'original'])
Explanation: We can see that this perceptron model classifies the data very well :) Let's check how close the weights are to the actual line we used to generate the data
End of explanation |
282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reinforcement Learning (DQN) tutorial
http
Step2: Experience Replay
A DQN stores up observations and later shuffles and samples them for training
Transition - a named tuple representing a single transition in our environment
ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.
Step4: https
Step6: render() does not work from inside a Jupyter Notebook!
NotImplementedError
Step7: Run the code that captures and draws the game screen from a console!
Step8: Training code
Step9: The exploration rate is scheduled to decrease gradually as training progresses
Let's plot how the exploration rate changes | Python Code:
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from copy import deepcopy
from PIL import Image
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
import torchvision.transforms as T
%matplotlib inline
# setup matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
plt.ion()
# if gpu is to be used
use_cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
ByteTensor = torch.cuda.ByteTensor if use_cuda else torch.ByteTensor
Tensor = FloatTensor
env = gym.make('CartPole-v0').unwrapped
env
Explanation: Reinforcement Learning (DQN) tutorial
http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
OpenAI Gym's CartPole task
The environment state is given as four numbers such as (position, velocity, ...),
but the DQN takes an image centered on the cart as its input.
Strictly speaking, the state is the difference between the current screen image and the previous one.
Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image.
TODO: instead of the DQN, train with Q-Learning using the four numbers as the state
Since we use OpenAI Gym, install it with pip install gym
End of explanation
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward'))
# how to use a namedtuple
t = Transition(1, 2, 3, 4)
print(t)
print(t.state, t.action, t.next_state, t.reward)
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
Save a transition.
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
# once the memory is full, overwrite the oldest entries
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
# quick check that ReplayMemory behaves as expected
rm = ReplayMemory(3)
rm.push(1, 1, 1, 1)
rm.push(2, 2, 2, 2)
rm.push(3, 3, 3, 3)
print(len(rm))
print(rm.memory)
rm.push(4, 4, 4, 4)
print(len(rm))
print(rm.memory)
class DQN(nn.Module):
def __init__(self):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
self.head = nn.Linear(448, 2)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1))
dqn = DQN()
dqn
resize = T.Compose([T.ToPILImage(),
T.Resize((40, 40), interpolation=Image.CUBIC),
T.ToTensor()])
Explanation: Experience Replay
A DQN stores up observations and later shuffles and samples them for training
Transition - a named tuple representing a single transition in our environment
ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.
End of explanation
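The tutorial's actual optimization step is not shown in this excerpt; the sketch below is only a rough, hedged illustration of how a batch sampled from ReplayMemory is typically turned into a Q-learning loss. The name optimize_model is an assumption, it relies on the memory, model, optimizer, BATCH_SIZE and GAMMA objects defined further below, and it ignores terminal-state handling for brevity (old Variable-based PyTorch API, matching the code above):
def optimize_model():
    # illustrative sketch only, not the tutorial's exact code
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    batch = Transition(*zip(*transitions))                 # batch of Transitions -> Transition of batches
    state_batch = Variable(torch.cat(batch.state))
    action_batch = Variable(torch.cat(batch.action))
    reward_batch = Variable(torch.cat(batch.reward))
    next_state_batch = Variable(torch.cat(batch.next_state))
    # Q(s, a) for the actions that were actually taken
    q_values = model(state_batch).gather(1, action_batch)
    # simplified target: r + gamma * max_a Q(s', a)
    next_q = model(next_state_batch).max(1)[0].detach()
    target = reward_batch + GAMMA * next_q
    loss = F.smooth_l1_loss(q_values.squeeze(), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()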
screen_width = 600
def get_cart_location():
Return the cart position in pixels
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0)
Explanation: https://github.com/openai/gym/wiki/CartPole-v0
state[0] = Cart Position (-2.4, 2.4)
env.x_threshold = 2.4
End of explanation
def get_screen():
Capture the game screen
# note: render returns None unless env.reset() has been called first
# convert to PyTorch's (C, H, W) channel order
# default: (3, 800, 1200)
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
# crop the 320:640 band that contains the cart
# i.e. drop the top and bottom of the screen (keep only the region around the cart)
screen = screen[:, 320:640]
# horizontally, cut out this range centered on the cart
view_width = 640
cart_location = get_cart_location()
if cart_location < view_width // 2:
# case where a view_width crop would stick out past the left edge
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
# case where a view_width crop would stick out past the right edge
slice_range = slice(-view_width, None)
else:
# case where neither edge sticks out of the screen
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
screen = screen[:, :, slice_range]
# TODO: is ascontiguousarray() just for speed?
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
# convert to a Tensor
screen = torch.from_numpy(screen)
# resize, add a batch dimension to make a 4D tensor, and return
return resize(screen).unsqueeze(0).type(Tensor)
get_screen()
Explanation: render() does not work from inside a Jupyter Notebook!
NotImplementedError: abstract
End of explanation
env.reset()
patch = get_screen()
print(patch.size()) # torch.Size([1, 3, 40, 40])
# draw the cropped game screen
env.reset()
plt.figure()
# get_screen() returns a 4D tensor, so convert it back to an ndarray for plotting
patch = get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy()
plt.imshow(patch, interpolation='none')
plt.title('Example extracted screen')
plt.show()
Explanation: Run the code that captures and draws the game screen from a console!
End of explanation
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9 # starting value of the exploration rate
EPS_END = 0.05 # final value of the exploration rate
EPS_DECAY = 200 # smaller values make the decay steeper
model = DQN()
if use_cuda:
model.cuda()
optimizer = optim.RMSprop(model.parameters())
memory = ReplayMemory(10000)
steps_done = 0
model
Explanation: Training code
End of explanation
# schedule of the exploration rate
eps_list = []
for steps_done in range(2000):
eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)
eps_list.append(eps_threshold)
plt.plot(range(2000), eps_list)
plt.yticks(np.arange(0.0, 1.0, 0.1))
plt.xlabel('steps')
plt.ylabel('epsilon')
plt.grid()
Explanation: The exploration rate is scheduled to decrease gradually as training progresses
Let's plot how the exploration rate changes
End of explanation |
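A hedged sketch (not shown in this excerpt) of how such a schedule is typically used for epsilon-greedy action selection; the function name and its details here are assumptions rather than the tutorial's exact code:
def select_action(state):
    global steps_done
    eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1
    if random.random() > eps_threshold:
        # exploit: action with the largest predicted Q value
        q = model(Variable(state, volatile=True).type(FloatTensor))
        return q.data.max(1)[1].view(1, 1)
    else:
        # explore: random action (CartPole has two actions)
        return LongTensor([[random.randrange(2)]])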
283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro into IPython notebooks
Step1: Fitting Lines to Data
We'll cover very basic line fitting, largely ignoring the subtleties of the statistics in favor of showing you how to perform simple fits of models to data.
Step2: y.size is the number of elements in y, just like len(y) or, in IDL, n_elements(y)
Step3: Now we're onto the fitting stage.
We're going to fit a function of the form
$$y = mx + b$$
which is the same as
$$f(x) = p[1]x + p[0]$$
to the data.
This is called "linear regression", but it is also a special case of a more
general concept
Step4: Let's do the same thing with a noisier data set. I'm going to leave out most of the comments this time.
Step5: Despite the noisy data, our fit is still pretty good! One last plotting trick, then we'll move on.
Step6: Curve Fitting
We'll now move on to more complicated curves. What if the data looks more like a sine curve? We'll create "fake data" in basically the same way as above.
Step7: That looks like kind of a mess. Let's see how well we can fit it.
The function we're trying to fit has the form
Step8: Look at the returns
Step9: Again, this is pretty good despite the noisiness.
Fitting a Power Law
Power laws occur all the time in physics, so it's a good idea to learn how to use them.
What's a power law? Any function of the form
Step10: It's a straight line. Now, for our "fake data", we'll add the noise before transforming from "linear" to "log" space
Step11: Note how different this looks from the "noisy line" we plotted earlier. Power laws are much more sensitive to noise! In fact, there are some data points that don't even show up on this plot because you can't take the log of a negative number. Any points where the random noise was negative enough that the curve dropped below zero ended up being "NAN", or "Not a Number". Luckily, our plotter knows to ignore those numbers, but polyfit doesnt.
Step12: In order to get around this problem, we need to mask the data. That means we have to tell the code to ignore all the data points where noisy_y is nan.
My favorite way to do this is to take advantage of a curious fact
Step13: So if we find all the places were noisy_y != noisy_y, we can get rid of them. Or we can just use the places where noisy_y equals itself.
Step14: This OK array is a "boolean mask". We can use it as an "index array", which is pretty neat.
Step15: The noise seems to have affected our fit.
Step16: That's pretty bad. A "least-squares" approach, as with curve_fit, is probably going to be the better choice. However, in the absence of noise (i.e., on your homework), this approach should work
Step20: Tricks with Arrays
We need to cover a few syntactic things comparing IDL and python.
In IDL, if you wanted the maximum value in an array, you would do
Step21: Further info on IPython Notebooks
| Overview | link |
|--------------------------------------|------------------------------------------------------------------------------------|
| Blog of IPython creator | http | Python Code:
%pylab inline
from IPython.display import YouTubeVideo
YouTubeVideo("qb7FT68tcA8", width=600, height=400, theme="light", color="blue")
# You can ignore this, it's just for aesthetic purposes
matplotlib.rcParams['figure.figsize'] = (8,5)
rcParams['savefig.dpi'] = 100
Explanation: Intro into IPython notebooks
End of explanation
# These import commands set up the environment so we have access to numpy and pylab functions
import numpy as np
import pylab as pl
# Data Fitting
# First, we'll generate some fake data to use
x = np.linspace(0,10,50) # 50 x points from 0 to 10
# Remember, you can look at the help for linspace too:
# help(np.linspace)
# y = m x + b
y = 2.5 * x + 1.2
# let's plot that
pl.clf()
pl.plot(x,y)
# looks like a simple line. But we want to see the individual data points
pl.plot(x,y,marker='s')
# We need to add noise first
noise = pl.randn(y.size)
# Like IDL, python has a 'randn' function that is centered at 0 with a standard deviation of 1.
# IDL's 'randomu' is 'pl.rand' instead
# What's y.size?
print y.size
print len(y)
Explanation: Fitting Lines to Data
We'll cover very basic line fitting, largely ignoring the subtleties of the statistics in favor of showing you how to perform simple fits of models to data.
End of explanation
# We can add arrays in python just like in IDL
noisy_flux = y + noise
# We'll plot it too, but this time without any lines
# between the points, and we'll use black dots
# ('k' is a shortcut for 'black', '.' means 'point')
pl.clf() # clear the figure
pl.plot(x,noisy_flux,'k.')
# We need labels, of course
pl.xlabel("Time")
pl.ylabel("Flux")
Explanation: y.size is the number of elements in y, just like len(y) or, in IDL, n_elements(y)
End of explanation
# We'll use polyfit to find the values of the coefficients. The third
# parameter is the "order"
p = np.polyfit(x,noisy_flux,1)
# help(polyfit) if you want to find out more
# print our fit parameters. They are not exact because there's noise in the data!
# note that this is an array!
print p
print type(p) # you can ask python to tell you what type a variable is
# Great! We've got our fit. Let's overplot the data and the fit now
pl.clf() # clear the figure
pl.plot(x,noisy_flux,'k.') # repeated from above
pl.plot(x,p[0]*x+p[1],'r-') # A red solid line
pl.xlabel("Time") # labels again
pl.ylabel("Flux")
# Cool, but there's another (better) way to do this. We'll use the polyval
# function instead of writing out the m x + b equation ourselves
pl.clf() # clear the figure
pl.plot(x,noisy_flux,'k.') # repeated from above
pl.plot(x,np.polyval(p,x),'r-') # A red solid line
pl.xlabel("Time") # labels again
pl.ylabel("Flux")
# help(polyval) if you want to find out more
Explanation: Now we're onto the fitting stage.
We're going to fit a function of the form
$$y = mx + b$$
which is the same as
$$f(x) = p[1]x + p[0]$$
to the data.
This is called "linear regression", but it is also a special case of a more
general concept: this is a first-order polynomial.
"First Order" means that the highest exponent of x in the equation is 1
End of explanation
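Just to connect this to the underlying math (an aside, not in the original notebook): the same first-order fit can be obtained directly from the least-squares normal equations, which is essentially what polyfit does for us.
# build the design matrix [x, 1] and solve for [m, b] by least squares
A = np.vstack([x, np.ones(x.size)]).T
m_fit, b_fit = np.linalg.lstsq(A, noisy_flux)[0]
print m_fit, b_fit   # should be close to the polyfit result above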
noisy_flux = y+noise*10
p = polyfit(x,noisy_flux,1)
print p
# plot it
pl.clf() # clear the figure
pl.plot(x,noisy_flux,'k.') # repeated from above
pl.plot(x,np.polyval(p,x),'r-',label="Best fit") # A red solid line
pl.plot(x,2.5*x+1.2,'b--',label="Input") # a blue dashed line showing the REAL line
pl.legend(loc='best') # make a legend in the best location
pl.xlabel("Time") # labels again
pl.ylabel("Flux")
Explanation: Let's do the same thing with a noisier data set. I'm going to leave out most of the comments this time.
End of explanation
pl.clf() # clear the figure
pl.errorbar(x,noisy_flux,yerr=10,marker='.',color='k',linestyle='none') # errorbar requires some extras to look nice
pl.plot(x,np.polyval(p,x),'r-',label="Best fit") # A red solid line
pl.plot(x,2.5*x+1.2,'b--',label="Input") # a blue dashed line showing the REAL line
pl.legend(loc='best') # make a legend in the best location
pl.xlabel("Time") # labels again
pl.ylabel("Flux")
Explanation: Despite the noisy data, our fit is still pretty good! One last plotting trick, then we'll move on.
End of explanation
# this time we want our "independent variable" to be in radians
x = np.linspace(0,2*np.pi,50)
y = np.sin(x)
pl.clf()
pl.plot(x,y)
# We'll make it noisy again
noise = pl.randn(y.size)
noisy_flux = y + noise
pl.plot(x,noisy_flux,'k.') # no clear this time
Explanation: Curve Fitting
We'll now move on to more complicated curves. What if the data looks more like a sine curve? We'll create "fake data" in basically the same way as above.
End of explanation
# curve_fit is the function we need for this, but it's in another package called scipy
from scipy.optimize import curve_fit
# we need to know what it does:
help(curve_fit)
Explanation: That looks like kind of a mess. Let's see how well we can fit it.
The function we're trying to fit has the form:
$$f(x) = A * sin(x - B)$$
where $A$ is a "scale" parameter and $B$ is the side-to-side offset (or the "delay" if the x-axis is time). For our data, they are $A=1$ and $B=0$ respectively, because we made $y=sin(x)$
End of explanation
def sinfunc(x,a,b):
return a*np.sin(x-b)
fitpars, covmat = curve_fit(sinfunc,x,noisy_flux)
# The diagonals of the covariance matrix are variances
# variance = standard deviation squared, so we'll take the square roots to get the standard devations!
# You can get the diagonals of a 2D array easily:
variances = covmat.diagonal()
std_devs = np.sqrt(variances)
print fitpars,std_devs
# Let's plot our best fit, see how well we did
# These two lines are equivalent:
pl.plot(x, sinfunc(x, fitpars[0], fitpars[1]), 'r-')
pl.plot(x, sinfunc(x, *fitpars), 'r-')
Explanation: Look at the returns:
Returns
-------
popt : array
Optimal values for the parameters so that the sum of the squared error
of ``f(xdata, *popt) - ydata`` is minimized
pcov : 2d array
The estimated covariance of popt. The diagonals provide the variance
of the parameter estimate.
So the first set of returns is the "best-fit parameters", while the second set is the "covariance matrix"
End of explanation
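An illustrative aside (not in the original): the standard deviations taken from the covariance diagonal can be used to quote each fitted parameter as value +/- uncertainty.
# report each fitted parameter with its 1-sigma uncertainty
for name, value, err in zip(['A', 'B'], fitpars, std_devs):
    print name, "=", value, "+/-", err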
t = np.linspace(0.1,10)
a = 1.5
b = 2.5
z = a*t**b
pl.clf()
pl.plot(t,z)
# Change the variables
# np.log is the natural log
y = np.log(z)
x = np.log(t)
pl.clf()
pl.plot(x,y)
pl.ylabel("log(z)")
pl.xlabel("log(t)")
Explanation: Again, this is pretty good despite the noisiness.
Fitting a Power Law
Power laws occur all the time in physics, so it's a good idea to learn how to use them.
What's a power law? Any function of the form:
$$f(t) = a t^b$$
where $t$ is your independent variable, $a$ is a scale parameter, and $b$ is the exponent (the power).
When fitting power laws, it's very useful to take advantage of the fact that "a power law is linear in log-space".
That means, if you take the log of both sides of the equation (which is allowed) and change variables, you get a
linear equation!
$$\ln(f(t)) = \ln(a t^b) = \ln(a) + b \ln(t)$$
We'll use the substitutions $y=\ln(f(t))$, $A=\ln(a)$, and $x=\ln(t)$, so that
$$y=A+bx$$
which looks just like our linear equation from before (albeit with different letters for the fit parameters).
We'll now go through the same fitting exercise as before, but using powerlaws instead of lines.
End of explanation
noisy_z = z + pl.randn(z.size)*10
pl.clf()
pl.plot(t,z)
pl.plot(t,noisy_z,'k.')
noisy_y = np.log(noisy_z)
pl.clf()
pl.plot(x,y)
pl.plot(x,noisy_y,'k.')
pl.ylabel("log(z)")
pl.xlabel("log(t)")
Explanation: It's a straight line. Now, for our "fake data", we'll add the noise before transforming from "linear" to "log" space
End of explanation
print noisy_y
# try to polyfit a line
pars = np.polyfit(x,noisy_y,1)
print pars
Explanation: Note how different this looks from the "noisy line" we plotted earlier. Power laws are much more sensitive to noise! In fact, there are some data points that don't even show up on this plot because you can't take the log of a negative number. Any points where the random noise was negative enough that the curve dropped below zero ended up being "NAN", or "Not a Number". Luckily, our plotter knows to ignore those numbers, but polyfit doesn't.
End of explanation
print 1 == 1
print np.nan == np.nan
Explanation: In order to get around this problem, we need to mask the data. That means we have to tell the code to ignore all the data points where noisy_y is nan.
My favorite way to do this is to take advantage of a curious fact: $1=1$, but nan!=nan
End of explanation
OK = noisy_y == noisy_y
print OK
Explanation: So if we find all the places were noisy_y != noisy_y, we can get rid of them. Or we can just use the places where noisy_y equals itself.
End of explanation
print "There are %i OK values" % (OK.sum())
masked_noisy_y = noisy_y[OK]
masked_x = x[OK]
print "masked_noisy_y has length",len(masked_noisy_y)
# now polyfit again
pars = np.polyfit(masked_x,masked_noisy_y,1)
print pars
# cool, it worked. But the fit looks a little weird!
fitted_y = polyval(pars,x)
pl.plot(x, fitted_y, 'r--')
Explanation: This OK array is a "boolean mask". We can use it as an "index array", which is pretty neat.
End of explanation
# Convert bag to linear-space to see what it "really" looks like
fitted_z = np.exp(fitted_y)
pl.clf()
pl.plot(t,z)
pl.plot(t,noisy_z,'k.')
pl.plot(t,fitted_z,'r--')
pl.xlabel('t')
pl.ylabel('z')
Explanation: The noise seems to have affected our fit.
End of explanation
def powerlaw(x,a,b):
return a*(x**b)
pars,covar = curve_fit(powerlaw,t,noisy_z)
pl.clf()
pl.plot(t,z)
pl.plot(t,noisy_z,'k.')
pl.plot(t,powerlaw(t,*pars),'r--')
pl.xlabel('t')
pl.ylabel('z')
Explanation: That's pretty bad. A "least-squares" approach, as with curve_fit, is probably going to be the better choice. However, in the absence of noise (i.e., on your homework), this approach should work
End of explanation
# sin(x) is already defined
def sin2x(x):
sin^2 of x
return np.sin(x)**2
def sin3x(x):
sin^3 of x
return np.sin(x)**3
def sincos(x):
sin(x)*cos(x)
return np.sin(x)*np.cos(x)
list_of_functions = [np.sin, sin2x, sin3x, sincos]
# we want 0-2pi for these functions
t = np.linspace(0,2*np.pi)
# this is the cool part: we can make a variable function
for fun in list_of_functions:
# the functions know their own names (in a "secret hidden variable" called __name__)
print "The maximum of ",fun.__name__," is ", fun(t).max()
# OK, but we wanted the location of the maximum....
for fun in list_of_functions:
print "The location of the maximum of ",fun.__name__," is ", fun(t).argmax()
# well, that's not QUITE what we want, but it's close
# We want to know the value of t, not the index!
for fun in list_of_functions:
print "The location of the maximum of ",fun.__name__," is ", t[fun(t).argmax()]
# Finally, what if we want to store all that in an array?
# Well, here's a cool trick: you can sort of invert the for loop
# This is called a "list comprehension":
maxlocs = [ t[fun(t).argmax()] for fun in list_of_functions ]
print maxlocs
# Confused? OK. Try this one:
print range(6)
print [ii**2 for ii in range(6)]
Explanation: Tricks with Arrays
We need to cover a few syntactic things comparing IDL and python.
In IDL, if you wanted the maximum value in an array, you would do:
maxval = max(array, location_of_max)
In python, it's more straightforward:
location_of_max = array.argmax()
or
location_of_max = np.argmax(array)
Now, say we want to determine the location of the maximum of a number of different functions. The functions we'll use are:
sin(x)
sin$^2$(x)
sin$^3$(x)
sin(x)cos(x)
We'll define these functions, then loop over them.
End of explanation
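One more trick worth knowing (an aside, not in the original notebook): on a 2D array, argmax returns the position in the flattened array, and np.unravel_index converts it back to a (row, column) pair.
arr2d = np.random.rand(4, 5)
flat_location = arr2d.argmax()
print np.unravel_index(flat_location, arr2d.shape)   # (row, column) of the maximum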
from IPython.display import YouTubeVideo
YouTubeVideo("xe_ATRmw0KM", width=600, height=400, theme="light", color="blue")
from IPython.display import YouTubeVideo
YouTubeVideo("zG8FYPFU9n4", width=600, height=400, theme="light", color="blue")
Explanation: Further info on IPython Notebooks
| Overview | link |
|--------------------------------------|------------------------------------------------------------------------------------|
| Blog of IPython creator | http://blog.fperez.org/2012/09/blogging-with-ipython-notebook.html |
| Blog of an avid IPython user | http://www.damian.oquanta.info/index.html |
| Turning notebook into a presentation | https://www.youtube.com/watch?v=rBS6hmiK-H8 |
| Tutorial on IPython & SciPy | https://github.com/esc/scipy2013-tutorial-numpy-ipython |
| IPython notebooks gallery | https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks |
End of explanation |
284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Building Part 2
Code for building the models
Author
Step1: Training and Testing Split
Step2: Text
Step3: Classification Models
Step4: Although tuning is not necessary for Naive Bayes, I pass the default parameters of those models to GridSearchCV anyway so that I can do a direct pair-wise comparison with the other models across the different steps of cross-validation.
In the interest of time, I didn't use the SVM classifier.
Step5: Creating Pipelines
Step6: I plan on using imblearn classes for later iterations, so I use its pipeline from the beginning for convenience
Step7: Naive Bayes Estimators
Step8: QDA Estimators
Step9: Logistic Estimators
Step10: Random Forest Estimators
Step11: Fitting Estimators
Step12: Basic Estimators
Step13: Testing Estimators
Step14: Closer look at the variability of the best model | Python Code:
import os
import pandas as pd
import numpy as np
import scipy as sp
import seaborn as sns
import matplotlib.pyplot as plt
import json
from IPython.display import Image
from IPython.core.display import HTML
retval=os.chdir("..")
clean_data=pd.read_pickle('./clean_data/clean_data.pkl')
clean_data.head()
kept_cols=['helpful', 'text_lemma']
Explanation: Model Building Part 2
Code for building the models
Author: Jimmy Charité
Email: jimmy.charite@gmail.com
Following up with part one, I will try the bag of words approach
End of explanation
my_rand_state=0
test_size=0.25
from sklearn.model_selection import train_test_split
X = (clean_data[kept_cols].iloc[:,1]).tolist()
y = (clean_data[kept_cols].iloc[:,0]).tolist()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size,
random_state=my_rand_state)
Explanation: Training and Testing Split
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
#set max_features to minimize training time
#also, I didn't apply LDA-based dimensionality reduction
tfidf=TfidfVectorizer(lowercase=False,max_features=200)
Explanation: Text
End of explanation
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
Explanation: Classification Models
End of explanation
nb_clf=GaussianNB()
priors=[None]
qda_clf=QuadraticDiscriminantAnalysis()
reg_param=[0.0, 0.25, 0.5, 0.75]
log_clf=LogisticRegression(penalty='l2')
C=[0.001 , 0.01, 10, 100,1000]
rf_clf=RandomForestClassifier()
n_estimators=[100,200]
max_features=[.1,.3,.5]
class_weight=['balanced']
class_weight.extend([{1: w} for w in [1, 2, 10]])
Explanation: Although tuning is not necessary for Naive Bayes, I pass the default parameters of those models to GridSearchCV anyway so that I can do a direct pair-wise comparison with the other models across the different steps of cross-validation.
In the interest of time, I didn't use the SVM classifier.
End of explanation
from imblearn import pipeline #needed if mixing imblearn with sklearn classes
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
Explanation: Creating Pipelines
End of explanation
n_jobs=4
n_folds=10
skfold = StratifiedKFold(n_splits=n_folds,random_state=my_rand_state, shuffle=False)
from sklearn.base import BaseEstimator, TransformerMixin
class DenseTransformer(BaseEstimator, TransformerMixin):
def transform(self, X, y=None, **fit_params):
return X.todense()
def fit_transform(self, X, y=None, **fit_params):
self.fit(X, y, **fit_params)
return self.transform(X)
def fit(self, X, y=None, **fit_params):
return self
Explanation: I plan on using imblearn classes for later iterations, so I use its pipeline from the beginning for convenience
End of explanation
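Since the imblearn pipeline is already in place, a later iteration could drop a resampling step straight into it. A hedged sketch (not something run in this notebook; it assumes imbalanced-learn's RandomOverSampler, and the ros_log_clf_b name is made up here):
from imblearn.over_sampling import RandomOverSampler
ros_log_clf_b = pipeline.Pipeline(steps=[('tfidf', tfidf),
                                         ('ros', RandomOverSampler(random_state=my_rand_state)),
                                         ('clf', log_clf)])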
nb_clf_b = pipeline.Pipeline(steps=[('tfidf',tfidf),('to_dense', DenseTransformer()),('clf',nb_clf)])
nb_clf_est_b = GridSearchCV(estimator=nb_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(clf__priors=priors))
Explanation: Naive Bayes Estimators
End of explanation
qda_clf_b = pipeline.Pipeline(steps=[('tfidf',tfidf),('to_dense', DenseTransformer()),('clf',qda_clf)])
qda_clf_est_b = GridSearchCV(estimator=qda_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(clf__reg_param=reg_param))
Explanation: QDA Estimators
End of explanation
log_clf_b = pipeline.Pipeline(steps=[('tfidf',tfidf),('clf',log_clf)])
log_clf_est_b = GridSearchCV(estimator=log_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(clf__C=C,
clf__class_weight=class_weight))
Explanation: Logistic Estimators
End of explanation
rf_clf_b = pipeline.Pipeline(steps=[('tfidf',tfidf),('clf',rf_clf)])
rf_clf_est_b = GridSearchCV(estimator=rf_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(clf__n_estimators=n_estimators,
clf__max_features=max_features,
clf__class_weight=class_weight))
Explanation: Random Forest Estimators
End of explanation
from sklearn.externals import joblib
Explanation: Fitting Estimators
End of explanation
nb_clf_est_b.fit(X_train,y_train)
joblib.dump(nb_clf_est_b, './other_output/bow/nb_clf_est_b.pkl')
qda_clf_est_b.fit(X_train,y_train)
joblib.dump(qda_clf_est_b, './other_output/bow/qda_clf_est_b.pkl')
log_clf_est_b.fit(X_train,y_train)
joblib.dump(log_clf_est_b, './other_output/bow/log_clf_est_b.pkl')
rf_clf_est_b.fit(X_train,y_train)
joblib.dump(rf_clf_est_b, './other_output/bow/rf_clf_est_b.pkl')
Explanation: Basic Estimators: no bag of words or PCA
End of explanation
from sklearn.metrics import roc_curve, auc
nb_clf_est_b=joblib.load('./other_output/bow/nb_clf_est_b.pkl')
qda_clf_est_b=joblib.load('./other_output/bow/qda_clf_est_b.pkl')
log_clf_est_b=joblib.load('./other_output/bow/log_clf_est_b.pkl')
rf_clf_est_b=joblib.load('./other_output/bow/rf_clf_est_b.pkl')
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_b.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_b.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_b.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_b.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='yellow', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='red', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Basic Models Using Just Bag of Words')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_Basic_BOW.png', bbox_inches='tight')
plt.show()
Explanation: Testing Estimators
End of explanation
from scipy.stats import sem
len(y_test) #the sample is large enough that we can get away with 5% draws
y_test=np.array(y_test)
X_test=np.array(X_test)
#initialize
n_bootstraps = 2000
rng_seed = 1
sample_percent=0.05
min_index=0
max_index=len(y_test)-1
draw_size=int(len(y_test)*sample_percent)
bootstrapped_scores = []
rng = np.random.RandomState(rng_seed)
for i in range(n_bootstraps):
# bootstrap by sampling with replacement on
indices = rng.random_integers(min_index, max_index, draw_size)
#calculate ROC from
rf_fpr, rf_tpr, _ = roc_curve(y_test[indices],
rf_clf_est_b.predict_proba(X_test[indices])[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
#save
bootstrapped_scores.append(rf_roc_auc)
import pickle
with open('./other_output/bow/rf_bootstrapped_scores.pkl', 'wb') as f:
pickle.dump(bootstrapped_scores, f)
plt.hist(bootstrapped_scores, bins=50)
plt.title('Histogram of Bootstrapped AUC ROC of\nRandom Forest Model Using Just Bag of Words')
plt.savefig('./plots/ROC_Histogram_Basic_BOW.png', bbox_inches='tight')
plt.show()
pd.DataFrame({'auc':bootstrapped_scores}).auc.describe()
Explanation: Closer look at the variability of the best model
End of explanation |
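A natural follow-up (illustrative, not in the original): turn the bootstrap distribution into an approximate 95% confidence interval with the percentile method.
lower, upper = np.percentile(bootstrapped_scores, [2.5, 97.5])
print('Bootstrapped 95% CI for the AUC: [{:.3f}, {:.3f}]'.format(lower, upper))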
285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression with Random Forest
Random Forest is also a popular algorithm in machine learning; it is very flexible and is based on decision trees.
Generate data
Let's first generate a toy dataset that we will fit with a Random Forest model. We generate a periodic dataset using two sine waves with different periods, and then add some noise to it. It can be visualized in the following figure
Step1: Fit a Random Forest Model
In this section, we fit a random forest model with 100 trees, with all the other parameters left at their defaults.
Step2: Print out the misfit using the mean squared error.
Step3: Using ANN
We can also use an ANN for regression; the difference is the activation function in the output layer. Instead of using functions like tanh or sigmoid to squeeze the results into a range between 0 and 1, we can use a linear activation function to generate any value.
Step4: Exercise
The Support Vector Machine method we talked about in the previous notebook can also be used for regression. Instead of importing svm, we import SVR for regression problems. In this exercise, please do the regression using support vector regression. You may need to adjust some parameters to get better results, such as C, the penalty parameter of the error term. You can find more descriptions here. | Python Code:
np.random.seed(0)
x = 10 * np.random.rand(100)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * np.random.rand(len(x))
return slow_oscillation + fast_oscillation + noise
plt.figure(figsize = (12,10))
y = model(x)
plt.errorbar(x, y, 0.3, fmt='o')
Explanation: Regression with Random Forest
Random Forest is also a popular algorithm in machine learning; it is very flexible and is based on decision trees.
Generate data
Let's first generate a toy dataset that we will fit with a Random Forest model. We generate a periodic dataset using two sine waves with different periods, and then add some noise to it. It can be visualized in the following figure:
End of explanation
xfit = np.linspace(0, 10, 1000)
# fit the model and get the estimation for each data points
yfit = RandomForestRegressor(100, random_state=42).fit(x[:, None], y).predict(xfit[:, None])
ytrue = model(xfit, 0)
plt.figure(figsize = (12,10))
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r', label = 'predicted', zorder = 10)
plt.plot(xfit, ytrue, '-k', alpha=0.5, label = 'true model', zorder = 10)
plt.legend()
Explanation: Fit a Random Forest Model
In this section, we fit a random forest model with 100 trees, with all the other parameters left at their defaults.
End of explanation
mse = mean_squared_error(ytrue, yfit)
print(mse)
Explanation: Print out the misfit using the mean squared error.
End of explanation
from sklearn.neural_network import MLPRegressor
mlp = MLPRegressor(hidden_layer_sizes=(200,200,200), max_iter = 4000, solver='lbfgs', \
alpha=0.01, activation = 'tanh', random_state = 8)
yfit = mlp.fit(x[:, None], y).predict(xfit[:, None])
plt.figure(figsize = (12,10))
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r', label = 'predicted', zorder = 10)
plt.plot(xfit, ytrue, '-k', alpha=0.5, label = 'true model', zorder = 10)
plt.legend()
mse = mean_squared_error(ytrue, yfit)
print(mse)
Explanation: Using ANN
We can also use an ANN for regression; the difference is in the activation function of the output layer. Instead of using functions like tanh or sigmoid to squeeze the results into a range between 0 and 1, we can use a linear activation function to generate unbounded outputs.
End of explanation
from sklearn.svm import SVR
# define your model
svr =
# get the estimation from the model
yfit =
# plot the results as above
plt.figure(figsize = (12,10))
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r', label = 'predicted', zorder = 10)
plt.plot(xfit, ytrue, '-k', alpha=0.5, label = 'true model', zorder = 10)
plt.legend()
%load ../solutions/solution_03.py
Explanation: Exercise
The Support Vector Machine method we talked about in the previous notebook can also be used in regression. Instead of importing svm, we import SVR for regression problems. In this exercise, please do the regression using support vector regression. You may need to adjust some parameters to get better results, such as C, the penalty parameter of the error term. You can find more details here.
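One possible solution is sketched below; the RBF kernel and the values of C and gamma are illustrative guesses rather than part of the original exercise, and the sketch reuses x, y, xfit, ytrue and mean_squared_error from the cells above.
svr = SVR(kernel='rbf', C=100, gamma=0.1)
yfit = svr.fit(x[:, None], y).predict(xfit[:, None])
print(mean_squared_error(ytrue, yfit))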
End of explanation |
286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Guided Project 1
Learning Objectives
Step1: Step 1. Environment setup
skaffold tool setup
Step2: Modify the PATH environment variable so that skaffold is available
Step3: Environment variable setup
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
Step4: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu.
The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard,
or you can get it from the URL of the Getting Started page where you launched this notebook.
Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint.
ENDPOINT should contain only the hostname part of the URL.
For example, if the URL of the KFP dashboard is
<a href="https
Step5: Set the image name as tfx-pipeline under the current GCP project
Step6: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
Step7: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
Step8: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the Chicago Taxi dataset.
Here is a brief introduction to each of the Python files
Step9: Let's quickly go over the structure of a test file to test Tensorflow code
Step10: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features
Step11: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
Step12: Let's create a TFX pipeline using the tfx pipeline create command.
Note
Step13: While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.
Don't forget to add these files to the source control system (for example, git) along with other source files.
A pipeline definition file for argo will be generated, too.
The name of this file is ${PIPELINE_NAME}.tar.gz.
For example, it will be guided_project_1.tar.gz if the name of your pipeline is guided_project_1.
It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in .gitignore which is generated automatically.
Now start an execution run with the newly created pipeline using the tfx run create command.
Note
Step14: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed
under Experiments in the KFP Dashboard.
Clicking into the experiment will allow you to monitor progress and visualize
the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from
the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard,
you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the Experiments menu, and when you open your
execution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator.
If you are interested in data validation, please see
Get started with Tensorflow Data Validation.
Double-click to change directory to pipeline and double-click again to open pipeline.py.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline.
(Tip
Step15: Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.
See link below to access the dashboard
Step16: Step 6. Add components for training
In this step, you will add components for training and model validation including Transform, Trainer, ResolverNode, Evaluator, and Pusher.
Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. (Tip
Step17: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
Step 7. Try BigQueryExampleGen
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse.
BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct values for your GCP project.
Note
Step18: Step 8. Try Dataflow with KFP
Several TFX Components uses Apache Beam to implement data-parallel pipelines, and it means that you can distribute data processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS.
Double-click to open pipeline.py. Change the value of enable_cache to False.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is guided_project_1 if you didn't change.
Double-click to open kubeflow_dag_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out current beam_pipeline_args that you added in Step 7.)
Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution result for all components if cache is enabled.
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
Step19: You can find your Dataflow jobs in Dataflow in Cloud Console.
Please reset enable_cache to True to benefit from caching execution results.
Double-click to open pipeline.py. Reset the value of enable_cache to True.
Step 9. Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.
Before editing files, you might first have to enable AI Platform Training & Prediction API.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
Change directory one level up, and double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.
Update the pipeline and create an execution run as we did in step 5 and 6. | Python Code:
import os
Explanation: Guided Project 1
Learning Objectives:
Learn how to generate a standard TFX template pipeline using tfx template
Learn how to modify and run a templated TFX pipeline
Note: This guided project is adapted from Create a TFX pipeline using templates).
End of explanation
PATH = %env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
Explanation: Step 1. Environment setup
skaffold tool setup
End of explanation
!which skaffold
Explanation: Modify the PATH environment variable so that skaffold is available:
At this point, you shoud see the skaffold tool with the command which:
End of explanation
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT = shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
Explanation: Environment variable setup
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
End of explanation
ENDPOINT = # Enter your ENDPOINT here.
Explanation: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu.
The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard,
or you can get it from the URL of the Getting Started page where you launched this notebook.
Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint.
ENDPOINT should contain only the hostname part of the URL.
For example, if the URL of the KFP dashboard is
<a href="https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start">https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start</a>,
ENDPOINT value becomes 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com.
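If you prefer not to trim the URL by hand, the hostname part can be extracted programmatically. A small optional sketch (the URL below is just the example quoted above; substitute your own dashboard URL):
from urllib.parse import urlparse
dashboard_url = "https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start"
ENDPOINT = urlparse(dashboard_url).netloc
print(ENDPOINT)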
End of explanation
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = "gcr.io/" + GOOGLE_CLOUD_PROJECT + "/tfx-pipeline"
CUSTOM_TFX_IMAGE
Explanation: Set the image name as tfx-pipeline under the current GCP project:
End of explanation
PIPELINE_NAME = "guided_project_1"
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
Explanation: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
End of explanation
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
Explanation: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
End of explanation
!python -m models.features_test
!python -m models.keras.model_test
Explanation: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the Chicago Taxi dataset.
Here is a brief introduction to each of the Python files:
pipeline - This directory contains the definition of the pipeline
* configs.py — defines common constants for pipeline runners
* pipeline.py — defines TFX components and a pipeline
models - This directory contains ML model definitions.
* features.py, features_test.py — defines features for the model
* preprocessing.py, preprocessing_test.py — defines preprocessing jobs using tf::Transform
models/estimator - This directory contains an Estimator based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using TF estimator
models/keras - This directory contains a Keras based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using Keras
beam_dag_runner.py, kubeflow_dag_runner.py — define runners for each orchestration engine
Running the tests:
You might notice that there are some files with _test.py in their name.
These are unit tests of the pipeline and it is recommended to add more unit
tests as you implement your own pipelines.
You can run unit tests by supplying the module name of the test files with the -m flag.
You can usually get a module name by deleting the .py extension and replacing / with a dot (.).
For example:
End of explanation
!tail -26 models/features_test.py
Explanation: Let's quickly go over the structure of a test file to test Tensorflow code:
End of explanation
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + "-kubeflowpipelines-default"
GCS_BUCKET_NAME
!gsutil mb gs://{GCS_BUCKET_NAME}
Explanation: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features:
python
from models import features
To implement test cases start by defining your own test class inheriting from tf.test.TestCase:
python
class FeaturesTest(tf.test.TestCase):
When you execute the test file with
bash
python -m models.features_test
the main method
python
tf.test.main()
will parse your test class (here: FeaturesTest) and execute every method whose name starts with test. Here we have two such methods, for instance:
python
def testNumberOfBucketFeatureBucketCount(self):
def testTransformedNames(self):
So when you want to add a test case, just add a method to that test class whose name starts by test. Now inside the body of these test methods is where the actual testing takes place. In this case for instance, testTransformedNames test the function features.transformed_name and makes sure it outputs what is expected.
Since your test class inherits from tf.test.TestCase it has a number of helper methods you can use to help you create tests, as for instance
python
self.assertEqual(expected_outputs, obtained_outputs)
that will fail the test case if obtained_outputs do the match the expected_outputs.
Typical examples of test case you may want to implement for machine learning code would comprise test insurring that your model builds correctly, your preprocessing function preprocesses raw data as expected, or that your model can train successfully on a few mock examples. When writing tests make sure that their execution is fast (we just want to check that the code works not actually train a performant model when testing). For that you may have to create synthetic data in your test files. For more information, read the tf.test.TestCase documentation and the Tensorflow testing best practices.
Step 4. Run your first TFX pipeline
Components in the TFX pipeline will generate outputs for each run as
ML Metadata Artifacts, and they need to be stored somewhere.
You can use any storage which the KFP cluster can access, and for this example we
will use Google Cloud Storage (GCS).
Let us create this bucket. Its name will be <YOUR_PROJECT>-kubeflowpipelines-default.
End of explanation
!gsutil cp data/data.csv gs://{GCS_BUCKET_NAME}/tfx-template/data/data.csv
Explanation: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
End of explanation
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
Explanation: Let's create a TFX pipeline using the tfx pipeline create command.
Note: When creating a pipeline for KFP, we need a container image which will
be used to run our pipeline. And skaffold will build the image for us. Because skaffold
pulls base images from the docker hub, it will take 5~10 minutes when we build
the image for the first time, but it will take much less time from the second build.
End of explanation
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.
Don't forget to add these files to the source control system (for example, git) along with other source files.
A pipeline definition file for argo will be generated, too.
The name of this file is ${PIPELINE_NAME}.tar.gz.
For example, it will be guided_project_1.tar.gz if the name of your pipeline is guided_project_1.
It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in .gitignore which is generated automatically.
Now start an execution run with the newly created pipeline using the tfx run create command.
Note: You may see the following error Error importing tfx_bsl_extension.coders. Please ignore it.
Debugging tip: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems.
Please make sure your KFP cluster has permissions to access Google Cloud APIs.
This can be configured when you create a KFP cluster in GCP,
or see Troubleshooting document in GCP.
End of explanation
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed
under Experiments in the KFP Dashboard.
Clicking into the experiment will allow you to monitor progress and visualize
the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from
the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard,
you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the Experiments menu, and when you open your
execution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator.
If you are interested in data validation, please see
Get started with Tensorflow Data Validation.
Double-click to change directory to pipeline and double-click again to open pipeline.py.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline.
(Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it.
You now need to update the existing pipeline with modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline.
End of explanation
print("https://" + ENDPOINT)
Explanation: Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.
See link below to access the dashboard:
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
print("https://" + ENDPOINT)
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Step 6. Add components for training
In this step, you will add components for training and model validation including Transform, Trainer, ResolverNode, Evaluator, and Pusher.
Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. (Tip: search for TODO(step 6):)
As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.
Verify that the pipeline DAG has changed accordingly in the Kubeflow UI:
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
Step 7. Try BigQueryExampleGen
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse.
BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct values for your GCP project.
Note: You MUST set your GCP region in the configs.py file before proceeding
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is guided_project_1 if you didn't change.
Double-click to open kubeflow_dag_runner.py. Uncomment two arguments, query and beam_pipeline_args, for the create_pipeline function.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Step 8. Try Dataflow with KFP
Several TFX Components uses Apache Beam to implement data-parallel pipelines, and it means that you can distribute data processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS.
Double-click to open pipeline.py. Change the value of enable_cache to False.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is guided_project_1 if you didn't change.
Double-click to open kubeflow_dag_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out current beam_pipeline_args that you added in Step 7.)
Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution result for all components if cache is enabled.
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: You can find your Dataflow jobs in Dataflow in Cloud Console.
Please reset enable_cache to True to benefit from caching execution results.
Double-click to open pipeline.py. Reset the value of enable_cache to True.
Step 9. Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.
Before editing files, you might first have to enable AI Platform Training & Prediction API.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
Change directory one level up, and double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.
Update the pipeline and create an execution run as we did in step 5 and 6.
End of explanation |
287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Link Prediction
Definition of Link Prediction
Perform link prediction on dataset
Jaccard coefficient
Preferential Attachment
Step1: Link Prediction
The idea of link prediction was first proposed by Liben-Nowell and Kleinberg in 2004 as the following question
Step2: Preferential Attachment
The preferential attachment method mirrors the “rich get richer” idea -- nodes with more connections are more likely to gain future connections.
Essentially, the measure is the product of a node pair's degrees | Python Code:
import networkx as nx
import matplotlib.pyplot as plt # for plotting graphs
%matplotlib inline
GA = nx.read_gexf('../data/ga_graph.gexf')
Explanation: Link Prediction
Definition of Link Prediction
Perform link prediction on dataset
Jaccard coefficient
Preferential Attachment
End of explanation
preds_jc = nx.jaccard_coefficient(GA)
pred_jc_dict = {}
for u, v, p in preds_jc:
pred_jc_dict[(u,v)] = p
sorted(pred_jc_dict.items(), key=lambda x:x[1], reverse=True)[:10]
extra_attrs = {'finn':('Finn Dandridge','M','S'),
'olivia':('Olivia Harper','F','S'),
'steve':('Steve Murphy','M','S'),
'torres':('Callie Torres','F','B'),
'colin':('Colin Marlow','M','S'),
'grey':('Meredith Grey','F','S'),
'mrs. seabury':('Dana Seabury','F','S'),
'altman':('Teddy Altman','F','S'),
'tucker':('Tucker Jones','M','S'),
'ben':('Ben Warren','M','S'),
"o'malley":("George O'Malley",'M','S'),
'thatch grey':('Thatcher Grey','M','S'),
'susan grey':('Susan Grey','F','S'),
'derek':('Derek Shepherd','M','S'),
'chief':('Richard Webber','M','S'),
'addison':('Addison Montgomery','F','S'),
'karev':('Alex Karev','M','S'),
'hank':('Hank','M','S'),
'lexi':('Lexie Grey','F','S'),
'adele':('Adele Webber','F','S'),
'owen':('Owen Hunt','M','S'),
'sloan':('Mark Sloan','M','S'),
'arizona':('Arizona Robbins','F','G'),
'izzie':('Izzie Stevens','F','S'),
'preston':('Preston Burke','M','S'),
'kepner':('April Kepner','M','S'),
'bailey':('Miranda Bailey','F','S'),
'ellis grey':('Ellis Grey','F','S'),
'denny':('Denny Duquette','M','S'),
'yang':('Cristina Yang','F','S'),
'nancy':('Nancy Shepherd','F','S'),
'avery':('Jackson Avery','M','S')}
for i in GA.nodes():
GA.node[i]["full_name"] = extra_attrs[i][0]
GA.node[i]["gender"] = extra_attrs[i][1]
GA.node[i]["orientation"] = extra_attrs[i][2]
GA.node['grey']
Explanation: Link Prediction
The idea of link prediction was first proposed by Liben-Nowell and Kleinberg in 2004 as the following question:
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future?"
It's an enticing idea and has led to many interesting developments in the network literature. For our example, the question could be rephrased as:
"Given a snapshot of the Grey's Anatomy relationship network, can we infer which new relationships are likely to occur in the near future?"
Sounds awesome, but how does it work?
Jaccard Coefficient
The most popular measures for link prediction analyze the “proximity” of nodes in a network. One way to measure proximity is to see what proportion of neighbors a pair of nodes share. This can be captured succinctly with the Jaccard index.
In the context of a network, we're comparing sets of neighbors:
$$ Jaccard = \frac{|\Gamma(u) \cap \Gamma(v)|}{|\Gamma(u) \cup \Gamma(v)|} $$
where $\Gamma(u)$ denotes the set of neighbors of $u$.
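To make the formula concrete, the coefficient for a single pair can be computed by hand from the neighbor sets and compared with the networkx output above. A small sketch, using two node ids taken from the attribute table above:
u, v = 'grey', 'karev'
nu, nv = set(GA.neighbors(u)), set(GA.neighbors(v))
print(len(nu & nv) / len(nu | nv))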
End of explanation
preds_pa = nx.preferential_attachment(GA)
pred_pa_dict = {}
for u, v, p in preds_pa:
pred_pa_dict[(u,v)] = p
sorted(pred_pa_dict.items(), key=lambda x:x[1], reverse=True)[:10]
Explanation: Preferential Attachment
The preferential attachment method mirrors the “rich get richer” idea -- nodes with more connections are more likely to gain future connections.
Essentially, the measure is the product of a node pair's degrees:
$$ PA = |\Gamma(u)| \bullet |\Gamma(v)|$$
where $\Gamma(u)$ denotes the set of neighbors (degree) of $u$.
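The same score can be reproduced by hand for a single pair, which is a quick sanity check on the formula (reusing the node ids from the Jaccard example above):
print(GA.degree('grey') * GA.degree('karev'))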
End of explanation |
288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Window functions
Stingray now has a bunch of window functions that can be used for various applications in signal processing.
Windows available include
Step1: create_window function in stingray.utils takes two parameters.
N
Step2: Parzen Window
Step3: Hamming Window
Step4: Hanning Window
Step5: Triangular Window
Step6: Welch Window
Step7: Blackmann's Window
Step8: Flat Top Window | Python Code:
from stingray.utils import create_window
from scipy.fftpack import fft, fftshift, fftfreq
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Window functions
Stingray now has a bunch of window functions that can be used for various applications in signal processing.
Windows available include:
1. Uniform or Rectangular Window
2. Parzen window
3. Hamming window
4. Hanning Window
5. Triangular window
6. Welch Window
7. Blackmann Window
8. Flat-top Window
All windows are available in the stingray.utils package and can be used by calling the create_window function. Below are some examples demonstrating the different window functions.
End of explanation
N = 100
window = create_window(N)
plt.plot(window)
plt.title("Uniform window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window, nfft) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Uniform window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: create_window function in stingray.utils takes two parameters.
N : Number of data points in the window
window_type : Type of window to create. Default is uniform.
Uniform Window
End of explanation
N = 100
window = create_window(N, window_type='parzen')
plt.plot(window)
plt.title("Parzen window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Parzen window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Parzen Window
End of explanation
N = 50
window = create_window(N, window_type='hamming')
plt.plot(window)
plt.title("Hamming window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Hamming window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Hamming Window
End of explanation
N = 50
window = create_window(N, window_type='hanning')
plt.plot(window)
plt.title("Hanning window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Hanning window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Hanning Window
End of explanation
N = 50
window = create_window(N, window_type='triangular')
plt.plot(window)
plt.title("Traingualr window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Triangular window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Triangular Window
End of explanation
N = 50
window = create_window(N, window_type='welch')
plt.plot(window)
plt.title("Welch window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Welch window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Welch Window
End of explanation
N = 50
window = create_window(N, window_type='blackmann')
plt.plot(window)
plt.title("Blackmann window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Blackmann window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Blackmann's Window
End of explanation
N = 50
window = create_window(N, window_type='flat-top')
plt.plot(window)
plt.title("Flat-top window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Flat-top window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Flat Top Window
End of explanation |
289 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimize the maximum difference of any pair by doubling odd elements and reducing even elements by half
Function to minimize the maximum difference between any pair of elements of the array by the given operations ; Traverse the array ; If current element is even ; Insert it into even ; Otherwise ; Make it even by multiplying by 2 and insert it into set ; Calculate difference between first and the last element of the set ; Iterate until difference is minimized ; Erase the current element ; Reduce current element by half and insert it into the Set ; Update difference ; Return the resultant difference ; Driver Code
| Python Code:
def minimumMaxDiff(nums ) :
s = { }
for i in range(len(nums ) ) :
if(nums[i ] % 2 == 0 ) :
s[nums[i ] ] = 1
else :
s[nums[i ] * 2 ] = 1
sr = list(s . keys() )
res = sr[- 1 ] - sr[0 ]
while(list(s . keys() )[- 1 ] % 2 == 0 ) :
r = list(s . keys() )
x = r[- 1 ]
del s[x ]
s[x // 2 ] = 1
rr = list(s . keys() )
res = min(res , rr[- 1 ] - r[0 ] )
return res
if __name__== ' __main __' :
arr =[1 , 2 , 5 , 9 ]
print(minimumMaxDiff(arr ) )
|
290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adding Datasets
Now we'll create an empty lc dataset
Step3: Running Compute
Now we'll compute synthetics at the times provided using the default options
Step4: Plotting
Now we can simply plot the resulting synthetic light curve. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,201), dataset='mylc')
Explanation: Adding Datasets
Now we'll create an empty lc dataset:
End of explanation
b.run_compute(irrad_method='none')
Explanation: Running Compute
Now we'll compute synthetics at the times provided using the default options
End of explanation
afig, mplfig = b['mylc@model'].plot(show=True)
afig, mplfig = b['mylc@model'].plot(x='phases', show=True)
Explanation: Plotting
Now we can simply plot the resulting synthetic light curve.
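If you want the synthetic arrays themselves rather than a plot, they can be pulled out of the bundle by twig. A small sketch (the twig strings assume the mylc dataset name used above and PHOEBE's usual qualifier@dataset@context addressing):
times = b.get_value('times@mylc@model')
fluxes = b.get_value('fluxes@mylc@model')
print(times[:5], fluxes[:5])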
End of explanation |
291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rand 2011 Cooperation Study
This notebook outlines how to recreate the analysis of the Rand et al. 2011 study "Dynamic social networks promote cooperation in experiments with humans" Link to Paper
This workbook focuses on the re-wire analysis workflow portion of the study
Run the cooperation analysis first for a step-by-step description of interacting with Bedrock; this workflow uses those concepts to complete the rewire study
Step1: Check for csv file locally
The following code opens the file and prints out the first part. The file must be a csv file with a header that has labels for each column. The file is a comma-delimited CSV.
Step2: Now Upload the source file to the Bedrock Server
This code block uses the Spreadsheet ingest module to upload the source file to Bedrock. Note
Step3: Create a Bedrock Matrix from the CSV Source
In order to use the data, the data source must be converted to a Bedrock matrix. The following code steps through that process. Here we are doing a simple transform of csv to matrix. There are options to apply filters (like renaming columns, excluding columns, etc.)
Step4: Look at basic statistics on the source data
Here we can see that Bedrock has computed some basic statistics on the source data.
For numeric data
The quartiles, max, mean, min, and standard deviation are provided
For non-numeric data
The label values and counts for each label are provided.
For both types
The proposed tags and data type that Bedrock is suggesting are provided
Step5: Create a filtered matrix where previouslytie==0
Step6: Look at otherd effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
Step7: Look at otherd and roundnum effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
Step8: Previouslytie == 1
Step9: Look at otherd effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
Step10: Look at otherd and roundnum effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
Step11: Filter where previouslytie == otherD
Step12: Look at otherd effect on act when prevtie == otherD
Step13: Look at otherD and roundnum effect on act when prevtie == otherD
Step14: Look at CC v CD/DC
Step15: Look at CC v DD
Step16: Look at DD v CD/DC | Python Code:
from bedrock.client.client import BedrockAPI
import requests
import pandas
import pprint
SERVER = "http://localhost:81/"
api = BedrockAPI(SERVER)
Explanation: Rand 2011 Cooperation Study
This notebook outlines how to recreate the analysis of the Rand et al. 2011 study "Dynamic social networks promote cooperation in experiments with humans" Link to Paper
This workbook focuses on the re-wire analysis workflow portion of the study
Run the cooperation analysis first for a step-by-step description of interacting with Bedrock; this workflow uses those concepts to complete the rewire study
End of explanation
filepath = 'Rand2011PNAS_rewire_data.csv'
datafile = pandas.read_csv('Rand2011PNAS_rewire_data.csv')
datafile.head(10)
null_data = datafile[datafile.isnull().any(axis=1)]
null_data
Explanation: Check for csv file locally
The following code opens the file and prints out the first part. The file must be a csv file with a header that has labels for each column. The file is a comma-delimited CSV.
End of explanation
ingest_id = 'opals.spreadsheet.Spreadsheet.Spreadsheet'
resp = api.put_source('Rand2011_rewire', ingest_id, 'default', {'file': open(filepath, "rb")})
if resp.status_code == 201:
source_id = resp.json()['src_id']
print('Source {0} successfully uploaded'.format(filepath))
else:
try:
print("Error in Upload: {}".format(resp.json()['msg']))
except Exception:
pass
try:
source_id = resp.json()['src_id']
print("Using existing source. If this is not the desired behavior, upload with a different name.")
except Exception:
print("No existing source id provided")
Explanation: Now Upload the source file to the Bedrock Server
This code block uses the Spreadsheet ingest module to upload the source file to Bedrock. Note: This simply copies the file to the server, but does not create a Bedrock Matrix format
If the following fails to upload, check that the csv file is in the correct comma-delimited format with headers.
End of explanation
resp = api.create_matrix(source_id, 'rand_mtx')
base_mtx = resp[0]
matrix_id = base_mtx['id']
base_mtx
Explanation: Create a Bedrock Matrix from the CSV Source
In order to use the data, the data source must be converted to a Bedrock matrix. The following code steps through that process. Here we are doing a simple transform of csv to matrix. There are options to apply filters (like renaming columns, excluding columns, etc.)
End of explanation
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = []
summary_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_mtx_summary', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(summary_mtx['src_id'], summary_mtx['id'], 'matrix.csv')
output
Explanation: Look at basic statistics on the source data
Here we can see that Bedrock has computed some basic statistics on the source data.
For numeric data
The quartiles, max, mean, min, and standard deviation are provided
For non-numeric data
The label values and counts for each label are provided.
For both types
The proposed tags and data type that Bedrock is suggesting are provided
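For a quick local cross-check of these statistics, a similar summary can be produced directly from the pandas frame loaded earlier:
datafile.describe(include='all')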
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"previouslytie"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"0"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'prevtie0', input_data=inputData, parameter_data=paramsData)
f = api.download_results_matrix(filtered_mtx['src_id'], filtered_mtx['id'], 'matrix.csv', remote_header_file='features.txt')
f.head(10)
Explanation: Create a filtered matrix where previouslytie==0
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"colname","value":"otherD"},
{"attrname":"comparator","value":"notnull"},
{"attrname":"value","value":""}
]
otherd_mtx = api.run_analytic(analytic_id, filtered_mtx, 'otherD', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': otherd_mtx,
'features.txt': otherd_mtx
}
paramsData = [
{"attrname":"formula","value":"C(nowtie) ~ C(otherD)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, otherd_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at otherd effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': otherd_mtx,
'features.txt': otherd_mtx
}
paramsData = [
{"attrname":"formula","value":"C(nowtie) ~ C(otherD) + C(roundnum)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, otherd_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at otherd and roundnum effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"previouslytie"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"1"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'prevtie1', input_data=inputData, parameter_data=paramsData)
f = api.download_results_matrix(filtered_mtx['src_id'], filtered_mtx['id'], 'matrix.csv', remote_header_file='features.txt')
f.head(10)
Explanation: Previouslytie == 1
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"colname","value":"otherD"},
{"attrname":"comparator","value":"notnull"},
{"attrname":"value","value":""}
]
otherd_mtx = api.run_analytic(analytic_id, filtered_mtx, 'otherD', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': otherd_mtx,
'features.txt': otherd_mtx
}
paramsData = [
{"attrname":"formula","value":"C(nowtie) ~ C(otherD)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, otherd_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at otherd effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': otherd_mtx,
'features.txt': otherd_mtx
}
paramsData = [
{"attrname":"formula","value":"C(nowtie) ~ C(otherD) + C(roundnum)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, otherd_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at otherd and roundnum effect on nowtie
Note we have to remove rows that contain missing values for either our exogenous or endogenous factors or else clustered standard errors will fail
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByComplexCondition.SelectByComplexCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"condition","value":"(previouslytie == otherD)"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'prevtie1', input_data=inputData, parameter_data=paramsData)
f = api.download_results_matrix(filtered_mtx['src_id'], filtered_mtx['id'], 'matrix.csv', remote_header_file='features.txt')
f.head(10)
Explanation: Filter where previouslytie == otherD
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"colname","value":"otherD"},
{"attrname":"comparator","value":"notnull"},
{"attrname":"value","value":""}
]
otherd_mtx = api.run_analytic(analytic_id, filtered_mtx, 'otherD', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': otherd_mtx,
'features.txt': otherd_mtx
}
paramsData = [
{"attrname":"formula","value":"C(act) ~ C(otherD)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":""}
]
result_mtx = api.run_analytic(analytic_id, otherd_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at otherd effect on act when prevtie == otherD
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': otherd_mtx,
'features.txt': otherd_mtx
}
paramsData = [
{"attrname":"formula","value":"C(act) ~ C(otherD) + C(roundnum)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":""}
]
result_mtx = api.run_analytic(analytic_id, otherd_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at otherD and roundnum effect on act when prevtie == otherD
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByComplexCondition.SelectByComplexCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"condition","value":'(previouslytie == 1) & ((state=="CC") | (state=="CD") | (state=="DC"))'}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'prevtie1', input_data=inputData, parameter_data=paramsData)
f = api.download_results_matrix(filtered_mtx['src_id'], filtered_mtx['id'], 'matrix.csv', remote_header_file='features.txt')
f.head(10)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"C(break_) ~ C(CC)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at CC v CD/DC
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByComplexCondition.SelectByComplexCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"condition","value":'(previouslytie == 1) & ((state=="CC") | (state=="DD"))'}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'prevtie1', input_data=inputData, parameter_data=paramsData)
f = api.download_results_matrix(filtered_mtx['src_id'], filtered_mtx['id'], 'matrix.csv', remote_header_file='features.txt')
f.head(10)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"C(break_) ~ C(CC)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at CC v DD
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByComplexCondition.SelectByComplexCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"condition","value":'(previouslytie == 1) & ((state=="DD") | (state=="CD") | (state=="DC"))'}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'prevtie1', input_data=inputData, parameter_data=paramsData)
f = api.download_results_matrix(filtered_mtx['src_id'], filtered_mtx['id'], 'matrix.csv', remote_header_file='features.txt')
f.head(10)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"C(break_) ~ C(DD)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rewire_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Look at DD v CD/DC
End of explanation |
292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical Deep Learning for Coders, v3
Lesson3_imdb
IMDB影评数据
Step1: Preparing the data 准备数据
First let's download the dataset we are going to study. The dataset has been curated by Andrew Maas et al. and contains a total of 100,000 reviews on IMDB. 25,000 of them are labelled as positive and negative for training, another 25,000 are labelled for testing (in both cases they are highly polarized). The remaining 50,000 are additional unlabelled data (but we will find a use for them nonetheless).
首先,让我们先下载需要使用的数据。 IMDB 数据集由Andrew Maas等人收集,里面有10万条IMDB网站上的影评。其中2.5万条是积极的评论, 2.5万条是消极的评论, 另外2.5万条是用作测试的评论(这些数据两极分化得很厉害),剩余的5万条是额外的未标记的数据(以后我们会将这些数据用做其他用途)。
We'll begin with a sample we've prepared for you, so that things run quickly before going over the full dataset.
我们一起来看一下提前准备好的样本,这样会比跑遍整个数据集快一些。
Step2: It only contains one csv file, let's have a look at it.
例子里只包含了一个csv文档,我们一起来看一下里面的数据。
Step3: It contains one line per review, with the label ('negative' or 'positive'), the text and a flag to determine if it should be part of the validation set or the training set. If we ignore this flag, we can create a DataBunch containing this data in one line of code
Step4: By executing this line a process was launched that took a bit of time. Let's dig a bit into it. Images could be fed (almost) directly into a model because they're just a big array of pixel values that are floats between 0 and 1. A text is composed of words, and we can't apply mathematical functions to them directly. We first have to convert them to numbers. This is done in two different steps
Step5: Next time we launch this notebook, we can skip the cell above that took a bit of time (and that will take a lot more when you get to the full dataset) and load those results like this
Step6: Tokenization 分词
The first step of processing we make the texts go through is to split the raw sentences into words, or more exactly tokens. The easiest way to do this would be to split the string on spaces, but we can be smarter
Step7: The texts are truncated at 100 tokens for more readability. We can see that it did more than just split on space and punctuation symbols
Step8: And if we look at what's in our datasets, we'll see the tokenized text as a representation
Step9: But the underlying data is all numbers
但实际上,底层的数据形式都是数字
Step10: With the data block API 用data block API处理文字
We can use the data block API with NLP and have a lot more flexibility than what the default factory methods offer. In the previous example for instance, the data was randomly split between train and validation instead of reading the third column of the csv.
我们可以在NLP中使用数据块API,它比默认的工厂方法更灵活地处理各种情况。比如在之前的例子中,数据随机分为训练集和验证集,而非通过读取csv中第三列的标签来分组。
With the data block API though, we have to manually call the tokenize and numericalize steps. This allows more flexibility, and if you're not using the defaults from fastai, the various arguments to pass will appear in the step where they're relevant, so it'll be more readable.
不过如果要使用数据块API,我们需要手动完成分词和数值化的各个步骤。这样可以更加灵活。如果你没有使用fastai工具包里的默认设置,你也可以像下面的步骤一样进行各种设置,并且代码可读性也更高。
Step11: Language model 语言模型
Note that language models can use a lot of GPU, so you may need to decrease batchsize here.
需要注意的是语言文字模型会用掉许多GPU,因此你可能会需要减小每个批次的样本容量。
Step12: Now let's grab the full dataset for what follows.
现在我们为接下来的步骤获取完整的数据集。
Step13: The reviews are in a training and test set following an imagenet structure. The only difference is that there is an unsup folder on top of train and test that contains the unlabelled data.
现在影评遵循imagenet的结构分到了训练集和测试集中。唯一的区别是,在测试集和训练集上会有个包括未标记数据的unsup文件夹。
We're not going to train a model that classifies the reviews from scratch. Like in computer vision, we'll use a model pretrained on a bigger dataset (a cleaned subset of wikipedia called wikitext-103). That model has been trained to guess what the next word is, its input being all the previous words. It has a recurrent structure and a hidden state that is updated each time it sees a new word. This hidden state thus contains information about the sentence up to that point.
我们不需要从无到有地训练一个影评分类模型。就像计算机视觉模型一样,我们将使用一个在更大训练集上预训练好的模型(在维基上有一个清洗好的子集 wikitext-103 )。这个模型被训练来猜测下一个词是什么,它的输入数据是之前已有的词汇。该模型采用循环神经网络结构,并且有一个每次看到新词都会更新的隐层状态。 隐层状态里包含的信息,是文本中到截止这个点之前的所有句子。
We are going to use that 'knowledge' of the English language to build our classifier, but first, like for computer vision, we need to fine-tune the pretrained model to our particular dataset. Because the English of the reviews left by people on IMDB isn't the same as the English of wikipedia, we'll need to adjust the parameters of our model by a little bit. Plus there might be some words that would be extremely common in the reviews dataset but would be barely present in wikipedia, and therefore might not be part of the vocabulary the model was trained on.
我们用这样的预训练模型信息来创建我们的分类器。但首先,正如计算机视觉一样,我们需要对预训练的模型进行调参来适应我们的这个数据集。由于IMDB上影评的英语语言和维基百科上的英语语言风格不尽相同,我们需要将参数进行一定的调整。另外,可能会有些词在影评数据中出现的频率较高,但在维基百科上基本没出现过,因此可能和模型预训练时用的词库不太一样。
This is where the unlabelled data is going to be useful to us, as we can use it to fine-tune our model. Let's create our data object with the data block API (next line takes a few minutes).
我们可以用未标记的数据进行模型微调,这就是未标记数据具有价值的地方。让我们通过数据块API来建立一个数据对象。(下行会花费数分钟的时间)
Step14: We have to use a special kind of TextDataBunch for the language model, that ignores the labels (that's why we put 0 everywhere), will shuffle the texts at each epoch before concatenating them all together (only for training, we don't shuffle for the validation set) and will send batches that read that text in order with targets that are the next word in the sentence.
对于语言模型,我们需要用一个特殊的TextDataBunch,它会忽略标签(这就是为什么我们给所有地方都设置为0的原因),在将每个轮次的文字合并在一起之前打乱所有的文字(仅限于模型训练,我们不会对验证集进行混洗),并会分批次按顺序读取文字和接下来对应的单词。
The line before being a bit long, we want to load quickly the final ids by using the following cell.
之前的代码会有点长,我们可以用下面的代码用id快速导入对应的文字。
Step15: We can then put this in a learner object very easily with a model loaded with the pretrained weights. They'll be downloaded the first time you'll execute the following line and stored in ~/.fastai/models/ (or elsewhere if you specified different paths in your config file).
我们可以很轻易地将模型和预训练的权重结合为一个学习器对象。在你第一次运行下面的代码时,所有模型的信息会下载并存储到~/.fastai/models/ 或者其他由你的config文件指定的地方。
Step16: To complete the fine-tuning, we can then unfreeze and launch a new training.
要完成微调,我们可以解冻模型并开启新的训练。
Step17: How good is our model? Well let's try to see what it predicts after a few given words.
我们的模型表现怎么样呢? 嗯,让我们来看看在几个词过后模型预测出的词是怎样的。
Step18: We have to save not only the model, but also its encoder, the part that's responsible for creating and updating the hidden state. For the next part, we don't care about the part that tries to guess the next word.
我们不但保存了模型,而且保存了它的编码器,(也就是)负责创建和更新隐层状态(的部分)。剩下的负责猜词的部分,我们就不管了。
Step19: Classifier 分类器
Now, we'll create a new data object that only grabs the labelled data and keeps those labels. Again, this line takes a bit of time.
现在我们要创建一个新的数据对象,仅抓取有标签的数据并且保留标签。这个步骤可能会需要一点时间。
Step20: We can then create a model to classify those reviews and load the encoder we saved before.
我们可以建立一个模型来对影评进行分类,并且导入之前存储好的编码器。 | Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
Explanation: Practical Deep Learning for Coders, v3
Lesson3_imdb
IMDB影评数据
End of explanation
path = untar_data(URLs.IMDB_SAMPLE)
path.ls()
Explanation: Preparing the data 准备数据
First let's download the dataset we are going to study. The dataset has been curated by Andrew Maas et al. and contains a total of 100,000 reviews on IMDB. 25,000 of them are labelled as positive and negative for training, another 25,000 are labelled for testing (in both cases they are highly polarized). The remaining 50,000 are additional unlabelled data (but we will find a use for them nonetheless).
首先,让我们先下载需要使用的数据。 IMDB 数据集由Andrew Maas等人收集,里面有10万条IMDB网站上的影评。其中2.5万条是积极的评论, 2.5万条是消极的评论, 另外2.5万条是用作测试的评论(这些数据两极分化得很厉害),剩余的5万条是额外的未标记的数据(以后我们会将这些数据用做其他用途)。
We'll begin with a sample we've prepared for you, so that things run quickly before going over the full dataset.
我们一起来看一下提前准备好的样本,这样会比跑遍整个数据集快一些。
End of explanation
df = pd.read_csv(path/'texts.csv')
df.head()
df['text'][1]
Explanation: It only contains one csv file, let's have a look at it.
例子里只包含了一个csv文档,我们一起来看一下里面的数据。
End of explanation
data_lm = TextDataBunch.from_csv(path, 'texts.csv')
Explanation: It contains one line per review, with the label ('negative' or 'positive'), the text and a flag to determine if it should be part of the validation set or the training set. If we ignore this flag, we can create a DataBunch containing this data in one line of code:
文档里的每一行都是一个影评,影评附有标签(“负面”或是“正面”)、评论文字以及一个标明是属于训练集还是验证集的标签,如果我们忽略这个(标明所属数据集的)标签,我们可以有下面这行代码来产生一个 DataBunch(数据堆):
End of explanation
data_lm.save()
Explanation: By executing this line a process was launched that took a bit of time. Let's dig a bit into it. Images could be fed (almost) directly into a model because they're just a big array of pixel values that are floats between 0 and 1. A text is composed of words, and we can't apply mathematical functions to them directly. We first have to convert them to numbers. This is done in two different steps: tokenization and numericalization. A TextDataBunch does all of that behind the scenes for you.
运行这行代码会启动一个需要稍微花点时间的程序,让我们来更深入地了解一下。图像本质上是一个巨大的像素值数列,这个数列由0到1 之间的数字组成,因此图像数据基本上可以直接输入到模型中。但是,一段文字是由词组成的,而我们不能直接对词运用数学函数。那么我们首先需要将这些信息转化为数字。这一过程需要通过两部完成:分词和数值化。TextDataBunch在幕后为您完成所有这些工作。
Before we delve into the explanations, let's take the time to save the things that were calculated.
在我们开始讲解内容之前,让我们先花点时间将计算好的数据存档。
End of explanation
data = load_data(path)
Explanation: Next time we launch this notebook, we can skip the cell above that took a bit of time (and that will take a lot more when you get to the full dataset) and load those results like this:
下次我们启动这个notebook, 可以直接跳过之前稍费时间的单元格,直接用下面的代码载入之前保存的结果(如果你载入的是全部数据,之前这些步骤会花费更多时间):
End of explanation
data = TextClasDataBunch.from_csv(path, 'texts.csv')
data.show_batch()
Explanation: Tokenization 分词
The first step of processing we make the texts go through is to split the raw sentences into words, or more exactly tokens. The easiest way to do this would be to split the string on spaces, but we can be smarter:
处理数据的第一步是将文字分拆成单词, 或者更确切地说, 标准词(tokens)。最简单的方式是基于空格对句子进行分拆, 但我们能更智能地分词:
we need to take care of punctuation
<br>我们需要考虑标点
some words are contractions of two different words, like isn't or don't
<br>有些词是由两个不同的词缩写的,比如isn't或don't
we may need to clean some parts of our texts, if there's HTML code for instance
<br>我们可能需要清理文本的某些部分,比如文字中可能会有HTML代码
To see what the tokenizer had done behind the scenes, let's have a look at a few texts in a batch.<br>
为了明白分词器幕后是如何工作的,让我们来看一下数据堆中的一些文本。
End of explanation
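# A tiny illustration (not from the original notebook) of why a naive split is not
# enough: punctuation stays glued to words, contractions are not separated, and HTML
# markup survives untouched.
raw_review = "I didn't like this movie. It wasn't good!<br />"
print(raw_review.split())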
data.vocab.itos[:10]
Explanation: The texts are truncated at 100 tokens for more readability. We can see that it did more than just split on space and punctuation symbols:
为了更简洁易读,我们将所有评论删节到100个词。我们可以看到文字标记化算法不仅仅是基于空格和标点进行了分词:
the "'s" are grouped together in one token
<br>所有“'s”都被合并为一个标准词
the contractions are separated like this: "did", "n't"
<br>词语的缩写被分开,比如“did” 和 “n't”
content has been cleaned for any HTML symbol and lower cased
<br>所有包含HTML连接的内容被清理,并且所有文字都采用小写
there are several special tokens (all those that begin by xx), to replace unknown tokens (see below) or to introduce different text fields (here we only have one).
<br>为了代替未知的标准词(如下)或者引入不同的文本字段(这里我们只有一个),(在结果中可以看到)有一些特殊的标准词(它们都以xx开头)
Numericalization 数值化
Once we have extracted tokens from our texts, we convert to integers by creating a list of all the words used. We only keep the ones that appear at least twice with a maximum vocabulary size of 60,000 (by default) and replace the ones that don't make the cut by the unknown token UNK.
一旦我们从文本中完成了标准词提取,就会生成一个包含所有词汇的列表,将标准词转化成整数。这里我们只保留至少出现两次的标准词,并设置词库上限为60,000(默认设置), 同时将所有不能分进行分词的词标记为“未知标准词” UNK。
The correspondence from ids to tokens is stored in the vocab attribute of our datasets, in a dictionary called itos (for int to string).
id和标准词的关系存储在数据集的vocab属性中,在字典 itos 中(由int类型转换成string类型)。
End of explanation
data.train_ds[0][0]
Explanation: And if we look at what's in our datasets, we'll see the tokenized text as a representation:
如果我们查看数据集里的“what's”的形式,我们会看到如下经过分词后的文本:
End of explanation
data.train_ds[0][0].data[:10]
Explanation: But the underlying data is all numbers
但实际上,底层的数据形式都是数字
End of explanation
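# Sketch using fastai v1's Vocab API (itos/stoi/textify) to round-trip between the
# ids shown above and their tokens; the attribute names assume the v1 API.
ids = data.train_ds[0][0].data[:10]
print(ids)
print(data.vocab.textify(ids))    # ids -> tokens
print(data.vocab.stoi['xxbos'])   # token -> id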
data = (TextList.from_csv(path, 'texts.csv', cols='text')
.split_from_df(col=2)
.label_from_df(cols=0)
.databunch())
Explanation: With the data block API 用data block API处理文字
We can use the data block API with NLP and have a lot more flexibility than what the default factory methods offer. In the previous example for instance, the data was randomly split between train and validation instead of reading the third column of the csv.
我们可以在NLP中使用数据块API,它比默认的工厂方法更灵活地处理各种情况。比如在之前的例子中,数据随机分为训练集和验证集,而非通过读取csv中第三列的标签来分组。
With the data block API though, we have to manually call the tokenize and numericalize steps. This allows more flexibility, and if you're not using the defaults from fastai, the various arguments to pass will appear in the step where they're relevant, so it'll be more readable.
不过如果要使用数据块API,我们需要手动完成分词和数值化的各个步骤。这样可以更加灵活。如果你没有使用fastai工具包里的默认设置,你也可以像下面的步骤一样进行各种设置,并且代码可读性也更高。
End of explanation
bs=48
Explanation: Language model 语言模型
Note that language models can use a lot of GPU, so you may need to decrease batchsize here.
需要注意的是语言文字模型会用掉许多GPU,因此你可能会需要减小每个批次的样本容量。
End of explanation
path = untar_data(URLs.IMDB)
path.ls()
(path/'train').ls()
Explanation: Now let's grab the full dataset for what follows.
现在我们为接下来的步骤获取完整的数据集。
End of explanation
data_lm = (TextList.from_folder(path)
#Inputs: all the text files in path
.filter_by_folder(include=['train', 'test', 'unsup'])
#We may have other temp folders that contain text files so we only keep what's in train and test
.split_by_rand_pct(0.1)
#We randomly split and keep 10% (10,000 reviews) for validation
.label_for_lm()
#We want to do a language model so we label accordingly
.databunch(bs=bs))
data_lm.save('data_lm.pkl')
Explanation: The reviews are in a training and test set following an imagenet structure. The only difference is that there is an unsup folder on top of train and test that contains the unlabelled data.
现在影评遵循imagenet的结构分到了训练集和测试集中。唯一的区别是,在测试集和训练集上会有个包括未标记数据的unsup文件夹。
We're not going to train a model that classifies the reviews from scratch. Like in computer vision, we'll use a model pretrained on a bigger dataset (a cleaned subset of wikipedia called wikitext-103). That model has been trained to guess what the next word is, its input being all the previous words. It has a recurrent structure and a hidden state that is updated each time it sees a new word. This hidden state thus contains information about the sentence up to that point.
我们不需要从无到有地训练一个影评分类模型。就像计算机视觉模型一样,我们将使用一个在更大训练集上预训练好的模型(在维基上有一个清洗好的子集 wikitext-103 )。这个模型被训练来猜测下一个词是什么,它的输入数据是之前已有的词汇。该模型采用循环神经网络结构,并且有一个每次看到新词都会更新的隐层状态。 隐层状态里包含的信息,是文本中到截止这个点之前的所有句子。
We are going to use that 'knowledge' of the English language to build our classifier, but first, like for computer vision, we need to fine-tune the pretrained model to our particular dataset. Because the English of the reviews left by people on IMDB isn't the same as the English of wikipedia, we'll need to adjust the parameters of our model by a little bit. Plus there might be some words that would be extremely common in the reviews dataset but would be barely present in wikipedia, and therefore might not be part of the vocabulary the model was trained on.
我们用这样的预训练模型信息来创建我们的分类器。但首先,正如计算机视觉一样,我们需要对预训练的模型进行调参来适应我们的这个数据集。由于IMDB上影评的英语语言和维基百科上的英语语言风格不尽相同,我们需要将参数进行一定的调整。另外,可能会有些词在影评数据中出现的频率较高,但在维基百科上基本没出现过,因此可能和模型预训练时用的词库不太一样。
This is where the unlabelled data is going to be useful to us, as we can use it to fine-tune our model. Let's create our data object with the data block API (next line takes a few minutes).
我们可以用未标记的数据进行模型微调,这就是未标记数据具有价值的地方。让我们通过数据块API来建立一个数据对象。(下行会花费数分钟的时间)
End of explanation
data_lm = load_data(path, 'data_lm.pkl', bs=bs)
data_lm.show_batch()
Explanation: We have to use a special kind of TextDataBunch for the language model, that ignores the labels (that's why we put 0 everywhere), will shuffle the texts at each epoch before concatenating them all together (only for training, we don't shuffle for the validation set) and will send batches that read that text in order with targets that are the next word in the sentence.
对于语言模型,我们需要用一个特殊的TextDataBunch,它会忽略标签(这就是为什么我们给所有地方都设置为0的原因),在将每个轮次的文字合并在一起之前打乱所有的文字(仅限于模型训练,我们不会对验证集进行混洗),并会分批次按顺序读取文字和接下来对应的单词。
The line before being a bit long, we want to load quickly the final ids by using the following cell.
之前的代码会有点长,我们可以用下面的代码用id快速导入对应的文字。
End of explanation
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7))
learn.save('fit_head')
learn.load('fit_head');
Explanation: We can then put this in a learner object very easily with a model loaded with the pretrained weights. They'll be downloaded the first time you'll execute the following line and stored in ~/.fastai/models/ (or elsewhere if you specified different paths in your config file).
我们可以很轻易地将模型和预训练的权重结合为一个学习器对象。在你第一次运行下面的代码时,所有模型的信息会下载并存储到~/.fastai/models/ 或者其他由你的config文件指定的地方。
End of explanation
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7))
learn.save('fine_tuned')
Explanation: To complete the fine-tuning, we can then unfreeze and launch a new training.
要完成微调,我们可以解冻模型并开启新的训练。
End of explanation
learn.load('fine_tuned');
TEXT = "I liked this movie because"
N_WORDS = 40
N_SENTENCES = 2
print("\n".join(learn.predict(TEXT, N_WORDS, temperature=0.75) for _ in range(N_SENTENCES)))
Explanation: How good is our model? Well let's try to see what it predicts after a few given words.
我们的模型表现怎么样呢? 嗯,让我们来看看在几个词过后模型预测出的词是怎样的。
End of explanation
learn.save_encoder('fine_tuned_enc')
Explanation: We have to save not only the model, but also its encoder, the part that's responsible for creating and updating the hidden state. For the next part, we don't care about the part that tries to guess the next word.
我们不但保存了模型,而且保存了它的编码器,(也就是)负责创建和更新隐层状态(的部分)。剩下的负责猜词的部分,我们就不管了。
End of explanation
path = untar_data(URLs.IMDB)
data_clas = (TextList.from_folder(path, vocab=data_lm.vocab)
#grab all the text files in path
.split_by_folder(valid='test')
#split by train and valid folder (that only keeps 'train' and 'test' so no need to filter)
.label_from_folder(classes=['neg', 'pos'])
#label them all with their folders
.databunch(bs=bs))
data_clas.save('data_clas.pkl')
data_clas = load_data(path, 'data_clas.pkl', bs=bs)
data_clas.show_batch()
Explanation: Classifier 分类器
Now, we'll create a new data object that only grabs the labelled data and keeps those labels. Again, this line takes a bit of time.
现在我们要创建一个新的数据对象,仅抓取有标签的数据并且保留标签。这个步骤可能会需要一点时间。
End of explanation
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('fine_tuned_enc')
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7))
learn.save('first')
learn.load('first');
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2), moms=(0.8,0.7))
learn.save('second')
learn.load('second');
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3), moms=(0.8,0.7))
learn.save('third')
learn.load('third');
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3), moms=(0.8,0.7))
learn.predict("I really loved that movie, it was awesome!")
Explanation: We can then create a model to classify those reviews and load the encoder we saved before.
我们可以建立一个模型来对影评进行分类,并且导入之前存储好的编码器。
End of explanation |
293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import Packages, Load Data
Step1: Look at the data
Patterns differ from state to state
Step2: What do we learn?
Variation over provinces
If we ignore space
Step3: What do we learn?
Autocorrelation in time
Some weird time stuff going on at later lags.
If we ignore time
Step4: In context, imagine y is tweet volume, and x is some outcome of interest that occurs at the local level. We know that the tweet volume is higher in Anbar than Baghdad. In these circumstances, local effects would be masked in a bivariate correlation.
Note also that, while it is a good idea to look at your data's distributions, you want to make these decisions before you start modeling if you can. You <i>can</i> lie with statistics. And human heuristics make it easy to justify. Protect yourself from yourself, so you don't. Think about model design before you look at results
Takeaway
Step5: Those are some long tails...
Step6: Poisson distributions have mean=variance.
Use Negative Binomial instead to model mean and variance separately
Negative Binomial Distribution is the most appropriate distribution for our outcome variables of interest.
Note
Step7: Just doesn't fit
Step8: Heteroskedastic
Step9: And now with a GLM | Python Code:
import pandas as pd
import statsmodels.api as sm
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
%matplotlib inline
import datetime
import time
migration_df = pd.read_csv('migration_dums.csv')
migration_df.set_index(['date_stamp','Province'],inplace=True)
Explanation: Import Packages, Load Data
End of explanation
f,a=plt.subplots(3,1)
a[0].set_title("Country-wide Volume")
a[1].set_title("Anbar Volume")
a[2].set_title("Babylon Volume")
foo = migration_df.loc[:,['vol','vol_arabic','Date']].groupby('Date').sum()
foo.loc[:,['vol','vol_arabic']].plot(ax=a[0],figsize=(10,7))
foo = migration_df.loc[(slice(None),'Anbar'),['vol','vol_arabic']]
foo.reset_index('Province').plot(ax=a[1])
migration_df.loc[(slice(None),'Babylon'),['vol','vol_arabic']].reset_index('Province').plot(ax=a[2])
f.tight_layout()
# f,a=plt.figure(figsize=(5,5))
vol_plot=migration_df.loc[:,['vol']].unstack(level="Province")
vol_plot.columns = vol_plot.columns.droplevel(0)
vol_plot.drop('Sulaymaniyah',axis=1,inplace=True)
ax =vol_plot.loc[:,['Anbar','Babylon','Thi-Qar','Baghdad']].plot(kind='hist',alpha=.5,bins=50)
# vol_plot.plot.density()
ax.figsize=(10,5)
Explanation: Look at the data
Patterns differ from state to state
End of explanation
from statsmodels.tsa.stattools import acf, pacf
foo = migration_df.loc[:,['vol','vol_arabic','origin','destination','Date']].groupby('Date').sum()
f,axs = plt.subplots(5,2)
axs[0][0].set_title('acf for English')
axs[0][1].set_title('pacf for English')
axs[1][0].set_title('acf for Arabic')
axs[1][1].set_title('pacf for Arabic')
axs[2][0].set_title('acf for Arabic dif')
axs[2][1].set_title('pacf for Arabic dif')
axs[3][0].set_title('acf for origin')
axs[3][1].set_title('pacf for origin')
axs[4][0].set_title('acf for destination')
axs[4][1].set_title('pacf for destination')
a = acf(foo.vol)
a = pd.DataFrame([a]).T
a.plot(kind='bar',ax = axs[0][0],figsize=(10,12))
# foo = foo.dropna(axis=0)
a = pacf(foo.vol)
a = pd.DataFrame([a]).T
a = a.dropna(axis=0)
a.plot(kind='bar',ax = axs[0][1])
a = acf(foo.origin)
a = pd.DataFrame([a]).T
a.plot(kind='bar',ax = axs[3][0])
a = pacf(foo.origin)
a = pd.DataFrame([a]).T
a = a.dropna(axis=0)
a.plot(kind='bar',ax = axs[3][1],ylim=[-10,3])
foo = foo.dropna(axis=0)
a = acf(foo.destination)
a = pd.DataFrame([a]).T
a.plot(kind='bar',ax = axs[4][0])
a = pacf(foo.destination)
a = pd.DataFrame([a]).T
a = a.dropna(axis=0)
a.plot(kind='bar',ax = axs[4][1])
foo = foo.dropna(axis=0)
a = acf(foo.vol_arabic)
a = pd.DataFrame([a]).T
a.plot(kind='bar',ax = axs[1][0])
a = pacf(foo.vol_arabic)
a = pd.DataFrame([a]).T
a = a.dropna(axis=0)
a.plot(kind='bar',ax = axs[1][1])
foo['vol_arabic_dif'] = foo.vol_arabic- foo.vol_arabic.shift(1)
foo = foo.dropna(axis=0)
a = acf(foo.vol_arabic_dif)
a = pd.DataFrame([a]).T
a.plot(kind='bar',ax = axs[2][0])
a = pacf(foo.vol_arabic_dif)
a = pd.DataFrame([a]).T
a = a.dropna(axis=0)
a.plot(kind='bar',ax = axs[2][1])
f.tight_layout()
Explanation: What do we learn?
Variation over provinces
If we ignore space:
May be that people tweet about and flee some provinces more than others, says nothing about <b>when</b> people flee
IID violation
Autocorrelation within space. Confidence estimates wrong.
What about time?
End of explanation
bar = pd.DataFrame([1,2,3,4],columns=['x'])
bar['y']=[2,1,4,3]
bar.plot.scatter('x','y')
bar['condition']=[1,1,0,0]
bar['c']=1
print(sm.OLS(bar.y,bar.loc[:,['x','c']]).fit().summary())
bar['fit1']=bar.x*.6+1
plt.plot(bar.x,bar.fit1,"r--")
print('\n\nCorrelation:',sp.stats.stats.pearsonr(bar.x,bar.y)[0])
# bar.loc[bar.condition==1,['x','y']].plot.scatter('x','y')
print(sm.OLS(bar.y,bar.loc[:,['x','c','condition']]).fit().summary())
bar.plot.scatter('x','y',c=['r','r','b','b'])
bar['fit2']=7-bar.x
bar['fit3']=7-bar.x
bar['fit3']=bar.fit3 - 4
plt.plot(bar.loc[bar.condition==0,'x'],bar.loc[bar.condition==0,'fit2'],"b--")
plt.plot(bar.loc[bar.condition==1,'x'],bar.loc[bar.condition==1,'fit3'],"r--")
Explanation: What do we learn?
Autocorrelation in time
Some weird time stuff going on at later lags.
If we ignore time:
AR process, non-stationary data. Reduced predictive validity
Spurious results more likely
IID violation
Autocorrelation within time. Confidence estimates wrong.
What does this mean?
Don't know whether bivariate correlation estimates are noise or 0
We care about where and when something happens, can't get that from country-level pooled estimates
Solution:
differencing, lags, fixed effects
Fixed Effects:
Add a constant for every month, and every place.
If Anbar always has more tweets, compare Anbar against Anbar
Why:
Control for unknowns to <b>isolate effect of the signal </b>
End of explanation
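# Illustrative sketch only: if the month and province dummies did not already ship
# in migration_dums.csv, they could be built from the 'Province' index level and the
# 'Date' column with pd.get_dummies.
fe_frame = pd.get_dummies(migration_df.reset_index(),
                          columns=['Province', 'Date'], drop_first=True)
fe_frame.head()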
migration_df.loc[:,['vol','vol_arabic','origin','destination']].plot.hist(bins=60,alpha=.5)
Explanation: In context, imagine y is tweet volume, and x is some outcome of interest that occurs at the local level. We know that the tweet volume is higher in Anbar than Baghdad. In these circumstances, local effects would be masked in a bivariate correlation.
Note also that, while it is a good idea to look at your data's distributions, you want to make these decisions before you start modeling if you can. You <i>can</i> lie with statistics. And human heuristics make it easy to justify. Protect yourself from yourself, so you don't. Think about model design before you look at results
Takeaway:
Our final model will have a lot of other predictors and controls, but this model doesn't
Can get around that by isolating the signal with fixed effects
Look at the effect of a signal in Anbar on its own, while avoiding comparing it unduly to the effect in Baghdad.
"Partially pooled". Allow regional and temporal variation without multiple comparisions or iid violations with pooled.
Expect similar effects, with different magnitudes
Could go all the way to random effects: allow each governorate to have its own effect, drawn from a distribution, and estimate that distribution. But we don't have that much data here, and we might lose so much power that real results fade away.
OLS vs GLM
Count data
We know the data are count. Poisson <i>should</i> be our first guess
End of explanation
spreads = migration_df.loc[:,['vol','vol_arabic','origin','destination','Orig_difs','Dest_difs']].mean()
spreads = pd.DataFrame(spreads,columns = ['mean'])
spreads['var'] = migration_df.loc[:,['vol','vol_arabic','origin','destination','Orig_difs','Dest_difs']].var(skipna=True)
spreads
Explanation: Those are some long tails...
End of explanation
dates =['14-09',
'14-10', '14-11', '14-12', '15-01', '15-02', '15-03', '15-04',
'15-05', '15-06', '15-07', '15-08', '15-09', '15-10', '15-11',
'15-12', '16-01', '16-02', '16-03', '16-04', '16-05', '16-06',
'16-07', '16-08', '16-09', '16-10', '16-11', '16-12', '17-01',
'17-02', '17-03', '17-04', '17-05',]
provinces = migration_df.index.get_level_values(1).unique()
yvar = 'origin'
xvars = ['vol','origin_lag']
xvars.extend(provinces)
xvars.extend(dates)
glm =False
model_olsg = sm.GLM(migration_df.loc[:,yvar],
migration_df.loc[:,xvars],missing='drop',
family=sm.families.Gaussian(),
)
model_nb = sm.GLM(migration_df.loc[:,yvar],
migration_df.loc[:,xvars],missing='drop',
family=sm.families.NegativeBinomial(),
)
model_ols = sm.OLS(migration_df.loc[:,yvar],
migration_df.loc[:,xvars],missing='drop')
if glm:
results_nb = model_nb.fit()
print(results_nb.summary())
else:
results_olsg = model_olsg.fit()
results_ols = model_ols.fit()
print(results_ols.summary())
Explanation: Poisson distributions have mean=variance.
Use Negative Binomial instead to model mean and variance separately
Negative Binomial Distribution is the most appropriate distribution for our outcome variables of interest.
Note: there are also a lot of zeros, should probably run zero-inflated negative binomial, to model 0s as distinct processes. But that's harder in python, so we can check model fit to see if it's necessary or if we can get reasonable estimates without it.
What's wrong with OLS?
Homoskedasticity Assumption
$P(X = k) = \binom{k+r-1}{k}\,(1-p)^{r}p^{k}$
Variance changes as mean changes. Data are heteroskedastic. Since regression is essentially a way to measure variance, you have to account for the variance appropriately or your certainty estimates are wrong
It doesn't fit the data.
Can predict negative numbers
Different relationship between predictors and probability of observed outcome than a gaussian regression.
End of explanation
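# Hedged sketch: the GLM above uses the default NegativeBinomial dispersion; the
# discrete-count NegativeBinomial model lets the data estimate alpha instead, reusing
# yvar and xvars from the cell above. Convergence may need extra iterations.
nb_free = sm.NegativeBinomial(migration_df.loc[:, yvar],
                              migration_df.loc[:, xvars], missing='drop')
nb_free_res = nb_free.fit(maxiter=500, disp=False)
print(nb_free_res.params.tail())  # the last entry is the estimated alpha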
fig = plt.figure(figsize=(12,8))
fig=sm.graphics.plot_regress_exog(results_ols, "vol",fig=fig)
Explanation: Just doesn't fit:
End of explanation
fig, ax = plt.subplots(figsize=(4,4))
ax.scatter(results_olsg.mu, results_olsg.resid_response)
# ax.hlines(0, 0, 3000000)
# ax.set_xlim(0, 70000)
# ax.set_ylim(0, 70000)
ax.hlines(0, 0, 250000)
ax.set_title('Residual Dependence Plot, Volume and Origin, NB')
ax.set_ylabel('Pearson Residuals')
ax.set_xlabel('Fitted values')
Explanation: Heteroskedastic
End of explanation
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "resid.png")
Explanation: And now with a GLM:
Note: statsmodels isn't as sophisticated as many of the packages in R, and the negative binomial regression is still a little new. Converges with the MASS package in R, but has trouble with Statsmodels. I also just trust MASS a little more than statsmodels. So the results are pasted below:
Call:
glm.nb(formula = origin ~ vol + origin_lag + Anbar + Babylon +
Baghdad + Basrah + Dahuk + Diyala + Erbil + Kerbala + Kirkuk +
Missan + Muthanna + Najaf + Ninewa + Qadissiya + Salah.al.Din +
Sulaymaniyah + Thi.Qar + Wassit + X14.10 + X14.11 + X14.12 +
X15.01 + X15.02 + X15.03 + X15.04 + X15.05 + X15.06 + X15.07 +
X15.08 + X15.09 + X15.10 + X15.11 + X15.12 + X16.01 + X16.02 +
X16.03 + X16.04 + X16.05 + X16.06 + X16.07 + X16.08 + X16.09 +
X16.10 + X16.11 + X16.12 + X17.01 + X17.02 + X17.03 + X17.04 -
1, data = data, init.theta = 1.394043988, link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.9672 -0.6948 -0.1600 0.1415 3.8842
```
Coefficients:
Estimate Std. Error z value Pr(>|z|)
vol 2.301e-05 9.822e-06 2.342 0.019157
origin_lag 1.177e-05 2.679e-06 4.394 1.11e-05
Anbar 9.456e+00 5.647e-01 16.745 < 2e-16
Babylon 8.183e+00 3.059e-01 26.749 < 2e-16
Baghdad 8.718e+00 3.065e-01 28.444 < 2e-16
Basrah -1.776e-01 3.503e-01 -0.507 0.612050
Dahuk -4.087e+00 1.043e+00 -3.918 8.95e-05
Diyala 9.614e+00 3.158e-01 30.441 < 2e-16
Erbil 7.699e+00 3.069e-01 25.089 < 2e-16
Kerbala -3.739e+01 1.125e+07 0.000 0.999997
Kirkuk 9.624e+00 3.124e-01 30.808 < 2e-16
Missan 8.451e-02 3.415e-01 0.247 0.804572
Muthanna -3.739e+01 1.125e+07 0.000 0.999997
Najaf -2.089e+00 4.998e-01 -4.179 2.92e-05
Ninewa 9.628e+00 5.818e-01 16.549 < 2e-16
Qadissiya 1.482e+00 3.154e-01 4.700 2.60e-06
Salah.al.Din 1.018e+01 3.587e-01 28.377 < 2e-16
Sulaymaniyah -1.625e+00 4.444e-01 -3.656 0.000256
Thi.Qar -4.126e+00 1.062e+00 -3.884 0.000103
Wassit -3.739e+01 1.125e+07 0.000 0.999997
X14.10 1.383e-01 3.999e-01 0.346 0.729497
X14.11 6.279e-01 3.805e-01 1.650 0.098899 .
X14.12 6.501e-01 3.806e-01 1.708 0.087623 .
X15.01 7.865e-01 3.785e-01 2.078 0.037704
X15.02 1.454e+00 3.718e-01 3.912 9.14e-05
X15.03 1.516e+00 3.712e-01 4.085 4.41e-05
X15.04 1.433e+00 3.723e-01 3.849 0.000119
X15.05 1.718e-01 3.819e-01 0.450 0.652739
X15.06 1.581e-01 3.815e-01 0.415 0.678462
X15.07 1.622e-01 3.815e-01 0.425 0.670676
X15.08 1.561e-01 3.814e-01 0.409 0.682287
X15.09 1.379e-01 3.815e-01 0.361 0.717814
X15.10 2.568e+00 3.647e-01 7.041 1.90e-12
X15.11 1.951e+00 3.722e-01 5.241 1.60e-07 ***
X15.12 -1.175e-01 3.872e-01 -0.304 0.761502
X16.01 -1.209e-01 3.847e-01 -0.314 0.753366
X16.02 -7.577e-02 3.834e-01 -0.198 0.843339
X16.03 -1.287e-01 3.844e-01 -0.335 0.737728
X16.04 -1.511e-01 3.843e-01 -0.393 0.694187
X16.05 -2.037e-01 3.856e-01 -0.528 0.597330
X16.06 -2.027e-01 3.859e-01 -0.525 0.599386
X16.07 -2.204e-01 3.862e-01 -0.571 0.568232
X16.08 -2.304e-01 3.864e-01 -0.596 0.550960
X16.09 -2.075e-01 3.855e-01 -0.538 0.590401
X16.10 -2.240e-01 3.943e-01 -0.568 0.569996
X16.11 -9.720e-02 3.854e-01 -0.252 0.800856
X16.12 -6.413e-02 3.836e-01 -0.167 0.867236
X17.01 -3.999e-02 3.839e-01 -0.104 0.917048
X17.02 -2.726e-02 3.837e-01 -0.071 0.943351
X17.03 2.561e-02 3.837e-01 0.067 0.946770
X17.04 -7.492e-02 3.843e-01 -0.195 0.845445
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(1.394) family taken to be 1)
Null deviance: 2.8122e+07 on 576 degrees of freedom
Residual deviance: 4.7556e+02 on 525 degrees of freedom
(18 observations deleted due to missingness)
AIC: 6307.7
Number of Fisher Scoring iterations: 1
Theta: 1.394
Std. Err.: 0.127
Warning while fitting theta: alternation limit reached
2 x log-likelihood: -6203.692
Warning message:
In glm.nb(origin ~ vol + origin_lag + Anbar + Babylon + Baghdad + :
alternation limit reached
```
Transform residuals from NB GLM using DHARMa in R
```
sim <- simulateResiduals(fittedModel = m1, n=500)
plotSimulatedResiduals(simulationOutput = sim)```
End of explanation |
294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Classification and Naive Bayes for genre classification using lyrics
In this notebook we look at the problem of classifying songs to three genres (rap, rock and country) based on a simple binary bag of words representation. First we load the data and then we take a look at it. Using our implementation of discrete random variables we generate new random songs. Finally we show how classification can be performed using Bayes Rule. The data comes from the lyrics of songs from the Million Song DataSet and was created for an assignment in my course on MIR.
Data layout and visualization
The data layout and the way the classifier is implemented is not general and not optimal but done for pedagogical purposes. Each genre consists of 1000 tracks and the matrix containing the data is ordered by genre. That way the instances corresponding to a genre can easily be found by the index without having to check the class label as would be the case with a general classifier.
We have created a dictionary of 30 words by taking the 10 "best" words based on tf-idf score for each genre. Each track is represented by a binary vector (a row in the matrix) with ones for dictionary words that are in the track and 0 for words that are not. So the matrix is 3000 instances (3 * 1000 per genre) by 30 for each word in the dictionary. When visualized one can observe the block structure that shows that the the rap tracks have a lot of words from the first 10 words in the dictionary that are characteristic of rap.
Step1: Calculating the 30-dimensional word probability vector for each genre
Let's calculate the word probability vector for each genre and then look at the most probable words for each genre in our data as well as how particular songs are represented as bag of words. We can calculate the probabilities of each word in the dictionary for the songs in each genre by summing the columns of the part of the matrix that corresponds to each genre. As some words might not appear at all I have added 1.0 to both the numerator and denominator. This is a simple form of what's called additive smoothing which is a common technique to avoid zeros for any class conditional probabilities that would lead to the whole likelihood being zero.
Step2: Checking out the words in some songs using the binary representation
Each row of the feature matrix contains ones for each word that is present in the song.
We can view the words of any particular song by mapping these ones using the dictionary of words.
Let's view the words in the 20th track (row of the matrix) of each genre and then look at track 250.
Step3: Generating random songs based on our simplified representation
Now let's generate some random songs represented as bag of words using the calculated word probabilities for each genre. This way we can understand better the assumptions and simplifications of this model. I simply generate 30 random numbers and then, for a particular genre, if the class-conditional probability of a word is greater than the corresponding random number, that word is selected for generation. This gives us a clear idea of what assumptions this Bernoulli Naive Bayes classifier makes. Running the cell multiple times shows the variation we get from this very simple model.
Step4: Using the calculated word probabilities to make a classifier
Now let's look at classifying songs using a naive Bayes Bernoulli classifier. When the representation is binary vectors indicating absence or presence of words it is called a Bernoulli Naive Bayes. If the number of times a word appears in a document affects the classification it is called a Multinomial text classifier.
To make a classification decision we simply calculate the likelihood for each genre independently by taking the products of the genre dependent word probabilities. The genre with the highest likelihood is selected as the predicted class. In a more realistic implementation log-likelihoods would be used to avoid problems with small numbers. Notice that when a word is absent the probability it is absent (1 - the probability it is present) is used.
Step5: Using the trained classifier to predict
Now that we have a function to compute the likelihood given the parameters of a particular model (in this case the model parameters are the probabilities for each word), we have three models to compare, one for each genre. Given a test song we compute the three likelihoods and select the largest. We can randomly select a track from the country rows and then apply our predict function to see what it does. If you run the cell multiple times you will see that for most country tracks the prediction is correct but mistakes are made occasionally.
Step6: Performing a simple evaluation of our classifier
We can now write a function that given a test set and associated ground truth lables runs our Bernoulli classifier and calculates the associated classification accuracy. We can now check how well the classifier does for each subset of the data corresponding to the three genres. Using the data used to trained the classifier for testing as we do here is a methodological mistake and in a more realistic scenario or application a separate dataset would be used for testing and the processing could be repeated multiple times using a scheme like k-fold cross-validation. As the purpose of this notebook is to illustrate how probabilities are used to create a Naive Bayes classifier I don't bother with that. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pickle
import numpy as np
# load some lyrics bag of words data, binarize, separate matrix rows by genre
data = np.load('data/data.npz')
a = data['arr_0']
a[a > 0] = 1
labels = np.load('data/labels.npz')
labels = labels['arr_0']
dictionary = pickle.load(open('data/dictionary.pck','rb'), encoding='latin1')
word_indices = [ 41, 1465, 169, 217, 1036, 188, 260, 454, 173, 728, 163,
151, 107, 142, 90, 141, 161, 131, 86, 73, 165, 133,
84, 244, 153, 126, 137, 119, 80, 224]
words = [dictionary[r] for r in word_indices]
# binary row vectors separate by genre (rap, rock, country)
ra_rows = a[0:1000,:]
ro_rows = a[1000:2000,:]
co_rows = a[2000:3000,:]
print(ra_rows.shape, ro_rows.shape, co_rows.shape)
plt.imshow(a, aspect='auto', cmap='gray')
Explanation: Bayesian Classification and Naive Bayes for genre classification using lyrics
In this notebook we look at the problem of classifying songs to three genres (rap, rock and country) based on a simple binary bag of words representation. First we load the data and then we take a look at it. Using our implementation of discrete random variables we generate new random songs. Finally we show how classification can be performed using Bayes Rule. The data comes from the lyrics of songs from the Million Song DataSet and was created for an assignment in my course on MIR.
Data layout and visualization
The data layout and the way the classifier is implemented is not general and not optimal but done for pedagogical purposes. Each genre consists of 1000 tracks and the matrix containing the data is ordered by genre. That way the instances corresponding to a genre can easily be found by the index without having to check the class label as would be the case with a general classifier.
We have created a dictionary of 30 words by taking the 10 "best" words based on tf-idf score for each genre. Each track is represented by a binary vector (a row in the matrix) with ones for dictionary words that are in the track and 0 for words that are not. So the matrix is 3000 instances (3 * 1000 per genre) by 30 for each word in the dictionary. When visualized one can observe the block structure that shows that the rap tracks have a lot of words from the first 10 words in the dictionary that are characteristic of rap.
End of explanation
# calculate word counts for each genre
word_probs_ra = (ra_rows.sum(axis=0).astype(float) + 1.0) / (len(ra_rows)+1.0)
word_probs_ro = (ro_rows.sum(axis=0).astype(float) + 1.0) / (len(ro_rows)+1.0)
word_probs_co = (co_rows.sum(axis=0).astype(float) + 1.0) / (len(co_rows)+1.0)
# Let's look at the word probabilities for rap music
for w in zip(word_probs_ra, words):
print(w)
Explanation: Calculating the 30-dimensional word probability vector for each genre
Let's calculate the word probability vector for each genre and then look at the most probable words for each genre in our data as well as how particular songs are represented as bag of words. We can calculate the probabilities of each word in the dictionary for the songs in each genre by summing the columns of the part of the matrix that corresponds to each genre. As some words might not appear at all I have added 1.0 to both the numerator and denominator. This is a simple form of what's called additive smoothing which is a common technique to avoid zeros for any class conditional probabilities that would lead to the whole likelihood being zero.
End of explanation
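# In symbols, the smoothed estimate computed above for word w in genre g is
#   p_hat[g, w] = (1 + number of genre-g tracks containing w) / (1 + number of genre-g tracks)
# so no class-conditional word probability can ever be exactly zero.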
#let's look at the bag of words for three particular songs
track_id = 20
print(track_id)
print("RAP for trackid:",[words[i] for i,r in enumerate(ra_rows[track_id]) if r==1])
print("ROCK for trackid:",[words[i] for i,r in enumerate(ro_rows[track_id]) if r==1])
print("COUNTRY for trackid:",[words[i] for i,r in enumerate(co_rows[track_id]) if r==1])
track_id = 250
print(track_id)
print("RAP for trackid:",[words[i] for i,r in enumerate(ra_rows[track_id]) if r==1])
print("ROCK for trackid:",[words[i] for i,r in enumerate(ro_rows[track_id]) if r==1])
print("COUNTRY for trackid:",[words[i] for i,r in enumerate(co_rows[track_id]) if r==1])
# let's look at the k most probable words for each genre based on the data we have
k = 5
[[words[x] for x in np.argpartition(word_probs_ra, -k)[-k:]],
[words[x] for x in np.argpartition(word_probs_ro, -k)[-k:]],
[words[x] for x in np.argpartition(word_probs_co, -k)[-k:]]]
Explanation: Checking out the words in some songs using the binary representation
Each row of the feature matrix contains ones for each word that is present in the song.
We can view the words of any particular song by mapping these ones using the dictionary of words.
Let's view the words in the 20th track (row of the matrix) of each genre and then look at track 250.
End of explanation
print('Random rap', [w for (i,w) in enumerate(words) if np.greater(word_probs_ra, np.random.rand(30))[i]])
print('Random rock', [w for (i,w) in enumerate(words) if np.greater(word_probs_ro, np.random.rand(30))[i]])
print('Random country', [w for (i,w) in enumerate(words) if np.greater(word_probs_co, np.random.rand(30))[i]])
Explanation: Generating random songs based on our simplified representation
Now let's generate some random songs represented as bag of words using the calculated word probabilities for each genre. This way we can understand better the assumptions and simplifications of this model. I simply generate 30 random numbers and then, for a particular genre, if the class-conditional probability of a word is greater than the corresponding random number, that word is selected for generation. This gives us a clear idea of what assumptions this Bernoulli Naive Bayes classifier makes. Running the cell multiple times shows the variation we get from this very simple model.
End of explanation
# calculate likelihood separately for each word
# using naive bayes assumption and multiply
# typically a sum of log-likelihoods is used
# rather than a multiplication.
def likelihood(test_song, word_probs_for_genre):
probability_product = 1.0
for (i,w) in enumerate(test_song):
if (w==1):
probability = word_probs_for_genre[i]
else:
probability = 1.0 - word_probs_for_genre[i]
probability_product *= probability
return probability_product
Explanation: Using the calculated word probabilities to make a classifier
Now let's look at classifying songs using a naive Bayes Bernoulli classifier. When the representation is binary vectors indicating absence or presence of words it is called a Bernoulli Naive Bayes. If the number of times a word appears in a document affects the classification it is called a Multinomial text classifier.
To make a classification decision we simply calculate the likelihood for each genre independently by taking the products of the genre dependent word probabilities. The genre with the highest likelihood is selected as the predicted class. In a more realistic implementation log-likelihoods would be used to avoid problems with small numbers. Notice that when a word is absent the probability it is absent (1 - the probability it is present) is used.
End of explanation
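# The explanation above notes that realistic implementations use log-likelihoods to
# avoid underflow; a minimal sketch of that variant with the same inputs:
def log_likelihood(test_song, word_probs_for_genre):
    total = 0.0
    for (i, w) in enumerate(test_song):
        p = word_probs_for_genre[i] if w == 1 else 1.0 - word_probs_for_genre[i]
        total += np.log(p)
    return total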
def predict(test_song):
scores = [likelihood(test_song, word_probs_ra),
likelihood(test_song, word_probs_ro),
likelihood(test_song, word_probs_co)]
labels = ['rap', 'rock', 'country']
return labels[np.argmax(scores)]
# predict a random country track
track_id = np.random.randint(1000)
print("Random track id", track_id)
test_song = co_rows[track_id]
print(predict(test_song))
Explanation: Using the trained classifier to predict
Now that we have a function to compute the likelihood given the parameters of a particular model (in this case the model parameters are the probabilities for each word), we have three models to compare, one for each genre. Given a test song we compute the three likelihoods and select the largest. We can randomly select a track from the country rows and then apply our predict function to see what it does. If you run the cell multiple times you will see that for most country tracks the prediction is correct but mistakes are made occasionally.
End of explanation
def predict_set(test_set, ground_truth_label):
score = 0
for r in test_set:
if predict(r) == ground_truth_label:
score += 1
# convert to percentage
return score / 10.0
# Let's evaluate how well our classifier does on the training set
# A more proper evaluation would utilize cross-validation
print("Rap accuracy% = ", predict_set(ra_rows, 'rap'))
print("Rock accuracy% = ", predict_set(ro_rows, 'rock'))
print("Country accuracy% = ", predict_set(co_rows, 'country'))
Explanation: Performing a simple evaluation of our classifier
We can now write a function that, given a test set and associated ground truth labels, runs our Bernoulli classifier and calculates the associated classification accuracy. We can now check how well the classifier does for each subset of the data corresponding to the three genres. Using the data used to train the classifier for testing, as we do here, is a methodological mistake, and in a more realistic scenario or application a separate dataset would be used for testing and the processing could be repeated multiple times using a scheme like k-fold cross-validation. As the purpose of this notebook is to illustrate how probabilities are used to create a Naive Bayes classifier I don't bother with that.
End of explanation |
295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data exploration
Step1: age, education_num, hours_per_week, fnlwgt seem like good candidates as features. Not much information in capital_gain, capital_loss.
Some routine stuff
Convert objects to categories
Drop duplicates
Drop NA's - we can potentially impute these values. But always try out the simpler alternative before making it too complicated
Step2: Let's clean some data
Step3: Intuition 1
Step4: Above plot shows percentage of population with respect to education and income, and it seems people with Masters and PhD tend to earn more (more people are in the >50K bucket).
Intuition 2
Step5: First plot shows distribution of age with respect to income <= 50K. Age is used as a proxy for experience. Assumption here is people continue to work as they age and acquire more skills in the process. As per intuition, the number of people making less than 50K decreases with age.
Second plot shows income > 50K. More interestingly, data shows a peak around 45. This indicates either there aren't enough people of age 45+ earning more than 50K in the data, or income decreases as people approach retirement.
Feature construction
Step6: Model fitting
Step7: Model/Feature Evaluation | Python Code:
def read_data(path):
return pd.read_csv(path,
index_col=False,
skipinitialspace=True,
names=['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'sex',
'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
'income']
)
train = read_data('./data/adult/adult.data')
test = read_data('./data/adult/adult.test')
train = train.append(test)
train.head()
train.hist(figsize=(12, 9))
Explanation: Data exploration
End of explanation
# for column in train.select_dtypes(['object']).columns:
# train[column] = train[column].astype('category')
## Check for duplicates, nulls
train.drop_duplicates(inplace=True)
train.dropna(inplace=True)
print(any(train.duplicated()))
print(train.isnull().any())
Explanation: age, education_num, hours_per_week, fnlwgt seem like good candidates as features. Not much information in capital_gain, capital_loss.
Some routine stuff
Convert objects to categories
Drop duplicates
Drop NA's - we can potentially impute these values. But always try out the simpler alternative before making it too complicated :)
End of explanation
train.income.loc[train.income == '>50K.'] = '>50K'
train.income.loc[train.income == '<=50K.'] = '<=50K'
train.income.value_counts()
Explanation: Let's clean some data
End of explanation
education_subset = train.groupby(['education_num', 'income']).size().reset_index()
education_subset.columns = ['education_num', 'income', 'count']
func = lambda x: float(x['count']) / train[train.education_num == x.education_num].count()[0]
education_subset['percentage'] = education_subset.apply(func, axis=1)
education_subset['education + income'] = education_subset.apply(lambda x: '%s, %s' % (x.education_num, x.income), axis=1)
education_subset.sort_values('education_num').plot(kind='barh', x='education + income', y='percentage', figsize=(12,12))
Explanation: Intuition 1:
Higher education should result in more income.
End of explanation
train.groupby('income').hist(figsize=(15,12))
Explanation: Above plot shows percentage of population with respect to education and income, and it seems people with Masters and PhD tend to earn more (more people are in the >50K bucket).
Intuition 2:
People earn more as they get more experience.
End of explanation
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
lencoder = LabelEncoder()
oencoder = OneHotEncoder()
features = pd.DataFrame()
features['age'] = train['age']
features['education_num'] = train['education_num']
features['hours_per_week'] = train['hours_per_week']
features['fnlwgt'] = train['fnlwgt']
features['sex'] = lencoder.fit_transform(train.sex)
features['occupation'] = lencoder.fit_transform(train.occupation)
features['income'] = train.income
features['income'] = lencoder.fit_transform(features['income'])
features.head()
Explanation: First plot shows distribution of age with respect to income <= 50K. Age is used as a proxy for experience. Assumption here is people continue to work as they age and acquire more skills in the process. As per intuition, the number of people making less than 50K decreases with age.
Second plot shows income > 50K. More interestingly, data shows a peak around 45. This indicates either there aren't enough people of age 45+ earning more than 50K in the data, or income decreases as people approach retirement.
Feature construction
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(features.drop('income', axis=1), features.income)
model = RandomForestClassifier()
model.fit(x_train, y_train)
y_hat = model.predict(x_test)
Explanation: Model fitting
End of explanation
from sklearn.metrics import confusion_matrix, accuracy_score
accuracy_score(y_test, y_hat)
confusion_matrix(y_test, y_hat)
Explanation: Model/Feature Evaluation
End of explanation |
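# Follow-up sketch for the feature evaluation above: impurity-based importances from
# the fitted forest (assumes x_train kept its DataFrame column names).
importances = pd.Series(model.feature_importances_, index=x_train.columns)
print(importances.sort_values(ascending=False))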
296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Dogs vs Cats Image Classification With Image Augmentation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Data Loading
To build our image classifier, we begin by downloading the dataset. The dataset we are using is a filtered version of <a href="https
Step3: The dataset we have downloaded has following directory structure.
<pre style="font-size
Step4: Understanding our data
Let's look at how many cats and dogs images we have in our training and validation directory
Step5: Setting Model Parameters
For convenience, let us set up variables that will be used later while pre-processing our dataset and training our network.
Step6: After defining our generators for training and validation images, flow_from_directory method will load images from the disk and will apply rescaling and will resize them into required dimensions using single line of code.
Data Augmentation
Overfitting often occurs when we have a small number of training examples. One way to fix this problem is to augment our dataset so that it has sufficient number and variety of training examples. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples through random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This exposes the model to more aspects of the data, allowing it to generalize better.
In tf.keras we can implement this using the same ImageDataGenerator class we used before. We can simply pass different transformations we would want to our dataset as a form of arguments and it will take care of applying it to the dataset during our training process.
To start off, let's define a function that can display an image, so we can see the type of augmentation that has been performed. Then, we'll look at specific augmentations that we'll use during training.
Step7: Flipping the image horizontally
We can begin by randomly applying horizontal flip augmentation to our dataset and seeing how individual images will look after the transformation. This is achieved by passing horizontal_flip=True as an argument to the ImageDataGenerator class.
Step8: To see the transformation in action, let's take one sample image from our training set and repeat it five times. The augmentation will be randomly applied (or not) to each repetition.
Step9: Rotating the image
The rotation augmentation will randomly rotate the image up to a specified number of degrees. Here, we'll set it to 45.
Step10: To see the transformation in action, let's once again take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.
Step11: Applying Zoom
We can also apply Zoom augmentation to our dataset, zooming images up to 50% randomly.
Step12: One more time, take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.
Step13: Putting it all together
We can apply all these augmentations, and even others, with just one line of code, by passing the augmentations as arguments with proper values.
Here, we have applied rescaling, a 40-degree rotation range, width shift, height shift, shear, zoom, and horizontal flip augmentation to our training images.
Step14: Let's visualize what a single image would look like five different times, when we pass these augmentations randomly to our dataset.
Step15: Creating Validation Data generator
Generally, we only apply data augmentation to our training examples, since the original images should be representative of what our model needs to manage. So, in this case we are only rescaling our validation images and converting them into batches using ImageDataGenerator.
Step16: Model Creation
Define the model
The model consists of four convolution blocks with a max pool layer in each of them.
Before the final Dense layers, we're also applying a Dropout probability of 0.5. It means that 50% of the values coming into the Dropout layer will be set to zero. This helps to prevent overfitting.
Then we have a fully connected layer with 512 units, with a relu activation function. The model outputs a logit for each of the two classes (dogs and cats); the softmax that converts these into class probabilities is folded into the loss via from_logits=True.
Step17: Compiling the model
As usual, we will use the adam optimizer. Since the model outputs unscaled logits and the labels are class indices, we'll use sparse_categorical_crossentropy with from_logits=True as the loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so we are passing in the metrics argument.
Step18: Model Summary
Let's look at all the layers of our network using summary method.
Step19: Train the model
It's time we train our network.
Since our batches are coming from a generator (ImageDataGenerator), we'll use fit_generator instead of fit.
Step20: Visualizing results of the training
We'll now visualize the results we get after training our network. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
Explanation: Dogs vs Cats Image Classification With Image Augmentation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l05c02_dogs_vs_cats_with_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l05c02_dogs_vs_cats_with_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this tutorial, we will discuss how to classify images into pictures of cats or pictures of dogs. We'll build an image classifier using tf.keras.Sequential model and load data using tf.keras.preprocessing.image.ImageDataGenerator.
Specific concepts that will be covered:
In the process, we will build practical experience and develop intuition around the following concepts
Building data input pipelines using the tf.keras.preprocessing.image.ImageDataGenerator class — How can we efficiently work with data on disk to interface with our model?
Overfitting - what is it, how to identify it, and how can we prevent it?
Data Augmentation and Dropout - Key techniques to fight overfitting in computer vision tasks that we will incorporate into our data pipeline and image classifier model.
We will follow the general machine learning workflow:
Examine and understand data
Build an input pipeline
Build our model
Train our model
Test our model
Improve our model/Repeat the process
<hr>
Before you begin
Before running the code in this notebook, reset the runtime by going to Runtime -> Reset all runtimes in the menu above. If you have been working through several notebooks, this will help you avoid reaching Colab's memory limits.
Importing packages
Let's start by importing required packages:
os — to read files and directory structure
numpy — for some matrix math outside of TensorFlow
matplotlib.pyplot — to plot the graph and display images in our training and validation data
End of explanation
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
zip_dir = tf.keras.utils.get_file('cats_and_dogs_filterted.zip', origin=_URL, extract=True)
Explanation: Data Loading
To build our image classifier, we begin by downloading the dataset. The dataset we are using is a filtered version of <a href="https://www.kaggle.com/c/dogs-vs-cats/data" target="_blank">Dogs vs. Cats</a> dataset from Kaggle (ultimately, this dataset is provided by Microsoft Research).
In previous Colabs, we've used <a href="https://www.tensorflow.org/datasets" target="_blank">TensorFlow Datasets</a>, which is a very easy and convenient way to use datasets. In this Colab however, we will make use of the class tf.keras.preprocessing.image.ImageDataGenerator which will read data from disk. We therefore need to directly download Dogs vs. Cats from a URL and unzip it to the Colab filesystem.
End of explanation
base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
Explanation: The dataset we have downloaded has following directory structure.
<pre style="font-size: 10.0pt; font-family: Arial; line-height: 2; letter-spacing: 1.0pt;" >
<b>cats_and_dogs_filtered</b>
|__ <b>train</b>
|______ <b>cats</b>: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
|______ <b>dogs</b>: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ <b>validation</b>
|______ <b>cats</b>: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
|______ <b>dogs</b>: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
</pre>
We'll now assign variables with the proper file path for the training and validation sets.
End of explanation
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
Explanation: Understanding our data
Let's look at how many cats and dogs images we have in our training and validation directory
End of explanation
BATCH_SIZE = 100
IMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels
Explanation: Setting Model Parameters
For convenience, let us set up variables that will be used later while pre-processing our dataset and training our network.
End of explanation
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip(images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
Explanation: After defining our generators for training and validation images, the flow_from_directory method will load images from the disk, apply rescaling, and resize them into the required dimensions using a single line of code.
Data Augmentation
Overfitting often occurs when we have a small number of training examples. One way to fix this problem is to augment our dataset so that it has a sufficient number and variety of training examples. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples through random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This exposes the model to more aspects of the data, allowing it to generalize better.
In tf.keras we can implement this using the same ImageDataGenerator class we used before. We can simply pass different transformations we would want to our dataset as a form of arguments and it will take care of applying it to the dataset during our training process.
To start off, let's define a function that can display an image, so we can see the type of augmentation that has been performed. Then, we'll look at specific augmentations that we'll use during training.
End of explanation
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE))
Explanation: Flipping the image horizontally
We can begin by randomly applying horizontal flip augmentation to our dataset and seeing how individual images will look after the transformation. This is achieved by passing horizontal_flip=True as an argument to the ImageDataGenerator class.
End of explanation
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: To see the transformation in action, let's take one sample image from our training set and repeat it five times. The augmentation will be randomly applied (or not) to each repetition.
End of explanation
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE))
Explanation: Rotating the image
The rotation augmentation will randomly rotate the image up to a specified number of degrees. Here, we'll set it to 45.
End of explanation
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: To see the transformation in action, let's once again take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.
End of explanation
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE))
Explanation: Applying Zoom
We can also apply Zoom augmentation to our dataset, zooming images up to 50% randomly.
End of explanation
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: One more time, take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.
End of explanation
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_data_gen = image_gen_train.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE),
class_mode='binary')
Explanation: Putting it all together
We can apply all these augmentations, and even others, with just one line of code, by passing the augmentations as arguments with proper values.
Here, we have applied rescaling, a 40-degree rotation range, width shift, height shift, shear, zoom, and horizontal flip augmentation to our training images.
End of explanation
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: Let's visualize what a single image would look like five different times, when we pass these augmentations randomly to our dataset.
End of explanation
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=BATCH_SIZE,
directory=validation_dir,
target_size=(IMG_SHAPE, IMG_SHAPE),
class_mode='binary')
Explanation: Creating Validation Data generator
Generally, we only apply data augmentation to our training examples, since the original images should be representative of what our model needs to manage. So, in this case we are only rescaling our validation images and converting them into batches using ImageDataGenerator.
End of explanation
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(2)
])
Explanation: Model Creation
Define the model
The model consists of four convolution blocks with a max pool layer in each of them.
Before the final Dense layers, we're also applying a Dropout probability of 0.5. It means that 50% of the values coming into the Dropout layer will be set to zero. This helps to prevent overfitting.
Then we have a fully connected layer with 512 units, with a relu activation function. The model outputs a logit for each of the two classes (dogs and cats); the softmax that converts these into class probabilities is folded into the loss via from_logits=True.
End of explanation
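A quick way to see the Dropout claim in isolation (a sketch, not part of the original notebook): at training time, a rate-0.5 Dropout layer zeroes roughly half of the incoming values and rescales the survivors by 1/(1-rate).
import tensorflow as tf
drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 10))
print(drop(x, training=True).numpy())  # roughly half zeros, the remaining values scaled to 2.0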
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Compiling the model
As usual, we will use the adam optimizer. Since the model outputs unscaled logits and the labels are class indices, we'll use sparse_categorical_crossentropy with from_logits=True as the loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so we are passing in the metrics argument.
End of explanation
model.summary()
Explanation: Model Summary
Let's look at all the layers of our network using summary method.
End of explanation
epochs=100
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
epochs=epochs,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(BATCH_SIZE)))
)
Explanation: Train the model
It's time we train our network.
Since our batches are coming from a generator (ImageDataGenerator), we'll use fit_generator instead of fit.
End of explanation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
Explanation: Visualizing results of the training
We'll now visualize the results we get after training our network.
End of explanation |
297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 1
Test whether the series is convergent or divergent
Step1: Let $a_n = \sum_{n=1}^{\infty} \frac{1}{n + 7^n}$.
Could $a_n$ be a Combination of Series (i.e. the sum of two series)? | Python Code:
import mpmath  # sympy.mpmath was removed in SymPy 1.0; use the standalone mpmath package
from matplotlib import pyplot as plt
%matplotlib inline
# Customize figure size
plt.rcParams['figure.figsize'] = 25, 15
#plt.rcParams['lines.linewidth'] = 1
#plt.rcParams['lines.color'] = 'g'
plt.rcParams['font.family'] = 'monospace'
plt.rcParams['font.size'] = '16.0'
plt.rcParams['text.hinting'] = 'either'
f = lambda x: 1/(x + 7**x)
mpmath.plot([f], xlim=[-5,25], ylim=[0,25], points=500)
# To check your work, use mpmath.nsum()
# This gives the sum of the infinite series (if the series converges)
infty = mpmath.inf
sum = mpmath.nsum(f, [1, infty])
print('The sum of the series = {}'.format(sum))
Explanation: Problem 1
Test whether the series is convergent or divergent:
$$\sum_{n=1}^{\infty} \frac{1}{n+7^n}$$
End of explanation
f = lambda x: 1/x
g = lambda x: 1/7**x
mpmath.plot([f,g], xlim=[-5,25], ylim=[0,25], points=500)
# Check that sum of f_n plus g_n = a_n
# Note: sum(1/n) diverges (harmonic series), so this combination check cannot converge.
sum = mpmath.nsum(f, [1, infty]) + mpmath.nsum(g, [1, infty])
print('The sum of the series = {}'.format(sum))
Explanation: Let $a_n = \sum_{n=1}^{\infty} \frac{1}{n + 7^n}$.
Could $a_n$ be a Combination of Series (i.e. the sum of two series)?
End of explanation |
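A note not in the original steps: convergence also follows analytically from the direct comparison test, since $0 < \frac{1}{n+7^n} \le \frac{1}{7^n}$ for every $n \ge 1$ and $\sum_{n=1}^{\infty} (1/7)^n$ is a convergent geometric series with sum $1/6$. A quick numerical check of that bounding series:
import mpmath
geometric_bound = mpmath.nsum(lambda x: 1/7**x, [1, mpmath.inf])
print('Sum of the bounding geometric series = {}'.format(geometric_bound))  # approximately 0.1667 = 1/6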
298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jump_to opening comments and overview of lesson 10
Callbacks
Callbacks as GUI events
Jump_to lesson 10 video
Step1: From the ipywidget docs
Step2: NB
Step3: Lambdas and partials
Jump_to lesson 10 video
Step4: Callbacks as callable classes
Jump_to lesson 10 video
Step5: Multiple callback funcs; *args and **kwargs
Jump_to lesson 10 video
Step6: NB
Step7: Modifying behavior
Jump_to lesson 10 video
Step8: __dunder__ thingies
Anything that looks like __this__ is, in some way, special. Python, or some library, can define some functions that they will call at certain documented times. For instance, when your class is setting up a new object, python will call __init__. These are defined as part of the python data model.
For instance, if python sees +, then it will call the special method __add__. If you try to display an object in Jupyter (or lots of other places in Python) it will call __repr__.
Jump_to lesson 10 video
Step9: Special methods you should probably know about (see data model link above) are
Step10: Oops. We can't do that. Because by definition the positives and negatives cancel out. So we can fix that in one of (at least) two ways
Step11: But the first of these is now a totally different scale, since we squared. So let's undo that at the end.
Step12: They're still different. Why?
Note that we have one outlier (18). In the version where we square everything, it makes that much bigger than everything else.
(t-m).pow(2).mean() is refered to as variance. It's a measure of how spread out the data is, and is particularly sensitive to outliers.
When we take the sqrt of the variance, we get the standard deviation. Since it's on the same kind of scale as the original data, it's generally more interpretable. However, since sqrt(1)==1, it doesn't much matter which we use when talking about unit variance for initializing neural nets.
(t-m).abs().mean() is referred to as the mean absolute deviation. It isn't used nearly as much as it deserves to be, because mathematicians don't like how awkward it is to work with. But that shouldn't stop us, because we have computers and stuff.
Here's a useful thing to note about variance
Step13: You can see why these are equal if you want to work thru the algebra. Or not.
But, what's important here is that the latter is generally much easier to work with. In particular, you only have to track two things
Step14: Let's see that in code. So now we need two vectors.
Step15: It's generally more conveniently defined like so
Step16: From now on, you're not allowed to look at an equation (or especially type it in LaTeX) without also typing it in Python and actually calculating some values. Ideally, you should also plot some values.
Finally, here is the Pearson correlation coefficient
Step17: It's just a scaled version of the same thing. Question | Python Code:
import ipywidgets as widgets
def f(o): print('hi')
Explanation: Jump_to opening comments and overview of lesson 10
Callbacks
Callbacks as GUI events
Jump_to lesson 10 video
End of explanation
w = widgets.Button(description='Click me')
w
w.on_click(f)
Explanation: From the ipywidget docs:
the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register function to be called when the button is clicked
End of explanation
from time import sleep
def slow_calculation():
res = 0
for i in range(5):
res += i*i
sleep(1)
return res
slow_calculation()
def slow_calculation(cb=None):
res = 0
for i in range(5):
res += i*i
sleep(1)
if cb: cb(i)
return res
def show_progress(epoch):
print(f"Awesome! We've finished epoch {epoch}!")
slow_calculation(show_progress)
Explanation: NB: When callbacks are used in this way they are often called "events".
Did you know that you can create interactive apps in Jupyter with these widgets? Here's an example from plotly:
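That plotly demo is not reproduced here; as a minimal stand-in (an assumption, not the linked example), ipywidgets' interact wires a slider to a callback in one line:
from ipywidgets import interact
def square(x): return x*x
interact(square, x=(0, 10))  # the slider calls `square` (our callback) on every change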
Creating your own callback
Jump_to lesson 10 video
End of explanation
slow_calculation(lambda o: print(f"Awesome! We've finished epoch {o}!"))
def show_progress(exclamation, epoch):
print(f"{exclamation}! We've finished epoch {epoch}!")
slow_calculation(lambda o: show_progress("OK I guess", o))
def make_show_progress(exclamation):
_inner = lambda epoch: print(f"{exclamation}! We've finished epoch {epoch}!")
return _inner
slow_calculation(make_show_progress("Nice!"))
def make_show_progress(exclamation):
# Leading "_" is generally understood to be "private"
def _inner(epoch): print(f"{exclamation}! We've finished epoch {epoch}!")
return _inner
slow_calculation(make_show_progress("Nice!"))
f2 = make_show_progress("Terrific")
slow_calculation(f2)
slow_calculation(make_show_progress("Amazing"))
from functools import partial
slow_calculation(partial(show_progress, "OK I guess"))
f2 = partial(show_progress, "OK I guess")
Explanation: Lambdas and partials
Jump_to lesson 10 video
End of explanation
class ProgressShowingCallback():
def __init__(self, exclamation="Awesome"): self.exclamation = exclamation
def __call__(self, epoch): print(f"{self.exclamation}! We've finished epoch {epoch}!")
cb = ProgressShowingCallback("Just super")
slow_calculation(cb)
Explanation: Callbacks as callable classes
Jump_to lesson 10 video
End of explanation
def f(*args, **kwargs): print(f"args: {args}; kwargs: {kwargs}")
f(3, 'a', thing1="hello")
Explanation: Multiple callback funcs; *args and **kwargs
Jump_to lesson 10 video
End of explanation
def slow_calculation(cb=None):
res = 0
for i in range(5):
if cb: cb.before_calc(i)
res += i*i
sleep(1)
if cb: cb.after_calc(i, val=res)
return res
class PrintStepCallback():
def __init__(self): pass
def before_calc(self, *args, **kwargs): print(f"About to start")
def after_calc (self, *args, **kwargs): print(f"Done step")
slow_calculation(PrintStepCallback())
class PrintStatusCallback():
def __init__(self): pass
def before_calc(self, epoch, **kwargs): print(f"About to start: {epoch}")
def after_calc (self, epoch, val, **kwargs): print(f"After {epoch}: {val}")
slow_calculation(PrintStatusCallback())
Explanation: NB: We've been guilty of over-using kwargs in fastai - it's very convenient for the developer, but is annoying for the end-user unless care is taken to ensure docs show all kwargs too. kwargs can also hide bugs (because it might not tell you about a typo in a param name). In R there's a very similar issue (R uses ... for the same thing), and matplotlib uses kwargs a lot too.
End of explanation
def slow_calculation(cb=None):
res = 0
for i in range(5):
if cb and hasattr(cb,'before_calc'): cb.before_calc(i)
res += i*i
sleep(1)
if cb and hasattr(cb,'after_calc'):
if cb.after_calc(i, res):
print("stopping early")
break
return res
class PrintAfterCallback():
def after_calc (self, epoch, val):
print(f"After {epoch}: {val}")
if val>10: return True
slow_calculation(PrintAfterCallback())
class SlowCalculator():
def __init__(self, cb=None): self.cb,self.res = cb,0
def callback(self, cb_name, *args):
if not self.cb: return
cb = getattr(self.cb,cb_name, None)
if cb: return cb(self, *args)
def calc(self):
for i in range(5):
self.callback('before_calc', i)
self.res += i*i
sleep(1)
if self.callback('after_calc', i):
print("stopping early")
break
class ModifyingCallback():
def after_calc (self, calc, epoch):
print(f"After {epoch}: {calc.res}")
if calc.res>10: return True
if calc.res<3: calc.res = calc.res*2
calculator = SlowCalculator(ModifyingCallback())
calculator.calc()
calculator.res
Explanation: Modifying behavior
Jump_to lesson 10 video
End of explanation
class SloppyAdder():
def __init__(self,o): self.o=o
def __add__(self,b): return SloppyAdder(self.o + b.o + 0.01)
def __repr__(self): return str(self.o)
a = SloppyAdder(1)
b = SloppyAdder(2)
a+b
Explanation: __dunder__ thingies
Anything that looks like __this__ is, in some way, special. Python, or some library, can define some functions that they will call at certain documented times. For instance, when your class is setting up a new object, python will call __init__. These are defined as part of the python data model.
For instance, if python sees +, then it will call the special method __add__. If you try to display an object in Jupyter (or lots of other places in Python) it will call __repr__.
Jump_to lesson 10 video
End of explanation
# torch and matplotlib are used below but not imported earlier in this excerpt
import torch
from matplotlib import pyplot as plt

t = torch.tensor([1.,2.,4.,18])
m = t.mean(); m
(t-m).mean()
Explanation: Special methods you should probably know about (see data model link above) are:
__getitem__
__getattr__
__setattr__
__del__
__init__
__new__
__enter__
__exit__
__len__
__repr__
__str__
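A tiny sketch (not from the lesson) exercising a few of these:
class MiniSeq():
    def __init__(self, items): self.items = list(items)
    def __getitem__(self, i): return self.items[i]         # enables ms[1]
    def __len__(self): return len(self.items)              # enables len(ms)
    def __repr__(self): return f"MiniSeq({self.items})"    # controls how ms is displayed
ms = MiniSeq([3, 1, 4])
ms[1], len(ms), ms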
Variance and stuff
Variance
Variance is the average of how far away each data point is from the mean. E.g.:
Jump_to lesson 10 video
End of explanation
(t-m).pow(2).mean()
(t-m).abs().mean()
Explanation: Oops. We can't do that. Because by definition the positives and negatives cancel out. So we can fix that in one of (at least) two ways:
End of explanation
(t-m).pow(2).mean().sqrt()
Explanation: But the first of these is now a totally different scale, since we squared. So let's undo that at the end.
End of explanation
(t-m).pow(2).mean(), (t*t).mean() - (m*m)
Explanation: They're still different. Why?
Note that we have one outlier (18). In the version where we square everything, it makes that much bigger than everything else.
(t-m).pow(2).mean() is referred to as variance. It's a measure of how spread out the data is, and is particularly sensitive to outliers.
When we take the sqrt of the variance, we get the standard deviation. Since it's on the same kind of scale as the original data, it's generally more interpretable. However, since sqrt(1)==1, it doesn't much matter which we use when talking about unit variance for initializing neural nets.
(t-m).abs().mean() is referred to as the mean absolute deviation. It isn't used nearly as much as it deserves to be, because mathematicians don't like how awkward it is to work with. But that shouldn't stop us, because we have computers and stuff.
Here's a useful thing to note about variance:
End of explanation
t
Explanation: You can see why these are equal if you want to work thru the algebra. Or not.
But, what's important here is that the latter is generally much easier to work with. In particular, you only have to track two things: the sum of the data, and the sum of squares of the data. Whereas in the first form you actually have to go thru all the data twice (once to calculate the mean, once to calculate the differences).
Let's go steal the LaTeX from Wikipedia:
$$\operatorname{E}\left[X^2 \right] - \operatorname{E}[X]^2$$
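A sketch (not from the lesson) of the single-pass idea: a running sum and sum of squares are enough to recover the variance without a second pass over the data.
s = s2 = 0.
n = 0
for x in t:
    s += x; s2 += x*x; n += 1
s2/n - (s/n)**2   # matches (t-t.mean()).pow(2).mean()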
Covariance and correlation
Here's how Wikipedia defines covariance:
$$\operatorname{cov}(X,Y) = \operatorname{E}{\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]}$$
Jump_to lesson 10 video
End of explanation
# `u` is twice `t`, plus a bit of randomness
u = t*2
u *= torch.randn_like(t)/10+0.95
plt.scatter(t, u);
prod = (t-t.mean())*(u-u.mean()); prod
prod.mean()
v = torch.randn_like(t)
plt.scatter(t, v);
((t-t.mean())*(v-v.mean())).mean()
Explanation: Let's see that in code. So now we need two vectors.
End of explanation
cov = (t*v).mean() - t.mean()*v.mean(); cov
Explanation: It's generally more conveniently defined like so:
$$\operatorname{E}\left[X Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right]$$
End of explanation
cov / (t.std() * v.std())
Explanation: From now on, you're not allowed to look at an equation (or especially type it in LaTeX) without also typing it in Python and actually calculating some values. Ideally, you should also plot some values.
Finally, here is the Pearson correlation coefficient:
$$\rho_{X,Y}= \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}$$
End of explanation
def log_softmax(x): return x - x.exp().sum(-1,keepdim=True).log()
Explanation: It's just a scaled version of the same thing. Question: Why is it scaled by standard deviation, and not by variance or mean or something else?
Softmax
Here's our final logsoftmax definition:
Jump_to lesson 10 video
End of explanation |
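A quick sanity check of this definition (a sketch; it assumes PyTorch's functional API is available):
import torch
import torch.nn.functional as F
x = torch.randn(3, 5)
print(torch.allclose(log_softmax(x), F.log_softmax(x, dim=-1)))  # expect True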
299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Simple TFX Pipeline for Vertex Pipelines
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
If you are not on Colab, you can restart runtime with following cell.
Step3: Login in to Google for this notebook
If you are running this notebook on Colab, authenticate with your user account
Step4: If you are on AI Platform Notebooks, authenticate with Google Cloud before
running the next section, by running
sh
gcloud auth login
in the Terminal window (which you can open via File > New in the
menu). You only need to do this once per notebook instance.
Check the package versions.
Step5: Set up variables
We will set up some variables used to customize the pipelines below. The following
information is required
Step6: Set gcloud to use your project.
Step7: Prepare example data
We will use the same
Palmer Penguins dataset
as
Simple TFX Pipeline Tutorial.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. We will build a classification model which predicts the
species of penguins.
We need to make our own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, we need to create a directory and copy the dataset to it
on GCS.
Step8: Take a quick look at the CSV file.
Step12: Create a pipeline
TFX pipelines are defined using Python APIs. We will define a pipeline which
consists of three components, CsvExampleGen, Trainer and Pusher. The pipeline
and model definition are almost the same as
Simple TFX Pipeline Tutorial.
The only difference is that we don't need to set metadata_connection_config
which is used to locate
ML Metadata database. Because
Vertex Pipelines uses a managed metadata service, users don't need to worry
about it, and we don't need to specify the parameter.
Before actually define the pipeline, we need to write a model code for the
Trainer component first.
Write model code.
We will use the same model code as in the
Simple TFX Pipeline Tutorial.
Step13: Copy the module file to GCS which can be accessed from the pipeline components.
Because model training happens on GCP, we need to upload this model definition.
Otherwise, you might want to build a container image including the module file
and use the image to run the pipeline.
Step15: Write a pipeline definition
We will define a function to create a TFX pipeline.
Step16: Run the pipeline on Vertex Pipelines.
We used LocalDagRunner which runs on local environment in
Simple TFX Pipeline Tutorial.
TFX provides multiple orchestrators to run your pipeline. In this tutorial we
will use the Vertex Pipelines together with the Kubeflow V2 dag runner.
We need to define a runner to actually run the pipeline. You will compile
your pipeline into our pipeline definition format using TFX APIs.
Step17: The generated definition file can be submitted using the Google Cloud aiplatform client.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
# Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2"
Explanation: Simple TFX Pipeline for Vertex Pipelines
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
<td><a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_simple.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Run in Google Cloud Vertex AI Workbench</a></td>
</table></div>
This notebook-based tutorial will create a simple TFX pipeline and run it using
Google Cloud Vertex Pipelines. This notebook is based on the TFX pipeline
we built in
Simple TFX Pipeline Tutorial.
If you are not familiar with TFX and you have not read that tutorial yet, you
should read it before proceeding with this notebook.
Google Cloud Vertex Pipelines helps you to automate, monitor, and govern
your ML systems by orchestrating your ML workflow in a serverless manner. You
can define your ML pipelines using Python with TFX, and then execute your
pipelines on Google Cloud. See
Vertex Pipelines introduction
to learn more about Vertex Pipelines.
This notebook is intended to be run on
Google Colab or on
AI Platform Notebooks. If you
are not using one of these, you can simply click "Run in Google Colab" button
above.
Set up
Before you run this notebook, ensure that you have the following:
- A Google Cloud Platform project.
- A Google Cloud Storage bucket. See
the guide for creating buckets.
- Enable
Vertex AI and Cloud Storage API.
Please see
Vertex documentation
to configure your GCP project further.
Install python packages
We will install required Python packages including TFX and KFP to author ML
pipelines and submit jobs to Vertex Pipelines.
End of explanation
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
If you are not on Colab, you can restart runtime with following cell.
End of explanation
import sys
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
Explanation: Login in to Google for this notebook
If you are running this notebook on Colab, authenticate with your user account:
End of explanation
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
Explanation: If you are on AI Platform Notebooks, authenticate with Google Cloud before
running the next section, by running
sh
gcloud auth login
in the Terminal window (which you can open via File > New in the
menu). You only need to do this once per notebook instance.
Check the package versions.
End of explanation
GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS
GOOGLE_CLOUD_REGION = '' # <--- ENTER THIS
GCS_BUCKET_NAME = '' # <--- ENTER THIS
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
Explanation: Set up variables
We will set up some variables used to customize the pipelines below. The following
information is required:
GCP Project id. See
Identifying your project id.
GCP Region to run pipelines. For more information about the regions that
Vertex Pipelines is available in, see the
Vertex AI locations guide.
Google Cloud Storage Bucket to store pipeline outputs.
Enter required values in the cell below before running it.
End of explanation
!gcloud config set project {GOOGLE_CLOUD_PROJECT}
PIPELINE_NAME = 'penguin-vertex-pipelines'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for input data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# This is the path where your model will be pushed for serving.
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
Explanation: Set gcloud to use your project.
End of explanation
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
Explanation: Prepare example data
We will use the same
Palmer Penguins dataset
as
Simple TFX Pipeline Tutorial.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. We will build a classification model which predicts the
species of penguins.
We need to make our own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, we need to create a directory and copy the dataset to it
on GCS.
End of explanation
!gsutil cat {DATA_ROOT}/penguins_processed.csv | head
Explanation: Take a quick look at the CSV file.
End of explanation
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
},
_LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
Explanation: Create a pipeline
TFX pipelines are defined using Python APIs. We will define a pipeline which
consists of three components, CsvExampleGen, Trainer and Pusher. The pipeline
and model definition are almost the same as
Simple TFX Pipeline Tutorial.
The only difference is that we don't need to set metadata_connection_config
which is used to locate
ML Metadata database. Because
Vertex Pipelines uses a managed metadata service, users don't need to worry
about it, and we don't need to specify the parameter.
Before actually define the pipeline, we need to write a model code for the
Trainer component first.
Write model code.
We will use the same model code as in the
Simple TFX Pipeline Tutorial.
End of explanation
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
Explanation: Copy the module file to GCS which can be accessed from the pipeline components.
Because model training happens on GCP, we need to upload this model definition.
Otherwise, you might want to build a container image including the module file
and use the image to run the pipeline.
End of explanation
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified because we don't need `metadata_path` argument.
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
module_file: str, serving_model_dir: str,
) -> tfx.dsl.Pipeline:
Creates a three component penguin pipeline with TFX.
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
# Following three components will be included in the pipeline.
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components)
Explanation: Write a pipeline definition
We will define a function to create a TFX pipeline.
End of explanation
# docs_infra: no_execute
import os
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
# Following function will write the pipeline definition to PIPELINE_DEFINITION_FILE.
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
serving_model_dir=SERVING_MODEL_DIR))
Explanation: Run the pipeline on Vertex Pipelines.
We used LocalDagRunner which runs on local environment in
Simple TFX Pipeline Tutorial.
TFX provides multiple orchestrators to run your pipeline. In this tutorial we
will use the Vertex Pipelines together with the Kubeflow V2 dag runner.
We need to define a runner to actually run the pipeline. You will compile
your pipeline into our pipeline definition format using TFX APIs.
End of explanation
# docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
import logging
logging.getLogger().setLevel(logging.INFO)
aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)
job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
display_name=PIPELINE_NAME)
job.submit()
Explanation: The generated definition file can be submitted using the Google Cloud aiplatform client.
End of explanation |