Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Intro to Autoencoders
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Load the dataset
To start, you will train the basic autoencoder using the Fashion MNIST dataset. Each image in this dataset is 28x28 pixels.
Step3: First example
Step4: Train the model using x_train as both the input and the target. The encoder will learn to compress the dataset from 784 dimensions to the latent space, and the decoder will learn to reconstruct the original images.
Step5: Now that the model is trained, let's test it by encoding and decoding images from the test set.
Step6: Second example
Step7: Adding random noise to the images
Step8: Plot the noisy images.
Step9: Define a convolutional autoencoder
In this example, you will train a convolutional autoencoder using Conv2D layers in the encoder, and Conv2DTranspose layers in the decoder.
Step10: Let's take a look at a summary of the encoder. Notice how the images are downsampled from 28x28 to 7x7.
Step11: The decoder upsamples the images back from 7x7 to 28x28.
Step12: Plotting both the noisy images and the denoised images produced by the autoencoder.
Step13: Third example
Step14: Normalize the data to [0,1].
Step15: You will train the autoencoder using only the normal rhythms, which are labeled in this dataset as 1. Separate the normal rhythms from the abnormal rhythms.
Step16: Plot a normal ECG.
Step17: Plot an anomalous ECG.
Step18: Build the model
Step19: Notice that the autoencoder is trained using only the normal ECGs, but is evaluated using the full test set.
Step20: You will soon classify an ECG as anomalous if the reconstruction error is greater than one standard deviation from the normal training examples. First, let's plot a normal ECG from the training set, the reconstruction after it's encoded and decoded by the autoencoder, and the reconstruction error.
Step21: Create a similar plot, this time for an anomalous test example.
Step22: Detect anomalies
Detect anomalies by calculating whether the reconstruction loss is greater than a fixed threshold. In this tutorial, you will calculate the mean absolute error (MAE) for normal examples from the training set, then classify future examples as anomalous if the reconstruction error is more than one standard deviation above the mean error of the training set.
Plot the reconstruction error on normal ECGs from the training set
Step23: Choose a threshold value that is one standard deviation above the mean.
Step24: Note
Step25: Classify an ECG as an anomaly if the reconstruction error is greater than the threshold. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model
Explanation: Intro to Autoencoders
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/generative/autoencoder">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/autoencoder.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/autoencoder.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/autoencoder.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial introduces autoencoders with three examples: the basics, image denoising, and anomaly detection.
An autoencoder is a special type of neural network that is trained to copy its input to its output. For example, given an image of a handwritten digit, an autoencoder first encodes the image into a lower dimensional latent representation, then decodes the latent representation back to an image. An autoencoder learns to compress the data while minimizing the reconstruction error.
To learn more about autoencoders, please consider reading chapter 14 from Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
Import TensorFlow and other libraries
End of explanation
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
print (x_train.shape)
print (x_test.shape)
Explanation: Load the dataset
To start, you will train the basic autoencoder using the Fashion MNIST dataset. Each image in this dataset is 28x28 pixels.
End of explanation
latent_dim = 64
class Autoencoder(Model):
def __init__(self, latent_dim):
super(Autoencoder, self).__init__()
self.latent_dim = latent_dim
self.encoder = tf.keras.Sequential([
layers.Flatten(),
layers.Dense(latent_dim, activation='relu'),
])
self.decoder = tf.keras.Sequential([
layers.Dense(784, activation='sigmoid'),
layers.Reshape((28, 28))
])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Autoencoder(latent_dim)
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
Explanation: First example: Basic autoencoder
Define an autoencoder with two Dense layers: an encoder, which compresses the images into a 64 dimensional latent vector, and a decoder, that reconstructs the original image from the latent space.
To define your model, use the Keras Model Subclassing API.
End of explanation
autoencoder.fit(x_train, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test, x_test))
Explanation: Train the model using x_train as both the input and the target. The encoder will learn to compress the dataset from 784 dimensions to the latent space, and the decoder will learn to reconstruct the original images.
End of explanation
encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i])
plt.title("original")
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i])
plt.title("reconstructed")
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Now that the model is trained, let's test it by encoding and decoding images from the test set.
End of explanation
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
print(x_train.shape)
Explanation: Second example: Image denoising
An autoencoder can also be trained to remove noise from images. In the following section, you will create a noisy version of the Fashion MNIST dataset by applying random noise to each image. You will then train an autoencoder using the noisy image as input, and the original image as the target.
Let's reimport the dataset to omit the modifications made earlier.
End of explanation
noise_factor = 0.2
x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape)
x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape)
x_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)
x_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)
Explanation: Adding random noise to the images
End of explanation
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i + 1)
plt.title("original + noise")
plt.imshow(tf.squeeze(x_test_noisy[i]))
plt.gray()
plt.show()
Explanation: Plot the noisy images.
End of explanation
class Denoise(Model):
def __init__(self):
super(Denoise, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (3, 3), activation='relu', padding='same', strides=2),
layers.Conv2D(8, (3, 3), activation='relu', padding='same', strides=2)])
self.decoder = tf.keras.Sequential([
layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same')])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Denoise()
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
autoencoder.fit(x_train_noisy, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test_noisy, x_test))
Explanation: Define a convolutional autoencoder
In this example, you will train a convolutional autoencoder using Conv2D layers in the encoder, and Conv2DTranspose layers in the decoder.
End of explanation
autoencoder.encoder.summary()
Explanation: Let's take a look at a summary of the encoder. Notice how the images are downsampled from 28x28 to 7x7.
End of explanation
autoencoder.decoder.summary()
Explanation: The decoder upsamples the images back from 7x7 to 28x28.
End of explanation
encoded_imgs = autoencoder.encoder(x_test_noisy).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original + noise
ax = plt.subplot(2, n, i + 1)
plt.title("original + noise")
plt.imshow(tf.squeeze(x_test_noisy[i]))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
bx = plt.subplot(2, n, i + n + 1)
plt.title("reconstructed")
plt.imshow(tf.squeeze(decoded_imgs[i]))
plt.gray()
bx.get_xaxis().set_visible(False)
bx.get_yaxis().set_visible(False)
plt.show()
Explanation: Plotting both the noisy images and the denoised images produced by the autoencoder.
End of explanation
# Download the dataset
dataframe = pd.read_csv('http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv', header=None)
raw_data = dataframe.values
dataframe.head()
# The last element contains the labels
labels = raw_data[:, -1]
# The other data points are the electrocardiogram data
data = raw_data[:, 0:-1]
train_data, test_data, train_labels, test_labels = train_test_split(
data, labels, test_size=0.2, random_state=21
)
Explanation: Third example: Anomaly detection
Overview
In this example, you will train an autoencoder to detect anomalies on the ECG5000 dataset. This dataset contains 5,000 Electrocardiograms, each with 140 data points. You will use a simplified version of the dataset, where each example has been labeled either 0 (corresponding to an abnormal rhythm), or 1 (corresponding to a normal rhythm). You are interested in identifying the abnormal rhythms.
Note: This is a labeled dataset, so you could phrase this as a supervised learning problem. The goal of this example is to illustrate anomaly detection concepts you can apply to larger datasets, where you do not have labels available (for example, if you had many thousands of normal rhythms, and only a small number of abnormal rhythms).
How will you detect anomalies using an autoencoder? Recall that an autoencoder is trained to minimize reconstruction error. You will train an autoencoder on the normal rhythms only, then use it to reconstruct all the data. Our hypothesis is that the abnormal rhythms will have higher reconstruction error. You will then classify a rhythm as an anomaly if the reconstruction error surpasses a fixed threshold.
Load ECG data
The dataset you will use is based on one from timeseriesclassification.com.
End of explanation
min_val = tf.reduce_min(train_data)
max_val = tf.reduce_max(train_data)
train_data = (train_data - min_val) / (max_val - min_val)
test_data = (test_data - min_val) / (max_val - min_val)
train_data = tf.cast(train_data, tf.float32)
test_data = tf.cast(test_data, tf.float32)
Explanation: Normalize the data to [0,1].
End of explanation
train_labels = train_labels.astype(bool)
test_labels = test_labels.astype(bool)
normal_train_data = train_data[train_labels]
normal_test_data = test_data[test_labels]
anomalous_train_data = train_data[~train_labels]
anomalous_test_data = test_data[~test_labels]
Explanation: You will train the autoencoder using only the normal rhythms, which are labeled in this dataset as 1. Separate the normal rhythms from the abnormal rhythms.
End of explanation
plt.grid()
plt.plot(np.arange(140), normal_train_data[0])
plt.title("A Normal ECG")
plt.show()
Explanation: Plot a normal ECG.
End of explanation
plt.grid()
plt.plot(np.arange(140), anomalous_train_data[0])
plt.title("An Anomalous ECG")
plt.show()
Explanation: Plot an anomalous ECG.
End of explanation
class AnomalyDetector(Model):
def __init__(self):
super(AnomalyDetector, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Dense(32, activation="relu"),
layers.Dense(16, activation="relu"),
layers.Dense(8, activation="relu")])
self.decoder = tf.keras.Sequential([
layers.Dense(16, activation="relu"),
layers.Dense(32, activation="relu"),
layers.Dense(140, activation="sigmoid")])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = AnomalyDetector()
autoencoder.compile(optimizer='adam', loss='mae')
Explanation: Build the model
End of explanation
history = autoencoder.fit(normal_train_data, normal_train_data,
epochs=20,
batch_size=512,
validation_data=(test_data, test_data),
shuffle=True)
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
Explanation: Notice that the autoencoder is trained using only the normal ECGs, but is evaluated using the full test set.
End of explanation
encoded_data = autoencoder.encoder(normal_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()
plt.plot(normal_test_data[0], 'b')
plt.plot(decoded_data[0], 'r')
plt.fill_between(np.arange(140), decoded_data[0], normal_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
Explanation: You will soon classify an ECG as anomalous if the reconstruction error is greater than one standard deviation from the normal training examples. First, let's plot a normal ECG from the training set, the reconstruction after it's encoded and decoded by the autoencoder, and the reconstruction error.
End of explanation
encoded_data = autoencoder.encoder(anomalous_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()
plt.plot(anomalous_test_data[0], 'b')
plt.plot(decoded_data[0], 'r')
plt.fill_between(np.arange(140), decoded_data[0], anomalous_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
Explanation: Create a similar plot, this time for an anomalous test example.
End of explanation
reconstructions = autoencoder.predict(normal_train_data)
train_loss = tf.keras.losses.mae(reconstructions, normal_train_data)
plt.hist(train_loss[None,:], bins=50)
plt.xlabel("Train loss")
plt.ylabel("No of examples")
plt.show()
Explanation: Detect anomalies
Detect anomalies by calculating whether the reconstruction loss is greater than a fixed threshold. In this tutorial, you will calculate the mean absolute error (MAE) for normal examples from the training set, then classify future examples as anomalous if the reconstruction error is more than one standard deviation above the mean error of the training set.
Plot the reconstruction error on normal ECGs from the training set
End of explanation
threshold = np.mean(train_loss) + np.std(train_loss)
print("Threshold: ", threshold)
Explanation: Choose a threshold value that is one standard deviation above the mean.
End of explanation
reconstructions = autoencoder.predict(anomalous_test_data)
test_loss = tf.keras.losses.mae(reconstructions, anomalous_test_data)
plt.hist(test_loss[None, :], bins=50)
plt.xlabel("Test loss")
plt.ylabel("No of examples")
plt.show()
Explanation: Note: There are other strategies you could use to select a threshold value above which test examples should be classified as anomalous; the correct approach will depend on your dataset. You can learn more with the links at the end of this tutorial.
If you examine the reconstruction error for the anomalous examples in the test set, you'll notice most have greater reconstruction error than the threshold. By varying the threshold, you can adjust the precision and recall of your classifier.
End of explanation
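# Added illustration (not part of the original tutorial): sweep a few candidate
# thresholds drawn from the training-loss distribution and see how precision and
# recall on the test set respond. Uses `autoencoder`, `train_loss`, `test_data`
# and `test_labels` defined above.
reconstructions_all = autoencoder.predict(test_data)
losses_all = tf.keras.losses.mae(reconstructions_all, test_data).numpy()
for t in np.percentile(train_loss, [50, 75, 90, 95, 99]):
    preds_t = losses_all < t  # True means "predicted normal"
    print("threshold={:.4f} precision={:.3f} recall={:.3f}".format(
        t, precision_score(test_labels, preds_t), recall_score(test_labels, preds_t)))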
def predict(model, data, threshold):
reconstructions = model(data)
loss = tf.keras.losses.mae(reconstructions, data)
return tf.math.less(loss, threshold)
def print_stats(predictions, labels):
print("Accuracy = {}".format(accuracy_score(labels, predictions)))
print("Precision = {}".format(precision_score(labels, predictions)))
print("Recall = {}".format(recall_score(labels, predictions)))
preds = predict(autoencoder, test_data, threshold)
print_stats(preds, test_labels)
Explanation: Classify an ECG as an anomaly if the reconstruction error is greater than the threshold.
End of explanation |
601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Lasso
Modified from the github repo
Step1: Hitters dataset
Let's load the dataset from the previous lab. | Python Code:
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LinearRegression, lars_path, Lasso, LassoCV
%matplotlib inline
n=100
p=1000
X = np.random.randn(n,p)
X = scale(X)
sprob = 0.02
Sbool = np.random.rand(p) < sprob
s = np.sum(Sbool)
print("Number of non-zero's: {}".format(s))
mu = 100.
beta = np.zeros(p)
beta[Sbool] = mu * np.random.randn(s)
eps = np.random.randn(n)
y = X.dot(beta) + eps
larper = lars_path(X,y,method="lasso")
S = set(np.where(Sbool)[0])
for j in S:
_ = plt.plot(larper[0],larper[2][j,:],'r')
for j in set(range(p)) - S:
_ = plt.plot(larper[0],larper[2][j,:],'k',linewidth=.5)
_ = plt.title('Lasso path for simulated data')
_ = plt.xlabel('lambda')
_ = plt.ylabel('Coef')
Explanation: The Lasso
Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python which is based on the book by James et al. Intro to Statistical Learning.
End of explanation
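# Added sketch (not part of the original lab): use cross-validated Lasso on the
# same simulated data to pick a regularization strength and check how much of
# the true support is recovered. Assumes X, y and Sbool from above.
lasso_cv = LassoCV(cv=5).fit(X, y)
print("Chosen alpha: {:.4f}".format(lasso_cv.alpha_))
support_true = set(np.where(Sbool)[0])
support_est = set(np.where(lasso_cv.coef_ != 0)[0])
print("True non-zeros: {}, estimated non-zeros: {}, recovered: {}".format(
    len(support_true), len(support_est), len(support_true & support_est)))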
# In R, I exported the dataset from package 'ISLR' to a csv file.
df = pd.read_csv('Data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
Explanation: Hitters dataset
Let's load the dataset from the previous lab.
End of explanation |
602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Weight clustering in Keras example
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Train a tf.keras model for MNIST without clustering
Step3: Evaluate the baseline model and save it for later usage
Step4: Fine-tune the pre-trained model with clustering
Apply the cluster_weights() API to a whole pre-trained model to demonstrate that it is not only effective in shrinking the model size after applying zip, but also keeps good accuracy. For how best to balance the accuracy and compression rate for your use case, please refer to the per-layer example in the comprehensive guide.
Define the model and apply the clustering API
Before you pass the model to the clustering API, make sure it is trained and shows some acceptable accuracy.
Step5: Fine-tune the model and evaluate the accuracy against the baseline
Fine-tune the model with clustering for 1 epoch.
Step6: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
Step7: Create 6x smaller models from clustering
Both <code>strip_clustering</code> and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.
First, create a compressible model for TensorFlow. Here, strip_clustering removes all variables (e.g. tf.Variable for storing the cluster centroids and the indices) that clustering only needs during training, which would otherwise add to the model size during inference.
Step8: Then, create a compressible model for TFLite. You can convert the clustered model to a format that is runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
Step9: Define a helper function to actually compress the models via gzip and measure the zipped size.
Step10: Compare and see that the models are 6x smaller from clustering
Step11: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
You can apply post-training quantization to the clustered model for additional benefits.
Step12: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
Step13: Evaluate the clustered and quantized model and see that the accuracy from TensorFlow persists to the TFLite backend. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
from tensorflow import keras
import numpy as np
import tempfile
import zipfile
import os
Explanation: Weight clustering in Keras example
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/model_optimization/guide/clustering/clustering_example"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
</table>
Overview
Welcome to the end-to-end example for weight clustering, part of the TensorFlow Model Optimization Toolkit.
Other pages
For an introduction to what weight clustering is and to determine if you should use it (including what's supported), see the overview page.
To quickly find the APIs you need for your use case (beyond fully clustering a model with 16 clusters), see the comprehensive guide.
Contents
In this tutorial, you will:
Train a tf.keras model for the MNIST dataset from scratch.
Fine-tune the model by applying the weight clustering API and see the accuracy.
Create a 6x smaller TF and TFLite model from clustering.
Create an 8x smaller TFLite model from combining weight clustering and post-training quantization.
See the persistence of accuracy from TF to TFLite.
Setup
You can run this Jupyter Notebook in your local virtualenv or Colab. For details of setting up dependencies, please refer to the installation guide.
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
Explanation: Train a tf.keras model for MNIST without clustering
End of explanation
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Explanation: Evaluate the baseline model and save it for later usage
End of explanation
import tensorflow_model_optimization as tfmot
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 16,
'cluster_centroids_init': CentroidInitialization.LINEAR
}
# Cluster a whole model
clustered_model = cluster_weights(model, **clustering_params)
# Use smaller learning rate for fine-tuning clustered model
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
clustered_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
clustered_model.summary()
Explanation: Fine-tune the pre-trained model with clustering
Apply the cluster_weights() API to a whole pre-trained model to demonstrate that it is not only effective in shrinking the model size after applying zip, but also keeps good accuracy. For how best to balance the accuracy and compression rate for your use case, please refer to the per-layer example in the comprehensive guide.
Define the model and apply the clustering API
Before you pass the model to the clustering API, make sure it is trained and shows some acceptable accuracy.
End of explanation
# Fine-tune model
clustered_model.fit(
train_images,
train_labels,
batch_size=500,
epochs=1,
validation_split=0.1)
Explanation: Fine-tune the model and evaluate the accuracy against the baseline
Fine-tune the model with clustering for 1 epoch.
End of explanation
_, clustered_model_accuracy = clustered_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Clustered test accuracy:', clustered_model_accuracy)
Explanation: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
End of explanation
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
_, clustered_keras_file = tempfile.mkstemp('.h5')
print('Saving clustered model to: ', clustered_keras_file)
tf.keras.models.save_model(final_model, clustered_keras_file,
include_optimizer=False)
Explanation: Create 6x smaller models from clustering
Both <code>strip_clustering</code> and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.
First, create a compressible model for TensorFlow. Here, strip_clustering removes all variables (e.g. tf.Variable for storing the cluster centroids and the indices) that clustering only needs during training, which would otherwise add to the model size during inference.
End of explanation
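# Optional check (added, not part of the original tutorial): count the unique
# weight values per kernel in the stripped model. With 16 clusters, each
# clustered kernel should contain at most 16 unique values.
for layer in final_model.layers:
    for weight in layer.weights:
        if "kernel" in weight.name:
            print(weight.name, "->", np.unique(weight.numpy()).size, "unique values")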
clustered_tflite_file = '/tmp/clustered_mnist.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
tflite_clustered_model = converter.convert()
with open(clustered_tflite_file, 'wb') as f:
f.write(tflite_clustered_model)
print('Saved clustered TFLite model to:', clustered_tflite_file)
Explanation: Then, create a compressible model for TFLite. You can convert the clustered model to a format that is runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
End of explanation
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
Explanation: Define a helper function to actually compress the models via gzip and measure the zipped size.
End of explanation
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered Keras model: %.2f bytes" % (get_gzipped_model_size(clustered_keras_file)))
print("Size of gzipped clustered TFlite model: %.2f bytes" % (get_gzipped_model_size(clustered_tflite_file)))
Explanation: Compare and see that the models are 6x smaller from clustering
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
_, quantized_and_clustered_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_clustered_tflite_file, 'wb') as f:
f.write(tflite_quant_model)
print('Saved quantized and clustered TFLite model to:', quantized_and_clustered_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_clustered_tflite_file)))
Explanation: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
You can apply post-training quantization to the clustered model for additional benefits.
End of explanation
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
Explanation: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()
test_accuracy = eval_model(interpreter)
print('Clustered and quantized TFLite test_accuracy:', test_accuracy)
print('Clustered TF test accuracy:', clustered_model_accuracy)
Explanation: Evaluate the clustered and quantized model and see that the accuracy from TensorFlow persists to the TFLite backend.
End of explanation |
603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Groupby operations
Some imports
Step1: Recap
Step2: Using the filtering and reductions operations we have seen in the previous notebooks, we could do something like
Step3: Pandas does not only let you group by a column name. In df.groupby(grouper) can be many things
Step4: And now applying this on some real data
These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here
Step5: <div class="alert alert-success">
<b>EXERCISE</b>
Step6: <div class="alert alert-success">
<b>EXERCISE</b>
Step7: <div class="alert alert-success">
<b>EXERCISE</b>
Step8: <div class="alert alert-success">
<b>EXERCISE</b>
Step9: <div class="alert alert-success">
<b>EXERCISE</b>
Step10: <div class="alert alert-success">
<b>EXERCISE</b>
Step11: <div class="alert alert-success">
<b>EXERCISE</b>
Step12: Transforms
Sometimes you don't want to aggregate the groups, but transform the values in each group. This can be achieved with transform
Step13: <div class="alert alert-success">
<b>EXERCISE</b>
Step14: <div class="alert alert-success">
<b>EXERCISE</b>
Step15: Intermezzo
Step16: In pandas, those methods (together with some additional methods) are also available for string Series through the .str accessor
Step17: For an overview of all string methods, see
Step18: <div class="alert alert-success">
<b>EXERCISE</b>
Step19: Value counts
A useful shortcut to calculate the number of occurrences of certain values is value_counts (this is somewhat equivalent to df.groupby(key).size())
For example, what are the most frequently occurring movie titles?
Step20: <div class="alert alert-success">
<b>EXERCISE</b>
Step21: <div class="alert alert-success">
<b>EXERCISE</b>
Step22: <div class="alert alert-success">
<b>EXERCISE</b>
Step23: <div class="alert alert-success">
<b>EXERCISE</b>
Step24: <div class="alert alert-success">
<b>EXERCISE</b>
Step25: <div class="alert alert-success">
<b>EXERCISE</b>
Step26: <div class="alert alert-success">
<b>EXERCISE</b>
Step27: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
pd.options.display.max_rows = 10
Explanation: Groupby operations
Some imports:
End of explanation
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
Explanation: Recap: the groupby operation (split-apply-combine)
The "group by" concept: we want to apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets
This operation is also referred to as the "split-apply-combine" operation, involving the following steps:
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
<img src="img/splitApplyCombine.png">
Similar to SQL GROUP BY
The example of the image in pandas syntax:
End of explanation
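# For comparison (added note): the groupby call below corresponds to the SQL
#   SELECT key, SUM(data) FROM df GROUP BY key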
df.groupby('key').aggregate('sum') # np.sum
df.groupby('key').sum()
Explanation: Using the filtering and reductions operations we have seen in the previous notebooks, we could do something like:
df[df['key'] == "A"].sum()
df[df['key'] == "B"].sum()
...
But pandas provides the groupby method to do this:
End of explanation
df.groupby(lambda x: x % 2).mean()
Explanation: Pandas does not only let you group by a column name; in df.groupby(grouper), the grouper can be many things:
Series (or string indicating a column in df)
function (to be applied on the index)
dict : groups by values
levels=[], names of levels in a MultiIndex
End of explanation
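# Short added illustration of two of the other grouper types listed above,
# applied to the small example DataFrame `df`:
# 1) an external Series aligned on the index
group_series = pd.Series(list('xyxyxyxyx'))
print(df.groupby(group_series)['data'].sum())
# 2) a dict mapping index labels to group names
group_dict = {i: ('low' if i < 5 else 'high') for i in range(9)}
print(df.groupby(group_dict)['data'].mean())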
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
Explanation: And now applying this on some real data
These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
cast dataset: different roles played by actors/actresses in films
title: title of the film
name: name of the actor/actress
type: actor/actress
n: the order of the role (n=1: leading role)
End of explanation
titles.groupby(titles.year // 10 * 10).size().plot(kind='bar')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Using groupby(), plot the number of films that have been released each decade in the history of cinema.
</div>
End of explanation
hamlet = titles[titles['title'] == 'Hamlet']
hamlet.groupby(hamlet.year // 10 * 10).size().plot(kind='bar')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Use groupby() to plot the number of "Hamlet" films made each decade.
</div>
End of explanation
cast1950 = cast[cast.year // 10 == 195]
cast1950 = cast1950[cast1950.n == 1]
cast1950.groupby(['year', 'type']).size()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?
</div>
End of explanation
cast1990 = cast[cast['year'] >= 1990]
cast1990 = cast1990[cast1990.n == 1]
cast1990.groupby('name').size().nlargest(10)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List the 10 actors/actresses that have the most leading roles (n=1) since the 1990's.
</div>
End of explanation
c = cast
c = c[c.title == 'The Pink Panther']
c = c.groupby(['year'])[['n']].max()
c
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Use groupby() to determine how many roles are listed for each of The Pink Panther movies.
</div>
End of explanation
c = cast
c = c[c.name == 'Frank Oz']
g = c.groupby(['year', 'title']).size()
g[g > 1]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List, in order by year, each of the films in which Frank Oz has played more than 1 role.
</div>
End of explanation
c = cast
c = c[c.name == 'Frank Oz']
g = c.groupby(['character']).size()
g[g > 1].sort_values()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List each of the characters that Frank Oz has portrayed at least twice.
</div>
End of explanation
df
df.groupby('key').transform('mean')
def normalize(group):
return (group - group.mean()) / group.std()
df.groupby('key').transform(normalize)
df.groupby('key').transform('sum')
Explanation: Transforms
Sometimes you don't want to aggregate the groups, but transform the values in each group. This can be achieved with transform:
End of explanation
cast['n_total'] = cast.groupby('title')['n'].transform('max')
cast.head()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Add a column to the `cast` dataframe that indicates the number of roles for the film.
</div>
End of explanation
leading = cast[cast['n'] == 1]
sums_decade = leading.groupby([cast['year'] // 10 * 10, 'type']).size()
sums_decade
#sums_decade.groupby(level='year').transform(lambda x: x / x.sum())
ratios_decade = sums_decade / sums_decade.groupby(level='year').transform('sum')
ratios_decade
ratios_decade[:, 'actor'].plot()
ratios_decade[:, 'actress'].plot()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the ratio of leading actor and actress roles to the total number of leading roles per decade.
</div>
Tip: you can to do a groupby twice in two steps, once calculating the numbers, and then the ratios.
End of explanation
s = 'Bradwurst'
s.startswith('B')
Explanation: Intermezzo: string manipulations
Python strings have a lot of useful methods available to manipulate or check the content of the string:
End of explanation
s = pd.Series(['Bradwurst', 'Kartoffelsalat', 'Sauerkraut'])
s.str.startswith('B')
Explanation: In pandas, those methods (together with some additional methods) are also available for string Series through the .str accessor:
End of explanation
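# A few more element-wise .str methods (added illustration), applied to the
# Series `s` defined above; all are standard pandas string methods.
print(s.str.lower())
print(s.str.len())
print(s.str.contains('kraut'))
print(s.str.replace('wurst', 'brot'))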
hamlets = titles[titles['title'].str.contains('Hamlet')]
hamlets['title'].value_counts()
hamlets = titles[titles['title'].str.match('Hamlet')]
hamlets['title'].value_counts()
Explanation: For an overview of all string methods, see: http://pandas.pydata.org/pandas-docs/stable/api.html#string-handling
<div class="alert alert-success">
<b>EXERCISE</b>: We already plotted the number of 'Hamlet' films released each decade, but not all titles are exactly called 'Hamlet'. Give an overview of the titles that contain 'Hamlet', and that start with 'Hamlet':
</div>
End of explanation
title_longest = titles['title'].str.len().nlargest(10)
title_longest
pd.options.display.max_colwidth = 210
titles.loc[title_longest.index]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List the 10 movie titles with the longest name.
</div>
End of explanation
titles.title.value_counts().head()
Explanation: Value counts
A useful shortcut to calculate the number of occurrences of certain values is value_counts (this is somewhat equivalent to df.groupby(key).size())
For example, what are the most frequently occurring movie titles?
End of explanation
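# Added check of the equivalence mentioned above: value_counts() matches a
# groupby-size sorted in descending order.
print(titles.groupby('title').size().sort_values(ascending=False).head())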
t = titles
t.year.value_counts().head(3)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Which years saw the most films released?
</div>
End of explanation
titles.year.value_counts().sort_index().plot()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Plot the number of released films over time
</div>
End of explanation
t = titles
t = t[t.title == 'Hamlet']
(t.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Plot the number of "Hamlet" films made each decade.
</div>
End of explanation
cast.character.value_counts().head(11)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What are the 11 most common character names in movie history?
</div>
End of explanation
cast[cast.year == 2010].name.value_counts().head(10)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Which actors or actresses appeared in the most movies in the year 2010?
</div>
End of explanation
cast[cast.name == 'Brad Pitt'].year.value_counts().sort_index().plot()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Plot how many roles Brad Pitt has played in each year of his career.
</div>
End of explanation
c = cast
c[c.title.str.startswith('The Life')].title.value_counts().head(10)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What are the 10 most common film titles that start with the words "The Life"?
</div>
End of explanation
c = cast
c = c[c.year // 10 == 195]
c = c[c.n == 1]
c.type.value_counts()
c = cast
c = c[c.year // 10 == 200]
c = c[c.n == 1]
c.type.value_counts()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s? And in 2000s?
</div>
End of explanation |
604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style="width
Step1: 3. Download settings
3.1 Choose download location
The original data can either be downloaded from the original data sources as specified below or from the opsd-Server. Default option is to download from the original sources as the aim of the project is to stay as close to original sources as possible. However, if problems with downloads e.g. due to changing urls occur, you can still run the script with the original data from the opsd_server.
Step2: 4. Define functions
Functions used multiple times within this script are now located in a separate file called download_and_process_DE_functions.py
5. Downloads
5.1 Download the BNetzA power plant list
This section downloads the BNetzA power plant list and converts it to a pandas data frame
Step3: 5.2 Download the UBA Plant list
This section downloads the power plant list from the German Federal Environment Agency (UBA) and converts it to a pandas data frame.
Step4: 6. Translate contents
6.1 BNetzA Columns
A dictionary with the original column names to the new column names is created. This dictionary is used to translate the column names.
Step5: 6.2 Fuel types
Step6: 6.3 Power plant status
Step7: 6.4 CHP Capability
Step8: 6.5 EEG
Step9: 6.6 UBA Columns
Translate the UBA Column names
Step10: 7. Process data
7.1 Set index to the BNetzA power plant ID
Step11: Manual adjustments
Step12: 7.2 Merge data from UBA List
In this section a hand-researched list is used to match the power plants from the UBA list to the BNetzA list.
Step13: 7.2.1 case 1-1
Matching
Step14: 7.2.2 case n-1
Match multiple BNetza IDs to one UBA ID
Step15: 7.2.3 case 1-n
1-n Case here
Step16: 7.2.4 Merge into plantlist
Step17: 7.3 Delete fuels not in focus
Here, solar, wind onshore. and wind offshore technologies are deleted from the list, as they are handled by another datapackage. Furthermore, aggregate values are excluded as well.
Step18: 7.4 Add Columns for shutdown and retrofit
Extract the year when plants were shutdown or retrofit, using regular expressions
Step19: 7.5 Convert input colums to usable data types
Step20: 7.6 Identify generation technology
7.6.1 Process technology information from UBA list
Split uba_technology information into technology (GT, CC,...) and type (HKW, IKW, ...)
Abbreviation
Step21: 7.6.2 Identify generation technology based on BNetzA information
Step22: 7.7 Add country code
Some power plants are in Austria, Switzerland, or Luxembourg. As they are sometimes part of the German electricity system, they are included here.
Step23: 7.8 Add efficiency data
7.8.1 Efficiencies from research
This sections adds efficiency data. These values have been researched by hand.
The source of each value is given in the column "efficiency_source".
Additionally, a rating of the source has been done, starting from A (e.g. website of the power plant's operator) to C (e.g. article in a local newspaper).
7.8.1.1 Import data
Step24: 7.8.1.2 Plot efficiencies by year of commissioning
Step25: 7.8.2 Efficiencies from literature
Jonas Egerer, Clemens Gerbaulet, Richard Ihlenburg, Friedrich Kunz, Benjamin Reinhard, Christian von Hirschhausen, Alexander Weber, Jens Weibezahn (2014)
Step26: 7.8.2.2 Apply efficiency approximation from literature
Step27: 7.9 Add geodata and EIC Codes
The locations of power plants have been researched manually; these are now added to the output. Checking was done visually using satellite imagery and other mapping material.
Step28: 7.10 Allocate energy source levels
To enable a more readable output, the column 'fuel' is augmented with additional information called 'energy source level'.
Step29: 8. Define final output
Step30: 8.1 Round values
Step31: 8.2 Verification
8.2.1 Capacities by plant status
Step32: 8.2.2 Power plant age
Step33: 8.2.3 Block size vs year of commissioning
This chart is suitable to check outliers of commissioning years and block sizes.
In theory, there should be no unexpected values, e.g. all commissioning years should be greater than 1900.
Block sizes above 2000 MW are also unlikely.
Step34: 8.3 Logical checks
8.3.1 Every power plant needs a capacity
List all entries with zero capacity.
Step35: 8.3.2 Commissioning Dates
Step36: 8.3.3 Compare UBA and BNetzA data
Step37: 9. Result export
Write the results to file | Python Code:
# Import all functions from external file
from download_and_process_DE_functions import *
# Jupyter functions
%matplotlib inline
Explanation: <div style="width:100%; background-color: #D9EDF7; border: 1px solid #CFCFCF; text-align: left; padding: 10px;">
<b>Conventional Power Plants: Power Plants in Germany</b>
<ul>
<li><a href="main.ipynb">Main Notebook</a></li>
<li>Processing notebook for German power plant</li>
<li><a href="download_and_process_EU.ipynb">Processing notebook for European power plants</a></li>
</ul>
<br>This notebook is part of the <a href="http://data.open-power-system-data.org/DATA PACKAGE NAME HERE"> Data package name here Data Package</a> of <a href="http://open-power-system-data.org">Open Power System Data</a>.
</div>
1. Power Plants in Germany
This file covers german power plants. It downloads the power plant list from the German Federal Network Agency (BNetzA) and augments it with more information.
Table of Contents
1. Power Plants in Germany
2. Prepare the environment
3. Download settings
3.1 Choose download location
4. Define functions
5. Downloads
5.1 Download the BNetzA power plant list
5.2 Download the Uba Plant list
6. Translate contents
6.1 Columns
6.2 Fuel types
6.3 Power plant status
6.4 CHP Capability
6.5 EEG
6.6 UBA Columns
7. Process data
7.1 Set index to the BNetzA power plant ID
7.2 Merge data from UBA List
7.2.1 case 1-1
7.2.2 case n-1
7.2.3 case 1-n
7.2.4 Merge into plantlist
7.3 Delete fuels not in focus
7.4 Add Columns for shutdown and retrofit
7.5 Convert input colums to usable data types
7.6 Identify generation technology
7.6.1 Process technology information from UBA list
7.6.2 Identify generation technology based on BNetzA information
7.7 Add country code
7.8 Add efficiency data
7.8.1 Efficiencies from research
7.8.1.1 Import data
7.8.1.2 Plot efficiencies by year of commissioning
7.8.1.3 Determine least-squares approximation based on researched data (planned)
7.8.1.4 Apply efficiency approximation from least squares approximation (planned)
7.8.2 Efficiencies from literature
7.8.2.1 Import data
7.8.2.2 Apply efficiency approximation from literature
7.9 Add geodata
7.10 Allocate energy source levels
8. Define final output
8.1 Round values
8.2 Verification
8.2.1 Capacities by plant status
8.2.2 Power plant age
8.2.3 Block size vs year of commissioning
8.3 Logical checks
8.3.1 Every power plant needs a capacity
8.3.2 Commissioning Dates
8.3.3 Compare UBA and BNetzA data
8.3.3.1 Postcodes of BNetzA and UBA lists should match
8.3.3.2 Compare Installed capacities
9. Result export
2. Prepare the environment
End of explanation
download_from = 'original_sources'
#download_from = 'opsd_server'
if download_from == 'original_sources':
# BNetzA Power plant list
url_bnetza = ('http://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/'
'Sachgebiete/Energie/Unternehmen_Institutionen/Versorgungssicherheit/'
'Erzeugungskapazitaeten/Kraftwerksliste/Kraftwerksliste_CSV.csv'
'?__blob=publicationFile&v=10')
# UBA Power plant list
url_uba = ('https://www.umweltbundesamt.de/sites/default/files/medien/'
'372/dokumente/kraftwerke_de_ab_100_mw_0.xls')
if download_from == 'opsd_server':
# Specify direction to original_data folder on the opsd data server
# BNetzA Power plant list
url_bnetza = 'http://data.open-power-system-data.org/conventional_power_plants/'
url_bnetza = url_bnetza + '2020-10-01'
url_bnetza = url_bnetza +'/original_data/Kraftwerksliste_CSV.csv'
# UBA Power plant list
url_uba = 'http://data.open-power-system-data.org/conventional_power_plants/'
url_uba = url_uba + '2020-10-01'
url_uba = url_uba +'/original_data/kraftwerke-de-ab-100-mw_0.xls'
Explanation: 3. Download settings
3.1 Choose download location
The original data can either be downloaded from the original data sources as specified below or from the opsd-Server. Default option is to download from the original sources as the aim of the project is to stay as close to original sources as possible. However, if problems with downloads e.g. due to changing urls occur, you can still run the script with the original data from the opsd_server.
End of explanation
plantlist = getbnetzalist(url_bnetza)
# clean unnamed columns
plantlist.drop([c for c in plantlist.columns if 'Unnamed:' in c], axis=1, inplace=True)
plantlist.head()
Explanation: 4. Define functions
Functions used multiple times within this script are now located in a separate file called download_and_process_DE_functions.py
5. Downloads
5.1 Download the BNetzA power plant list
This section downloads the BNetzA power plant list and converts it to a pandas data frame
End of explanation
plantlist_uba = getubalist(url_uba)
plantlist_uba.head()
Explanation: 5.2 Download the UBA Plant list
This section downloads the power plant list from the German Federal Environment Agency (UBA) and converts it to a pandas data frame.
End of explanation
dict_columns = {
'Kraftwerksnummer Bundesnetzagentur':
'id',
'Unternehmen':
'company',
'Kraftwerksname':
'name',
'PLZ\n(Standort Kraftwerk)':
'postcode',
'Ort\n(Standort Kraftwerk)':
'city',
'Straße und Hausnummer (Standort Kraftwerk)':
'street',
'Bundesland':
'state',
'Blockname':
'block',
('Datum der Aufnahme der kommerziellen Stromeinspeisung der Erzeugungseinheit [Datum/jahr]'):
'commissioned',
('Kraftwerksstatus \n(in Betrieb/\nvorläufig stillgelegt/\nsaisonale '
'Konservierung\nReservekraftwerk/\nSonderfall)'):
'status',
('Kraftwerksstatus \n(in Betrieb/\nvorläufig stillgelegt/\nsaisonale '
'Konservierung\nGesetzlich an Stilllegung gehindert/\nSonderfall)'):
'status',
('Kraftwerksstatus \n(in Betrieb/\nvorläufig stillgelegt/\nsaisonale '
'Konservierung\nNetzreserve/ Sicherheitsbereitschaft/\nSonderfall)'):
'status',
'Energieträger':
'fuel_basis',
('Spezifizierung "Mehrere Energieträger" und "Sonstige Energieträger" - '
'Hauptbrennstoff'): 'fuel_multiple1',
'Spezifizierung "Mehrere Energieträger" - Zusatz- / Ersatzbrennstoffe':
'fuel_multiple2',
('Auswertung\nEnergieträger (Zuordnung zu einem Hauptenergieträger bei '
'Mehreren Energieträgern)'):
'fuel',
'Förderberechtigt nach EEG\n(ja/nein)':
'eeg',
'Wärmeauskopplung (KWK)\n(ja/nein)':
'chp',
'Netto-Nennleistung (elektrische Wirkleistung) in MW':
'capacity',
('Bezeichnung Verknüpfungspunkt (Schaltanlage) mit dem Stromnetz der '
'Allgemeinen Versorgung gemäß Netzbetreiber'):
'network_node',
'Netz- oder Umspannebene des Anschlusses':
'voltage',
'Name Stromnetzbetreiber':
'network_operator',
'Kraftwerksname / Standort':
'uba_name',
'Betreiber ':
'uba_company',
'Standort-PLZ':
'uba_postcode',
'Kraftwerksstandort':
'uba_city',
'Elektrische Bruttoleistung (MW)':
'uba_capacity',
'Fernwärme-leistung (MW)':
'uba_chp_capacity',
'Inbetriebnahme (ggf. Ertüchtigung)':
'uba_commissioned',
'Anlagenart':
'uba_technology',
'Primärenergieträger':
'uba_fuel',
}
plantlist.rename(columns=dict_columns, inplace=True)
# Check if all columns have been translated
for columnnames in plantlist.columns:
# if columnnames not in dict_columns.values():
if columnnames not in dict_columns.values():
logger.error("Untranslated column: "+ columnnames)
Explanation: 6. Translate contents
6.1 BNetzA Columns
A dictionary with the original column names to the new column names is created. This dictionary is used to translate the column names.
End of explanation
# first remove line breaks
plantlist['fuel'] = plantlist['fuel'].str.replace('\n', ' ')
# Delete entries without fuel and name
plantlist = plantlist.dropna(subset = ['fuel','name'])
dict_fuels = {
'Steinkohle': 'Hard coal',
'Erdgas': 'Natural gas',
'Braunkohle': 'Lignite',
'Kernenergie': 'Nuclear',
'Pumpspeicher': 'Hydro PSP',
'Biomasse': 'Biomass and biogas',
'Mineralölprodukte': 'Oil',
'Laufwasser': 'Hydro',
'Sonstige Energieträger (nicht erneuerbar) ': 'Other fuels',
'Abfall': 'Waste',
'Speicherwasser (ohne Pumpspeicher)': 'Hydro Reservoir',
'Unbekannter Energieträger (nicht erneuerbar)': 'Other fuels',
'Sonstige Energieträger (nicht erneuerbar)': 'Other fuels',
'Mehrere Energieträger (nicht erneuerbar)': 'Mixed fossil fuels',
'Deponiegas': 'Sewage and landfill gas',
'Windenergie (Onshore-Anlage)': 'Onshore',
'Windenergie (Onshore-Anlage)neu': 'Onshore',
'Windenergie (Offshore-Anlage)': 'Offshore',
'Solare Strahlungsenergie': 'Solar',
'Klärgas': 'Sewage and landfill gas',
'Geothermie': 'Geothermal',
'Grubengas': 'Other fossil fuels',
'Sonstige Speichertechnologien': 'Storage Technologies'
}
plantlist["fuel"].replace(dict_fuels, inplace=True)
# Check if all fuels have been translated
for fuelnames in plantlist["fuel"].unique():
if fuelnames not in dict_fuels.values():
print(dict_fuels.values(), fuelnames)
logger.error("Untranslated fuel: " + fuelnames)
Explanation: 6.2 Fuel types
End of explanation
dict_plantstatus = {
'in Betrieb': 'operating',
'In Betrieb': 'operating',
'vorläufig stillgelegt': 'shutdown_temporary',
'Vorläufig stillgelegt': 'shutdown_temporary',
'Vorläufig Stillgelegt': 'shutdown_temporary',
'Sonderfall': 'special_case',
'saisonale Konservierung': 'seasonal_conservation',
'Saisonale Konservierung': 'seasonal_conservation',
'Reservekraftwerk':'reserve',
'Endgültig Stillgelegt 2011': 'shutdown_2011',
'Endgültig Stillgelegt 2012': 'shutdown_2012',
'Endgültig Stillgelegt 2013': 'shutdown_2013',
'Endgültig Stillgelegt 2014': 'shutdown_2014',
'Endgültig Stillgelegt 2015': 'shutdown_2015',
'Endgültig stillgelegt 2015': 'shutdown_2015',
'Endgültig Stillgelegt 2016': 'shutdown_2016',
'Gesetzlich an Stilllegung gehindert': 'operating',
'Endgültig Stillgelegt 2011 (ohne StA)': 'shutdown_2011',
'Endgültig Stillgelegt 2012 (ohne StA)': 'shutdown_2012',
'Endgültig Stillgelegt 2013 (mit StA)': 'shutdown_2013',
'Endgültig Stillgelegt 2013 (ohne StA)': 'shutdown_2013',
'Endgültig Stillgelegt 2014 (mit StA)': 'shutdown_2014',
'Endgültig Stillgelegt 2014 (ohne StA)': 'shutdown_2014',
'Endgültig Stillgelegt 2015 (mit StA)': 'shutdown_2015',
'Endgültig Stillgelegt 2015 (ohne StA)': 'shutdown_2015',
'Endgültig Stillgelegt 2016 (mit StA)': 'shutdown_2016',
'Sicherheitsbereitschaft': 'reserve',
'Vorläufig Stillgelegt (mit StA)': 'shutdown_temporary',
'Vorläufig Stillgelegt (ohne StA)': 'shutdown_temporary',
'Endgültig Stillgelegt 2016 (ohne StA)': 'shutdown_2016',
'Endgültig Stillgelegt 2017 (mit StA)' : 'shutdown_2017',
'Endgültig Stillgelegt 2017 (ohne StA)': 'shutdown_2017',
'Endgültig Stillgelegt 2018 (mit StA)' : 'shutdown_2018',
'Endgültig Stillgelegt 2018 (ohne StA)': 'shutdown_2018',
'Endgültig Stillgelegt 2019 (mit StA)': 'shutdown_2019',
'Endgültig Stillgelegt 2019 (ohne StA)': 'shutdown_2019',
'gesetzlich an Stilllegung gehindert' : 'operating',
'Netzreserve' : 'reserve',
'Wegfall IWA nach DE' : 'special_case',
}
plantlist['status'].replace(dict_plantstatus, inplace=True)
# Check if all fuels have been translated
for statusnames in plantlist['status'].unique():
if statusnames not in dict_plantstatus.values():
logger.error('Untranslated plant status: '+ statusnames)
Explanation: 6.3 Power plant status
End of explanation
dict_yesno ={
'Nein': 'no',
'nein': 'no',
'Ja': 'yes',
'ja': 'yes',
}
plantlist['chp'].replace(dict_yesno, inplace=True)
# Check if all fuels have been translated
for chpnames in plantlist['chp'].unique():
if (chpnames not in dict_yesno.values()) & (str(chpnames) != 'nan'):
logger.error('Untranslated chp capability: ' + str(chpnames))
Explanation: 6.4 CHP Capability
End of explanation
plantlist['eeg'].replace(dict_yesno, inplace=True)
# Check if all EEG values have been translated
for eegnames in plantlist['eeg'].unique():
if (eegnames not in dict_yesno.values()) & (str(eegnames) != 'nan'):
logger.error('Untranslated EEG type: ' + str(eegnames))
Explanation: 6.5 EEG
End of explanation
dict_uba_columns = {
'Kraftwerksname / Standort': 'uba_name',
'Betreiber ': 'uba_company',
'Standort-PLZ': 'uba_postcode',
'Kraftwerksstandort': 'uba_city',
'Elektrische Bruttoleistung (MW)': 'uba_capacity',
'Fernwärme-leistung (MW)': 'uba_chp_capacity',
'Inbetriebnahme (ggf. Ertüchtigung)': 'uba_commissioned',
'Anlagenart': 'uba_technology',
'Primärenergieträger': 'uba_fuel',
'Bundesland':'uba_state',
}
plantlist_uba.rename(columns=dict_uba_columns, inplace=True)
# Check if all columns have been translated
for columnnames in plantlist_uba.columns:
if columnnames not in dict_uba_columns.values():
logger.error('Untranslated column: ' + columnnames)
# Prepare for matching
plantlist_uba['uba_id_string'] = (plantlist_uba['uba_name']
+ '_'
+ plantlist_uba['uba_fuel'])
Explanation: 6.6 UBA Columns
Translate the UBA Column names
End of explanation
# Set Index of BNetzA power plant list to Kraftwerksnummer_Bundesnetzagentur
plantlist['bnetza_id'] = plantlist['id']
plantlist = plantlist.set_index('id')
# remove line breaks in some columns
plantlist['network_node'] = plantlist['network_node'].str.replace('\n', ' ')
plantlist['company'] = plantlist['company'].str.replace('\n', ' ')
plantlist['name'] = plantlist['name'].str.replace('\n', ' ')
plantlist['fuel'] = plantlist['fuel'].str.replace('\n', ' ')
plantlist['block'] = plantlist['block'].str.replace('\n', ' ')
plantlist['network_operator'] = plantlist['network_operator'].str.replace('\n', ' ')
plantlist['street'] = plantlist['street'].str.replace('\n', ' ')
plantlist['commissioned'] = plantlist['commissioned'].str.replace('\n', ' ')
plantlist.head()
Explanation: 7. Process data
7.1 Set index to the BNetzA power plant ID
End of explanation
plantlist.loc[plantlist['bnetza_id'] == 'BNA0834', 'fuel'] = 'Natural gas'
plantlist.loc[plantlist['bnetza_id'] == 'BNA0662a', 'fuel'] = 'Hard coal'
plantlist.loc[plantlist['bnetza_id'] == 'BNA0662b', 'fuel'] = 'Hard coal'
Explanation: Manual adjustments:
End of explanation
# read matching list
matchinglist = getmatchinglist()
matchinglist.head()
Explanation: 7.2 Merge data from UBA List
In this section a hand-researched list is used to match the power plants from the UBA list to the BNetzA list.
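The cells that follow separate three matching cases with pandas.DataFrame.duplicated(). The toy sketch below (with invented IDs, not taken from the real matching list) illustrates how the three cases fall out:
# Toy sketch with invented IDs: how duplicated() splits the matching list
# into 1-1, n-1 and 1-n cases.
import pandas as pd
toy = pd.DataFrame({
    'ID BNetzA':     ['BNA0001', 'BNA0002', 'BNA0003', 'BNA0004', 'BNA0004'],
    'uba_id_string': ['A_Erdgas', 'B_Steinkohle', 'B_Steinkohle', 'C_Heizoel', 'D_Heizoel'],
})
dup_uba    = toy.duplicated(subset=['uba_id_string'], keep=False)
dup_bnetza = toy.duplicated(subset=['ID BNetzA'], keep=False)
case_1t1 = toy[~dup_uba & ~dup_bnetza]  # one BNetzA block <-> one UBA plant
case_nt1 = toy[dup_uba & ~dup_bnetza]   # several BNetzA blocks share one UBA plant
case_1tn = toy[~dup_uba & dup_bnetza]   # one BNetzA entry covers several UBA plants
print(case_1t1, case_nt1, case_1tn, sep='\n\n')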
End of explanation
match1t1 = matchinglist[
(matchinglist.duplicated(subset=['uba_id_string'], keep=False) == False)
& (matchinglist.duplicated(subset=['ID BNetzA'], keep=False) == False)]
match1t1 = pd.merge(match1t1, plantlist_uba,
left_on='uba_id_string',
right_on='uba_id_string',
how='left')
match1t1 = match1t1.set_index('ID BNetzA')
#Add comment
match1t1['merge_comment'] = ('List matching type: Single UBA power plant '
'assigned to single BNetzA power plant')
match1t1.head()
Explanation: 7.2.1 case 1-1
Matching: 1-1 One BNetzA ID to one UBA-ID
End of explanation
# Matching structure (example):
# bnetza_id uba_id
# 1 1
# 2 1
# 3 1
# 4 2
# 5 2
# Get relevant entries from the matchinglist and merge the corresponding
# UBA Data to the list.
matchnt1= matchinglist[
(matchinglist.duplicated(subset=['uba_id_string'], keep=False) == True)
& (matchinglist.duplicated(subset=['ID BNetzA'], keep=False)== False)]
matchnt1 = pd.merge(matchnt1, plantlist_uba,
left_on='uba_id_string', right_on='uba_id_string', how='left')
matchnt1 = matchnt1.set_index('ID BNetzA')
# Import BNetzA Capacities and CHP criterion into matchnt1 dataframe
plantlist_capacities = pd.DataFrame(plantlist[['capacity', 'chp']]).rename(
columns={'capacity': 'capacity_bnetza', 'chp': 'chp_bnetza'})
matchnt1 = pd.merge(matchnt1, plantlist_capacities,
left_index=True, right_index=True, how='left')
# Get sum of BNetzA Capacitites for each UBA Index and merge into matchnt1 dataframe
plantlist_uba_capacitysum = pd.DataFrame(
matchnt1.groupby('uba_id_string').sum()['capacity_bnetza']).rename(
columns={'capacity_bnetza': 'capacity_bnetza_aggregate'})
matchnt1 = pd.merge(matchnt1, plantlist_uba_capacitysum,
left_on='uba_id_string', right_index=True, how='left')
# Scale UBA Capacities based BNetzA Data
matchnt1['uba_capacity_scaled'] = (matchnt1['uba_capacity']
* matchnt1['capacity_bnetza']
/ matchnt1['capacity_bnetza_aggregate'])
# determine sum of capacities with chp capability and add to matchnt1
plantlist_uba_chp_capacities = matchnt1[(matchnt1['chp_bnetza'] == 'yes')]
plantlist_uba_chp_capacitysum = pd.DataFrame(
plantlist_uba_chp_capacities.groupby('uba_id_string')
.sum()['capacity_bnetza'])
plantlist_uba_chp_capacitysum = plantlist_uba_chp_capacitysum.rename(
columns={'capacity_bnetza': 'capacity_bnetza_with_chp'})
matchnt1 = pd.merge(matchnt1, plantlist_uba_chp_capacitysum,
left_on='uba_id_string', right_index=True, how='left',)
matchnt1['uba_chp_capacity'] = pd.to_numeric(matchnt1['uba_chp_capacity'], errors='coerce')
matchnt1['uba_chp_capacity_scaled'] = (matchnt1['uba_chp_capacity']
* matchnt1['capacity_bnetza']
/ matchnt1['capacity_bnetza_with_chp'])
# Change column names for merge later on
matchnt1['uba_chp_capacity_original'] = matchnt1['uba_chp_capacity']
matchnt1['uba_chp_capacity'] = matchnt1['uba_chp_capacity_scaled']
matchnt1['uba_capacity_original'] = matchnt1['uba_capacity']
matchnt1['uba_capacity'] = matchnt1['uba_capacity_scaled']
#Add comment
matchnt1['merge_comment'] = ('List matching type: UBA capacity distributed '
'proportionally to multiple BNetzA power plants')
matchnt1.head()
Explanation: 7.2.2 case n-1
Match multiple BNetzA IDs to one UBA ID: the single UBA capacity is distributed across the matched BNetzA blocks in proportion to their BNetzA capacities.
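As a worked example with invented numbers (not from the real lists): a UBA plant reported at 300 MW that matches two BNetzA blocks of 100 MW and 200 MW is split in proportion to the BNetzA capacities, mirroring the groupby/sum logic of the n-1 case:
# Worked example (invented numbers) of the proportional scaling.
import pandas as pd
blocks = pd.DataFrame({'capacity_bnetza': [100.0, 200.0],
                       'uba_capacity': [300.0, 300.0]})
blocks['capacity_bnetza_aggregate'] = blocks['capacity_bnetza'].sum()
blocks['uba_capacity_scaled'] = (blocks['uba_capacity']
                                 * blocks['capacity_bnetza']
                                 / blocks['capacity_bnetza_aggregate'])
print(blocks['uba_capacity_scaled'].tolist())  # [100.0, 200.0]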
End of explanation
# The resulting DataFrame should be called "match1tn"
# Matching structure:
# bnetza_id uba_id
# 1 1
# 1 2
# 1 3
# 2 4
# 2 5
# Get relevant entries from the matchinglist and merge the corresponding UBA Data to the list.
match1tn= matchinglist[
(matchinglist.duplicated(subset=['ID BNetzA'], keep=False) == True) &
(matchinglist.duplicated(subset=['uba_id_string'], keep=False)== False)]
match1tn = pd.merge(match1tn, plantlist_uba,
left_on='uba_id_string', right_on='uba_id_string', how='left')
match1tn = match1tn.set_index('ID BNetzA')
match1tn.head()
# Import BNetzA Capacities and CHP criterion into match1tn dataframe
plantlist_capacities = pd.DataFrame(plantlist[['capacity','chp']]).rename(
columns = {'capacity': 'capacity_bnetza', 'chp': 'chp_bnetza'})
match1tn = pd.merge(match1tn, plantlist_capacities,
left_index=True, right_index=True, how='left')
match1tn.index.names=['ID BNetzA']
match1tn.head()
# Get sum of UBA Capacitites per BNetzA Index and merge to match1tn dataframe
plantlist_bnetza_capacitysum = pd.DataFrame(
match1tn.groupby(match1tn.index).sum()['uba_capacity'])
plantlist_bnetza_capacitysum = plantlist_bnetza_capacitysum.rename(
columns={'uba_capacity':'uba_capacity_aggregate'})
match1tn = pd.merge(match1tn, plantlist_bnetza_capacitysum,
left_index=True, right_index=True, how='left')
match1tn['uba_chp_capacity'] = pd.to_numeric(match1tn['uba_chp_capacity'], errors='coerce')
match1tn
# Get sum of UBA CHP Capacities per BNetzA Index and merge to match1tn dataframe
plantlist_bnetza_chp_capacitysum = pd.DataFrame(
match1tn.groupby(match1tn.index).sum()['uba_chp_capacity'])
plantlist_bnetza_chp_capacitysum = plantlist_bnetza_chp_capacitysum.rename(
columns={'uba_chp_capacity': 'uba_chp_capacity_aggregate'})
match1tn = pd.merge(match1tn, plantlist_bnetza_chp_capacitysum,
left_index=True, right_index=True, how='left')
# Get UBA Technology for each BNetzA Index and merge into match1tn dataframe
## Option 1: Take all technologies and merge them
#match1tn['uba_technology_aggregate'] = pd.DataFrame(
# match1tn.groupby(match1tn.index)
# .transform(lambda x: ', '.join(x))['uba_technology'])
## Option 2 (currently preferred): Take technology with highest occurence
match1tn['uba_technology_aggregate'] = pd.DataFrame(
match1tn.groupby(match1tn.index)['uba_technology']
.agg(lambda x: x.value_counts().index[0]))
# Get UBA Plant name
match1tn['uba_name_aggregate'] = pd.DataFrame(
match1tn.groupby(match1tn.index).transform(lambda x: ', '.join(x))['uba_name'])
# Get UBA company name
match1tn['uba_company_aggregate'] = pd.DataFrame(
match1tn.groupby(match1tn.index)['uba_company']
.agg(lambda x:x.value_counts().index[0]))
# Change column names for merge later on
match1tn = match1tn.rename(
columns={'uba_chp_capacity': 'uba_chp_capacity_original',
'uba_capacity': 'uba_capacity_original',
'uba_chp_capacity_aggregate': 'uba_chp_capacity',
'uba_capacity_aggregate': 'uba_capacity'})
#Add comment
match1tn['merge_comment'] = ('List matching type: Multiple UBA capacities '
'aggregated to single BNetzA power plant')
# Drop duplicate rows and keep first entry
match1tn = match1tn.reset_index().drop_duplicates(subset='ID BNetzA',keep='first').set_index('ID BNetzA')
match1tn.head()
Explanation: 7.2.3 case 1-n
One BNetzA ID is matched to several UBA entries: the UBA capacities and CHP capacities are summed per BNetzA ID, and the UBA names, companies and technologies are aggregated.
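A toy sketch (invented values, not real plant data) of the aggregation in this case:
# Toy sketch: UBA capacities are summed per BNetzA id.
import pandas as pd
toy = pd.DataFrame({'ID BNetzA': ['BNA0004', 'BNA0004'],
                    'uba_capacity': [120.0, 80.0],
                    'uba_chp_capacity': [30.0, 10.0]}).set_index('ID BNetzA')
print(toy.groupby(toy.index).sum())  # 200.0 MW capacity, 40.0 MW CHP capacity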
End of explanation
# Merge the UBA DataFrames
# Merge first two dataframes
plantlist_uba_for_merge = match1t1.append(matchnt1, sort=True)
# Add third dataframe
plantlist_uba_for_merge = plantlist_uba_for_merge.append(match1tn,sort=True)
# Merge plantlist_uba_for_merge into the plantlist
plantlist = pd.merge(plantlist, plantlist_uba_for_merge,
left_index=True, right_index=True, how='left',sort=True)
plantlist.head()
Explanation: 7.2.4 Merge into plantlist
End of explanation
# Delete solar, wind onshore, and wind offshore
plantlist = plantlist[(plantlist['fuel'] != 'Solar')
& (plantlist['fuel'] != 'Onshore')
& (plantlist['fuel'] != 'Offshore')]
# Delete aggregate values
plantlist = plantlist[(plantlist['company'] != 'EEG-Anlagen < 10 MW')
& (plantlist['company'] != 'Nicht-EEG-Anlagen < 10 MW')]
Explanation: 7.3 Delete fuels not in focus
Here, solar, wind onshore, and wind offshore technologies are deleted from the list, as they are handled by another data package. Furthermore, aggregate values are excluded as well.
End of explanation
# Add columns with empty data
plantlist['shutdown'] = 'NaN'
plantlist['shutdown'] = pd.to_numeric(
plantlist['status'].str.extract('[\w].+(\d\d\d\d)', expand=False),
errors='coerce')
plantlist.loc[plantlist['shutdown'] > 0, 'status'] = 'shutdown'
# Fill retrofit data column
# Identify restrofit dates in UBA list
plantlist['retrofit'] = pd.to_numeric(
plantlist['uba_commissioned'].str.extract('[(.+](\d\d\d\d)', expand=False),
errors='coerce')
# Split multiple commissioning dates as listed in UBA
plantlist['uba_commissioned_1'] = pd.to_numeric(
plantlist['uba_commissioned'].str.extract('(\d\d\d\d)', expand=False),
errors='coerce')
plantlist.loc[plantlist['uba_commissioned_1'].isnull(), 'uba_commissioned_1'] = pd.to_numeric(
plantlist['uba_commissioned'].str.extract('(\d\d\d\d).+[\w]', expand=False),
errors='coerce').loc[plantlist['uba_commissioned_1'].isnull()]
plantlist['uba_commissioned_2'] = pd.to_numeric(
plantlist['uba_commissioned'].str.extract('[\w].+(\d\d\d\d).+[\w]', expand=False),
errors='coerce')
plantlist['uba_commissioned_3'] = pd.to_numeric(
plantlist['uba_commissioned'].str.extract('[\w].+(\d\d\d\d)', expand=False),
errors='coerce')
plantlist.loc[plantlist['retrofit'] == plantlist['uba_commissioned_1'], 'uba_commissioned_1'] = ''
plantlist.loc[plantlist['retrofit'] == plantlist['uba_commissioned_2'], 'uba_commissioned_2'] = ''
plantlist.loc[plantlist['retrofit'] == plantlist['uba_commissioned_3'], 'uba_commissioned_3'] = ''
# Split multiple commissioning dates as listed in BNetzA
plantlist['commissioned_1'] = pd.to_numeric(
plantlist['commissioned'].str.extract('(\d\d\d\d)', expand=False),
errors='coerce')
plantlist.loc[plantlist['commissioned_1'].isnull(), 'commissioned_1'] = pd.to_numeric(
plantlist['commissioned'].str.extract('(\d\d\d\d).+[\w]', expand=False),
errors='coerce').loc[plantlist['commissioned_1'].isnull()]
plantlist['commissioned_2'] = pd.to_numeric(
plantlist['commissioned'].str.extract('[\w].+(\d\d\d\d).+[\w]', expand=False),
errors='coerce')
plantlist['commissioned_3'] = pd.to_numeric(
plantlist['commissioned'].str.extract('[\w].+(\d\d\d\d)', expand=False),
errors='coerce')
# Show plantlist
plantlist[plantlist['status'] == 'shutdown']
Explanation: 7.4 Add Columns for shutdown and retrofit
Extract the year in which plants were shut down or retrofitted, using regular expressions.
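To see what the extraction patterns return, here is a quick check on made-up example strings (not actual plant data):
# Quick check of the year-extraction patterns on made-up strings.
import pandas as pd
samples = pd.Series(['Endgültig Stillgelegt 2014', '1972 (2009)', '1985'])
# last four-digit year preceded by other text (used for shutdown/retrofit years)
print(pd.to_numeric(samples.str.extract('[\w].+(\d\d\d\d)', expand=False), errors='coerce').tolist())
# first four-digit year in the string (used for commissioning years)
print(pd.to_numeric(samples.str.extract('(\d\d\d\d)', expand=False), errors='coerce').tolist())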
End of explanation
plantlist['capacity_float'] = pd.to_numeric(
plantlist['capacity'],
errors='coerce')
plantlist['commissioned_float'] = pd.to_numeric(
plantlist[['commissioned','commissioned_1','commissioned_2','commissioned_3']].max(axis=1),
errors='coerce')
plantlist['retrofit_float'] = pd.to_numeric(
plantlist['retrofit'],
errors='coerce')
plantlist.head()
Explanation: 7.5 Convert input columns to usable data types
End of explanation
# Split uba_technology information into technology (GT, CC,...) and type (HKW, IKW, ...)
plantlist['technology'] = plantlist['uba_technology']
plantlist['type'] = plantlist['uba_technology']
dict_technology = {
'GT': 'Gas turbine',
'GT / DT': 'Combined cycle',
'DT': 'Steam turbine',
'GuD': 'Combined cycle',
'DKW': 'Steam turbine',
'LWK': 'Run-of-river',
'PSW': 'Pumped storage',
'DWR': 'Steam turbine', #Pressurized water reactor
'G/AK': 'Gas turbine', #GT with heat recovery
'SWR': 'Steam turbine', #boiling water reactor
'SWK': 'Reservoir', #storage power plant
'SSA': 'Steam turbine', #bus bar
'HKW (DT)': 'Steam turbine',
'HKW / GuD': 'Combined cycle',
'GuD / HKW': 'Combined cycle',
'IKW / GuD': 'Combined cycle',
'IKW /GuD': 'Combined cycle',
'GuD / IKW': 'Combined cycle',
'HKW / SSA': 'Steam turbine',
'IKW / SSA': 'Steam turbine',
'SSA / IKW': 'Steam turbine',
'HKW': '',
'IKW': '',
'IKW / HKW': '',
'HKW / IKW': '',
'IKW / HKW / GuD' : 'Combined cycle',
'HKW / GuD / IKW' : 'Combined cycle',
'GuD / HKW / IKW': 'Combined cycle',
}
plantlist['technology'].replace(dict_technology, inplace=True)
plantlist['technology'].unique()
# Check if all technologies have been translated
for technology in plantlist['technology'].unique():
if (technology not in dict_technology.values()) & (str(technology) != 'nan'):
logger.error('Untranslated technology: ' + str(technology))
# Translate types
dict_type = {
'HKW': 'CHP', #thermal power plant,
'HKW (DT)': 'CHP',
'IKW': 'IPP', #industrial power plant
'HKW / GuD': 'CHP',
'GuD / HKW': 'CHP',
'IKW / GuD': 'IPP',
'IKW /GuD': 'IPP',
'GuD / IKW': 'IPP',
'IKW / SSA': 'IPP',
'HKW / SSA': 'CHP',
'IKW / HKW': 'CHP',
'HKW / IKW': 'CHP',
'SSA / IKW': 'IPP',
'GT': '',
'GT / DT': '',
'DT': '',
'GuD': '',
'DKW': '',
'LWK': '',
'PSW': '',
'DWR': '', #Pressurized water reactor
'G/AK': 'CHP', #GT with heat recovery
'SWR': '', #boiling water reactor
'SWK': '', #storage power plant
'SSA': '',
'WEA': '',
'IKW / HKW / GuD' : 'CHP',
'HKW / GuD / IKW': 'CHP',
'GuD / HKW / IKW': 'CHP',
}
plantlist['type'].replace(dict_type, inplace=True)
plantlist['type'].unique()
# Check if all types have been translated
for type in plantlist['type'].unique():
if (type not in dict_type.values()) & (str(type) != 'nan'):
logger.error('Untranslated type: ' + str(type))
Explanation: 7.6 Identify generation technology
7.6.1 Process technology information from UBA list
Split uba_technology information into technology (GT, CC,...) and type (HKW, IKW, ...)
Abbreviation: Meaning
BoA: Lignite power plant with optimized plant technology (Braunkohlenkraftwerk mit optimierter Anlagentechnik)
DKW: Steam power plant (Dampfkraftwerk)
DT: Steam turbine (Dampfturbine)
DWR: Pressurized water reactor (Druckwasserreaktor)
G/AK: Gas turbine with heat recovery boiler (Gasturbine mit Abhitzekessel)
GT: Gas turbine (Gasturbine)
GuD: Combined-cycle gas and steam turbine plant (Gas- und Dampfturbinenkraftwerk)
HEL: Light fuel oil (Leichtes Heizöl)
HKW: Combined heat and power plant (Heizkraftwerk)
HS: Heavy fuel oil (Schweres Heizöl)
IKW: Industrial power plant (Industriekraftwerk)
LWK: Run-of-river hydro power plant (Laufwasserkraftwerk)
PSW: Pumped-storage power plant (Pumpspeicherkraftwerk)
PV: Photovoltaics (Photovoltaik)
SSA: Busbar installation (Sammelschienenanlage)
SWK: Reservoir hydro power plant (Speicherwasserkraftwerk)
SWR: Boiling water reactor (Siedewasserreaktor)
WEA: Wind turbine (Windenergieanlage)
Wind (L): Wind onshore (Land)
Wind (O): Wind offshore
End of explanation
# Set technology based on fuels
plantlist.loc[(plantlist['fuel'] == 'Nuclear') & ((plantlist['technology'] == '') | (
plantlist['technology'].isnull())), 'technology'] = 'Steam turbine'
plantlist.loc[(plantlist['fuel'] == 'Lignite') & ((plantlist['technology'] == '') | (
plantlist['technology'].isnull())), 'technology'] = 'Steam turbine'
plantlist.loc[(plantlist['fuel'] == 'Hard Coal') & ((plantlist['technology'] == '') | (
plantlist['technology'].isnull())), 'technology'] = 'Steam turbine'
plantlist.loc[(plantlist['fuel'] == 'Hard coal') & ((plantlist['technology'] == '') | (
plantlist['technology'].isnull())), 'technology'] = 'Steam turbine'
plantlist.loc[(plantlist['fuel'] == 'Hydro') & ((plantlist['technology'] == '') | (
plantlist['technology'].isnull())), 'technology'] = 'Run-of-river'
plantlist.loc[(plantlist['fuel'] == 'Hydro PSP') &
((plantlist['technology'] == '') | (plantlist['technology'].isnull())),
'technology'] = 'Pumped storage'
plantlist.loc[(plantlist['fuel'] == 'Hydro PSP'), 'fuel'] = 'Hydro'
plantlist.loc[(plantlist['fuel'] == 'Hydro Reservoir') &
((plantlist['technology'] == '') | (plantlist['technology'].isnull())),
'technology'] = 'RES'
plantlist.loc[(plantlist['fuel'] == 'Hydro Reservoir'), 'fuel'] = 'Hydro'
plantlist.loc[(plantlist['fuel'] == 'reservoir') & ((plantlist['technology'] == '') |
(plantlist['technology'].isnull())),
'technology'] = 'RES'
# Set technology based on name and block information combined with fuels (e.g. combined-cycle, gas turbine)
# Define technology CC as combination of GT and DT
plantlist.loc[((plantlist['name'].str.contains("GT")) | (plantlist['block'].str.contains("GT")))
& ((plantlist['name'].str.contains("DT")) | (plantlist['block'].str.contains("DT")))
& ((plantlist['technology'] == '') | (plantlist['technology'].isnull())), 'technology'] = 'Combined cycle'
# Define technology CC if specified as GuD
plantlist.loc[((plantlist['name'].str.contains("GuD")) | (plantlist['block'].str.contains("GuD"))
| (plantlist['name'].str.contains("GUD")) | (plantlist['name'].str.contains("GUD")))
& ((plantlist['technology'] == '') | (plantlist['technology'].isnull())), 'technology'] = 'Combined cycle'
# Define technology GT
plantlist.loc[((plantlist['name'].str.contains("GT"))
| (plantlist['block'].str.contains("GT"))
| (plantlist['name'].str.contains("Gasturbine"))
| (plantlist['block'].str.contains("Gasturbine")))
& ((plantlist['technology'] == '') | (plantlist['technology'].isnull())), 'technology'] = 'Gas turbine'
# Define technology ST
plantlist.loc[((plantlist['name'].str.contains("DT"))
| (plantlist['block'].str.contains("DT"))
| (plantlist['name'].str.contains("Dampfturbine"))
| (plantlist['block'].str.contains("Dampfturbine"))
| (plantlist['name'].str.contains("Dampfkraftwerk"))
| (plantlist['block'].str.contains("Dampfkraftwerk"))
| (plantlist['name'].str.contains("DKW"))
| (plantlist['block'].str.contains("DKW")))
& ((plantlist['technology'] == '') | (plantlist['technology'].isnull())), 'technology'] = 'Steam turbine'
# Define technology CB
plantlist.loc[((plantlist['name'].str.contains("motor"))
| (plantlist['block'].str.contains("motor"))
| (plantlist['name'].str.contains("Motor"))
| (plantlist['block'].str.contains("Motor")))
& ((plantlist['technology'] == '') | (plantlist['technology'].isnull())), 'technology'] = 'Combustion Engine'
# Identify stroage technologies
plantlist.loc[(plantlist['fuel'] == 'Other fuels') & ((plantlist[
'fuel_basis'] == 'Sonstige Speichertechnologien') & (plantlist['technology'].isnull())), 'technology'] = 'Storage technologies'
# Set technology ST for all technologies which could not be identified
plantlist.loc[((plantlist['technology'] == '')
| (plantlist['technology'].isnull())), 'technology'] = 'Steam turbine'
Explanation: 7.6.2 Identify generation technology based on BNetzA information
End of explanation
# Add country Code
plantlist['country_code'] = plantlist['state']
dict_state_country = {
'Brandenburg': 'DE',
'Baden-Württemberg': 'DE',
'Niedersachsen': 'DE',
'Bayern': 'DE',
'Mecklenburg-Vorpommern': 'DE',
'Sachsen-Anhalt': 'DE',
'Hessen': 'DE',
'Nordrhein-Westfalen': 'DE',
'Berlin': 'DE',
'Saarland': 'DE',
'Thüringen': 'DE',
'Sachsen': 'DE',
'Bremen': 'DE',
'Schleswig-Holstein': 'DE',
'Hamburg': 'DE',
'Rheinland-Pfalz': 'DE',
'Österreich': 'AT',
'Luxemburg': 'LU',
'Schweiz': 'CH',
}
plantlist['country_code'].replace(dict_state_country, inplace=True)
# Check if all types have been translated
for plant_type in plantlist['country_code'].unique():
if (plant_type not in dict_state_country.values()) & (str(plant_type) != 'nan'):
logger.error('Untranslated type: ' + str(plant_type))
Explanation: 7.7 Add country code
Some power plants are in Austria, Switzerland, or Luxembourg. As they are sometimes part of the German electricity system, they are included here.
End of explanation
# Efficiencies
data_efficiencies_bnetza = pd.read_csv(os.path.join('input/data/DE', 'input_efficiency_de.csv'),
sep=',', # CSV field separator, default is ','
decimal='.', # Decimal separator, default is '.')
index_col='id',
encoding='utf8')
data_efficiencies_bnetza['efficiency_net'] = pd.to_numeric(
data_efficiencies_bnetza['efficiency_net'],
errors='coerce')
data_efficiencies_bnetza = data_efficiencies_bnetza.dropna(subset=['efficiency_net'])
plantlist = pd.merge(
plantlist,
data_efficiencies_bnetza,
left_index=True,
right_index=True,
how='left')
plantlist.head()
Explanation: 7.8 Add efficiency data
7.8.1 Efficiencies from research
This section adds efficiency data. These values have been researched by hand.
The source of each value is given in the column "efficiency_source".
Additionally, each source has been rated from A (e.g. the power plant operator's website) to C (e.g. an article in a local newspaper).
7.8.1.1 Import data
End of explanation
plantlist.iloc[:,6:-1].head()
plantlist_for_efficiency_analysis = plantlist
plantlist_for_efficiency_analysis = plantlist_for_efficiency_analysis.dropna(subset=['efficiency_net'])
# Plot efficiencies for lignite, coal, oil, and natural gas
fuel_for_plot = ['Lignite', 'Hard coal', 'Oil', 'Natural gas']
col_dict = {'Lignite': 'brown', 'Hard coal': 'grey', 'Oil': 'k', 'Natural gas': 'orange'}
fig, ax = plt.subplots(figsize=(16,8))
for fuels in fuel_for_plot:
sub_df = plantlist_for_efficiency_analysis[plantlist_for_efficiency_analysis.fuel == fuels]
if len(sub_df['efficiency_net']) > 10:
x = np.array(sub_df['commissioned_float'].astype(int))
fit = np.polyfit(x, sub_df['efficiency_net'], deg=1)
ax.plot(x, fit[0]*x + fit[1], color=col_dict[fuels])
sub_df.plot(ax=ax,
kind='scatter',
x='commissioned_float',
y='efficiency_net',
c=col_dict[fuels],
label=fuels)
Explanation: 7.8.1.2 Plot efficiencies by year of commissioning
End of explanation
data_efficiencies_literature = pd.read_csv(os.path.join('input/data/DE','input_efficiency_literature_by_fuel_technology.csv'),
sep=',', # CSV field separator, default is ','
decimal='.', # Decimal separator, default is '.')
encoding='utf8')
#data_efficiencies_literature['technology'] = data_efficiencies_literature['technology'].str.upper()
data_efficiencies_literature = data_efficiencies_literature.set_index(['fuel','technology'])
data_efficiencies_literature
Explanation: 7.8.2 Efficiencies from literature
Jonas Egerer, Clemens Gerbaulet, Richard Ihlenburg, Friedrich Kunz, Benjamin Reinhard, Christian von Hirschhausen, Alexander Weber, Jens Weibezahn (2014): Electricity Sector Data for Policy-Relevant Modeling: Data Documentation and Applications to the German and European Electricity Markets. DIW Data Documentation 72, Berlin, Germany.
7.8.2.1 Import data
For each energy source - technology combination two values are read, to be applied as a linear approximation based on the year of commissioning. Therefore, the efficiency is made up of the efficiency_intercept (the efficiency at "year zero") plus the efficiency_slope multiplied by the year of commissioning.
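As a worked example with invented coefficients (the real values come from the CSV read above): an intercept of -5.0 and a slope of 0.0028 give a plant commissioned in 2010 an estimated efficiency of roughly 0.628, i.e. about 63 %.
# Worked example (invented coefficients; the real ones are read from the CSV).
efficiency_intercept, efficiency_slope, commissioned = -5.0, 0.0028, 2010
print(efficiency_intercept + efficiency_slope * commissioned)  # ~0.628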
End of explanation
plantlist = plantlist.join(data_efficiencies_literature,on=['fuel','technology'])
plantlist['efficiency_literature'] = plantlist['efficiency_intercept'] + plantlist['efficiency_slope']*plantlist[['commissioned_float','retrofit_float']].max(axis=1)
plantlist.head()
Explanation: 7.8.2.2 Apply efficiency approximation from literature
End of explanation
data_plant_locations = pd.read_csv(os.path.join('input/data/DE','input_plant_locations_de.csv'),
sep=',', # CSV field separator, default is ','
decimal='.', # Decimal separator, default is '.')
encoding='utf8')
data_plant_locations = data_plant_locations.set_index('id')
data_plant_locations['lat'] = pd.to_numeric(data_plant_locations['lat'],
errors='coerce')
data_plant_locations['lon'] = pd.to_numeric(data_plant_locations['lon'],
errors='coerce')
plantlist = pd.merge(plantlist,
data_plant_locations,
left_index=True,
right_index=True,
how='left')
plantlist.head()
plantlist[plantlist.lat.isnull()]
Explanation: 7.9 Add geodata and EIC Codes
The locations of power plants have been researched manually, these are now added to the output. Checking was done visually using satellite imagery and other mapping material.
End of explanation
# read energy source level allocation table
energy_source_level_allocator = pd.read_csv(os.path.join('input', 'energy_source_level_allocator.csv'),
sep=',', # CSV field separator, default is ','
decimal='.', # Decimal separator, default is '.')
index_col='fuel',
encoding='utf8')
plantlist = pd.merge(energy_source_level_allocator, plantlist,
left_index = True,
right_on='fuel',
how='outer')
plantlist
Explanation: 7.10 Allocate energy source levels
To enable a more readable output, the column 'fuel' is augmented with additional information called 'energy source level'.
End of explanation
# Merge uba_name_aggregate and uba_name
plantlist.loc[plantlist['uba_name_aggregate'].isnull(), 'uba_name_aggregate'] = plantlist['uba_name'][plantlist['uba_name_aggregate'].isnull()]
# Drop columns not relevant for output
colsToDrop = ['bnetza_id',
'capacity',
'uba_name',
'uba_capacity_original',
'uba_chp_capacity_original',
'uba_city',
'uba_commissioned',
'uba_company',
'uba_company_aggregate',
'uba_fuel',
'uba_postcode',
'uba_state',
'uba_technology',
'uba_technology_aggregate',
'retrofit',
'uba_commissioned_1',
'uba_commissioned_2',
'uba_commissioned_3',
'commissioned_1',
'commissioned_2',
'commissioned_3',
'fuel_basis',
'fuel_multiple1',
'fuel_multiple2',
'efficiency_gross',
'efficiency_intercept',
'efficiency_slope',
'source_type',
'date'
]
plantlist = plantlist.drop(colsToDrop, axis=1)
# Rename columns
plantlist = plantlist.rename(columns={'commissioned': 'commissioned_original',
'commissioned_float': 'commissioned',
'retrofit_float': 'retrofit',
'capacity_float': 'capacity_net_bnetza',
'uba_capacity': 'capacity_gross_uba',
'uba_chp_capacity': 'chp_capacity_uba',
'efficiency_net': 'efficiency_data',
'efficiency_literature': 'efficiency_estimate',
'uba_name_aggregate': 'name_uba',
'name': 'name_bnetza',
'block': 'block_bnetza',
'country_code': 'country',
'fuel': 'energy_source',
})
# Sort columns
columns_sorted = [
'name_bnetza',
'block_bnetza',
'name_uba',
'company',
'street',
'postcode',
'city',
'state',
'country',
'capacity_net_bnetza',
'capacity_gross_uba',
'energy_source',
'technology',
'chp',
'chp_capacity_uba',
'commissioned',
'commissioned_original',
'retrofit',
'shutdown',
'status',
'type',
'lat',
'lon',
'eic_code_plant',
'eic_code_block',
'efficiency_data',
'efficiency_source',
'efficiency_estimate',
'energy_source_level_1',
'energy_source_level_2',
'energy_source_level_3',
'eeg',
'network_node',
'voltage',
'network_operator',
'merge_comment',
'comment']
plantlist = plantlist.reindex(columns=columns_sorted)
plantlist.head()
Explanation: 8. Define final output
End of explanation
# Round capacity values as well as the efficiency estimate to five decimals.
plantlist.capacity_net_bnetza = plantlist.capacity_net_bnetza.round(decimals=5)
plantlist.capacity_gross_uba = plantlist.capacity_gross_uba.round(decimals=5)
plantlist.efficiency_estimate = plantlist.efficiency_estimate.round(decimals=5)
Explanation: 8.1 Round values
End of explanation
pivot_status_capacity = pd.pivot_table(
plantlist,
values='capacity_net_bnetza',
columns='status',
index='energy_source',
aggfunc=np.sum
)
pivot_status_capacity.sort_values(by='operating', inplace=True, ascending=0)
pivot_status_capacity_plot=pivot_status_capacity.plot(kind='barh',
stacked=True,
legend=True,
figsize=(12, 6))
pivot_status_capacity_plot.set_xlabel("MW")
pivot_status_capacity_plot
Explanation: 8.2 Verification
8.2.1 Capacities by plant status
End of explanation
plantlist_filtered = plantlist
pivot_age_capacity = pd.pivot_table(
plantlist_filtered,
values='capacity_net_bnetza',
columns='energy_source',
index='commissioned',
aggfunc=np.sum,
dropna=True
)
pivot_age_capacity_plot=pivot_age_capacity.plot(kind='bar',
stacked=True,
legend=True,
figsize=(17, 10))
pivot_age_capacity_plot.set_ylabel("MW")
xaxis_labels = pivot_age_capacity.index.astype(int)
pivot_age_capacity_plot.set_xticklabels(xaxis_labels)
pivot_age_capacity_plot
Explanation: 8.2.2 Power plant age
End of explanation
plantlist_for_plot = plantlist.copy(deep=True)
plantlist_for_plot['capacity_float'] = pd.to_numeric(plantlist_for_plot['capacity_net_bnetza'],
errors='coerce')
plantlist_for_plot['commissioned_float'] = pd.to_numeric(plantlist_for_plot['commissioned'],
errors='coerce')
age_capacity_plot = plantlist_for_plot.plot(kind='scatter', x='commissioned_float', y='capacity_float', figsize=(17, 10))
age_capacity_plot.set_xlabel("commissioned")
age_capacity_plot.set_ylabel("MW")
age_capacity_plot
Explanation: 8.2.3 Block size vs year of commissioning
This chart is useful for spotting outliers in commissioning years and block sizes.
In theory, there should be no unexpected values, e.g. all commissioning years should be greater than 1900.
Block sizes above 2000 MW are also unlikely.
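The same idea can also be checked numerically (our own addition, using the helper columns created above):
# Count implausible entries directly instead of reading them off the chart.
print("Commissioned before 1900:", (plantlist_for_plot['commissioned_float'] < 1900).sum())
print("Blocks above 2000 MW:", (plantlist_for_plot['capacity_float'] > 2000).sum())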
End of explanation
plantlist[plantlist.capacity_net_bnetza == 0]
Explanation: 8.3 Logical checks
8.3.1 Every power plant needs a capacity
List all entries with zero capacity.
End of explanation
# Show all plants with commissioning dates below 1900
plantlist[plantlist['commissioned'] <= 1900]
# Show all Plants with invalid commisioning dates
plantlist[plantlist['commissioned'].isnull()]
Explanation: 8.3.2 Commissioning Dates
End of explanation
# TODO: improve this comparison, it creates many false positives
capacitycomparison = pd.DataFrame(plantlist.capacity_net_bnetza / plantlist.capacity_gross_uba)
capacitycomparison['Name'] = plantlist.name_bnetza
capacitycomparison['Block'] = plantlist.block_bnetza
capacitycomparison['BnetzaCapacity'] = plantlist.capacity_net_bnetza
capacitycomparison['UBACapacity'] = plantlist.capacity_gross_uba
capacitycomparison.dropna(inplace=True)
capacitycomparison.sort_values(by=0)
Explanation: 8.3.3 Compare UBA and BNetzA data
End of explanation
output_path = 'output'
plantlist.to_csv(
os.path.join(output_path, 'conventional_power_plants_DE.csv'),
encoding='utf-8', index_label='id'
)
plantlist.to_excel(
os.path.join(output_path, 'conventional_power_plants_DE.xlsx'),
sheet_name='plants', index_label='id'
)
plantlist.to_sql(
'conventional_power_plants_DE',
sqlite3.connect(os.path.join(output_path ,'conventional_power_plants.sqlite')),
if_exists="replace", index_label='id'
)
Explanation: 9. Result export
Write the results to file
End of explanation |
605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
USDA Food Data - Preliminary Analysis
USDA Food Data is obtained from a consolidated dataset published by the Open Food Facts organization (https
Step1: Preliminary look at the USDA data
Step2: Quick look at a few of the rows
Each row contains fields that specify the value for a given nutrient. Note that only those fields with valid values are populated. The others are empty.
Step3: Quick look at ingredients
Ingredients are not broken down similar to nutrients into separate fields. Rather, all ingredients are grouped together into a single line of text.
Step4: In this step, we convert the ingredients text into a format that can be vectorized.
Step5: Cleaning up the dataset
We now look at the available data in the dataset and look for possible issues with the data that could impact our analysis.
Notice that several entries are not fully populated with all available nutrition fields.
Going by the results, we can limit the analysis to columns with more than 100,000 non-NaN values, so that we avoid working with columns that are not sufficiently populated.
Step6: Looking for similar products based on ingredients
This section attempts to use item similarity to look for similar products based on ingredients present. We vectorize all ingredients and use the resulting vector to look for similar items. | Python Code:
# load pre-requisite imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
from gensim import corpora, models, similarities
# load world food data into a pandas dataframe
world_food_facts =pd.read_csv("../w209finalproject_data/data/en.openfoodfacts.org.products.tsv", sep='\t',low_memory=False)
# extract USDA data from world data
usda_import = world_food_facts[world_food_facts.creator=="usda-ndb-import"]
# save the usda data to a csv file
usda_import.to_csv("../w209finalproject_data/data/usda_imports_v2.csv")
Explanation: USDA Food Data - Preliminary Analysis
USDA Food Data is obtained from a consolidated dataset published by the Open Food Facts organization (https://world.openfoodfacts.org/) and made available on the Kaggle website (https://www.kaggle.com/openfoodfacts/world-food-facts).
Open Food Facts is a free, open, collaborative database of food products from around the world, with ingredients, allergens, nutrition facts and all the tidbits of information we can find on product labels (source: https://www.kaggle.com/openfoodfacts/world-food-facts).
Link to the available data can be found here - https://www.kaggle.com/openfoodfacts/world-food-facts/downloads/en.openfoodfacts.org.products.tsv
For the purpose of our analysis we will only be looking at USDA data and not data sourced from other countries since the USDA data appears to be the dataset that is well populated with values.
Loading the data
End of explanation
# Examining available fields
print("Number of records:",len(usda_import))
print("Number of columns:",len(list(usda_import)))
print("\nField Names:")
list(usda_import)
len(usda_import)
Explanation: Preliminary look at the USDA data
End of explanation
usda_import_subset = usda_import.head(1)
print "Code:",usda_import_subset['code'][1]
print "Product Name:",usda_import_subset['product_name'][1]
print "Ingredients:",usda_import_subset['ingredients_text'][1]
print "Sugar 100g",usda_import_subset['sugars_100g'][1]
print "Vitamin A 100g",usda_import_subset['vitamin-a_100g'][1]
Explanation: Quick look at a few of the rows
Each row contains fields that specify the value for a given nutrient. Note that only those fields with valid values are populated. The others are empty.
End of explanation
usda_import['ingredients_text'].head(5)
Explanation: Quick look at ingredients
Ingredients are not broken down similar to nutrients into separate fields. Rather, all ingredients are grouped together into a single line of text.
End of explanation
# Extracting ingredients for a particular product
pd.set_option('display.max_rows', 600)
pd.set_option('display.max_columns', 600)
print "Vectorizable ingredients text"
for x in range(3):
ingredients = re.split(',|\(|\)',usda_import['ingredients_text'].iloc[x])
ingredients = [w.strip().replace(' ','-') for w in ingredients]
print(' '.join(ingredients))
Explanation: In this step, we convert the ingredients text into a format that can be vectorized.
End of explanation
# Looking for columns that are not sufficiently populated
# display count of all rows
print("Total rows in USDA dataset are:",len(usda_import))
# display count of all non-NAN entries in each column
print("\nCount of non-NaN values in each column")
print(usda_import.count().sort_values(ascending=False))
Explanation: Cleaning up the dataset
We now look at the available data in the dataset and look for possible issues with the data that could impact our analysis.
Notice that several entries are not fully populated with all available nutrition fields.
Going by the results, we can limit the analysis to columns with more than 100,000 non-NaN values, so that we avoid working with columns that are not sufficiently populated.
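A minimal sketch (our own addition; the 100,000 cut-off is taken from the sentence above) of how such a filter could be applied:
# Sketch: keep only the columns that have more than 100,000 non-NaN values.
well_populated = usda_import.columns[usda_import.count() > 100000]
usda_subset = usda_import[well_populated]
print(len(well_populated), "columns retained")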
End of explanation
# load the subsample USDA data
#usda_sample_data =pd.read_csv("./data/usda_imports_20k.csv", sep=',',low_memory=False)
#usda_sample_data =pd.read_csv("./data/usda_imports_v2_1000_hdr.csv", sep=',',low_memory=False)
usda_sample_data =pd.read_csv("./data/usda_imports_v2.csv", sep=',',low_memory=False)
# add a new column that includes a modified version of ingredients list that can be vectorized
ingredients_list=[]
index = 0
for x in range(len(usda_sample_data)):
str_to_split = usda_import['ingredients_text'].iloc[x]
try:
ingredients = re.split(',|\(|\)|\[|\]',str_to_split)
except:
ingredients = re.split(',|\(|\)|\[|\]',"None")
ingredients = [w.strip().replace(' ','-') for w in ingredients]
ingredients_str = ' '.join(ingredients)
ingredients_list.append(ingredients_str)
index+=1
# add the new column to the dataframe
usda_sample_data['ingredients_list'] = ingredients_list
print(usda_sample_data['ingredients_list'])
## Generate a word cloud for the ingredients
# SK-learn libraries for feature extraction from text.
from sklearn.feature_extraction.text import *
# create a new column using a modified version of ingredients list that can be vectorized
vectorizer = CountVectorizer()
corpus_data=usda_sample_data['ingredients_list']
count_matrix = vectorizer.fit_transform(corpus_data)
# display the features/tokens
all_feature_names = vectorizer.get_feature_names()
print(" ".join(list(all_feature_names[:50])))
%matplotlib inline
# generate wordcloud
from os import path
from scipy.misc.pilutil import imread
import matplotlib.pyplot as plt
import random
from wordcloud import WordCloud, STOPWORDS
wordcloud = WordCloud(font_path='/Library/Fonts/Verdana.ttf',
relative_scaling = 1.0,
stopwords = 'to of the ,'
).generate("".join(usda_sample_data['ingredients_list']))
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# remove common words and tokenize the ingredients_list values
documents = usda_sample_data['ingredients_list']
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
# remove words that appear only once
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1]
for text in texts]
# display first 10 entries
from pprint import pprint # pretty-printer
pprint(texts[:10])
# generate and persist the dictionary
dictionary = corpora.Dictionary(texts)
dictionary.save('./data/ingredients.dict') # store the dictionary, for future reference
# generate and persist the corpus
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('./data/ingredients.mm', corpus) # store to disk, for later use
print(corpus[:10])
# generate and persist the index
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=-1)
index = similarities.MatrixSimilarity(lsi[corpus]) # transform corpus to LSI space and index it
index.save('./data/ingredients.index')
# load the dictionary and matrix representation of similarity and the index
dictionary = corpora.Dictionary.load('./data/ingredients.dict')
corpus = corpora.MmCorpus('./data/ingredients.mm')
# load the index
index = similarities.MatrixSimilarity.load('./data/ingredients.index')
# convert query to vector
max_count=3
def displaySimilarProducts(query):
vec_bow = dictionary.doc2bow(query.lower().split())
vec_lsi = lsi[vec_bow] # convert the query to LSI space
#print(vec_lsi)
sims = index[vec_lsi]
#print(list(enumerate(sims)))
print "\nQuery String:",query
sims_sorted = sorted(enumerate(sims), key=lambda item: -item[1])
#print(sims_sorted)
count=0
print("Top 3 matches:")
for sim in sims_sorted:
print "\nCode:",usda_sample_data['code'][sim[0]]
print "Product Name:",usda_sample_data['product_name'][sim[0]]
print "Text:",usda_sample_data['ingredients_list'][sim[0]]
print "Match:",sim[1]
if count==max_count-1:
break
else:
count+=1
query = input("Enter search text:")
displaySimilarProducts(query)
Explanation: Looking for similar products based on ingredients
This section attempts to use item similarity to look for similar products based on ingredients present. We vectorize all ingredients and use the resulting vector to look for similar items.
End of explanation |
606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tinkering with Keras
The goal of this notebook is to store useful insights made through my learning process of Keras. Since I will be closely working with a colleague who uses Theano, the idea is to try to implement all the functionalities to work seamlessly with both backends.
Below are shown some of the functionalities that I found hard to figure out from the official documentation.
Evaluate variables.
Step1: Feed placeholders and evaluate functions.
Step2: Compute custom gradients.
Step3: Pass a custom gradient to the optimizer.
Step4: Seamlessly pass a custom gradient to the optimizer.
Step5: Concatenate two models and propagate the gradient.
The idea is to simply do
Step6: Optimize a function by overloading get_gradients.
Step7: Optimize a function using get_updates only. | Python Code:
%reset -f
import keras.backend as K
x = K.variable(42.)
# Solution 1:
sess = K.get_session()
print sess.run(x)
# Solution 2 (seamless):
print K.eval(x)
Explanation: Tinkering with Keras
The goal of this notebook is to store useful insights made through my learning process of Keras. Since I will be closely working with a colleague who uses Theano, the idea is to try to implement all the functionalities to work seamlessly with both backends.
Below are shown some of the functionalities that I found hard to figure out from the official documentation.
Evaluate variables.
End of explanation
%reset -f
import keras.backend as K
import numpy as np
x = K.placeholder(ndim=1)
y = 2 * x
feed = np.array([2])
# Solution 1:
sess = K.get_session()
print sess.run(y, { x : feed})
# Solution 2 (seamless):
f = K.function([x], [y])
print f([feed])[0]
Explanation: Feed placeholders and evaluate functions.
End of explanation
%reset -f
import numpy as np
import keras.backend as K
from keras.layers import Input, Dense, Activation
from keras.models import Model
from keras import optimizers, losses
# Define the model and generate the data.
print "Our model is y = 3 * x1 + 2 * x2"
num_data = 1000
x_train = np.random.rand(num_data, 2)
y_train = x_train * np.matrix([[3], [2]])
print "Data generated."
x = Input(shape=(2,), name='x')
y = Dense(1, activation='linear', use_bias=False)(x)
model = Model(inputs=x, outputs=y)
opt = optimizers.Adam(0.1)
loss = losses.mean_squared_error
model.compile(opt, loss)
# Retreive the handle to trainable weights, which should be equal to the dy_dx.
w = model.trainable_weights[0]
dy_dx = K.gradients(y, x)
f = K.function(inputs=[x], outputs=dy_dx)
fx = f([np.ones([1, 2])])[0] # Input is irrelevant, since we are computing the gradient of a linear function.
print "Before training:"
print "dy_dx = ", fx, ", w = ", K.eval(w).T
model.fit(x_train, y_train, epochs=500, batch_size=num_data, verbose=0)
print "\nAfter training:"
print "dy_dx = ", f([np.ones([1, 2])])[0], ", w = ", K.eval(w).T
# Calculate the gradient manually.
y_true = K.placeholder(shape=(None, 1))
dJ_dw = K.gradients(loss(y, y_true), w)
f = K.function(inputs=[x, y_true], outputs=dJ_dw)
print "\ndJ_dw (should be around 0):"
print f([x_train, y_train])[0].T
Explanation: Compute custom gradients.
End of explanation
%reset -f
import tensorflow as tf # Not seamless.
import numpy as np
import keras.backend as K
from keras.layers import Input, Dense, Activation
from keras.models import Model
from keras import losses
# Define the model and generate the data.
print "Our model is y = 3 * x1 + 2 * x2 + 5"
num_data = 1000
x_train = np.random.rand(num_data, 2)
y_train = x_train * np.matrix([[3], [2]]) + 5
print "Data generated."
x = Input(shape=(2,), name='x')
y = Dense(1, activation='linear')(x)
model = Model(inputs=x, outputs=y)
# Retreive the handle to trainable weights. In this case it is a list of variables [W, b].
w = model.trainable_weights
store_w = {l: l.get_weights() for l in model.layers}
print "Initial weights:", [K.eval(i).T for i in w]
opt = tf.train.AdamOptimizer(0.1)
# Output placeholder is needed for to define the loss.
y_true = K.placeholder(shape=(None, 1))
loss = losses.mean_squared_error(y, y_true)
grads_and_vars = zip(tf.gradients(loss, w), w)
op = opt.apply_gradients(grads_and_vars)
print "\nFirst approach:"
f = K.function(inputs=[x, y_true], outputs=[], updates=[op])
for i in range(1000):
f([x_train, y_train])
print "Optimized weights:", [K.eval(i).T for i in w]
# Restore weights from beginning:
for l in model.layers:
l.set_weights(store_w[l])
print "\nRestored weights:", [K.eval(i).T for i in w]
print "\nSecond approach:"
sess = K.get_session()
sess.run(tf.global_variables_initializer())
for i in range(1000):
sess.run(op, {x : x_train, y_true : y_train})
print "Optimized weights:", [K.eval(i).T for i in w]
Explanation: Pass a custom gradient to the optimizer.
End of explanation
%reset -f
import numpy as np
import keras.backend as K
from keras.layers import Input, Dense, Activation
from keras.models import Model
from keras import optimizers, losses
# Define the model and generate the data.
print "Our model is y = 3 * x1 + 2 * x2 + 5"
num_data = 1000
x_train = np.random.rand(num_data, 2)
y_train = x_train * np.matrix([[3], [2]]) + 5
print "Data generated."
x = Input(shape=(2,), name='x')
y = Dense(1, activation='linear')(x)
model = Model(inputs=x, outputs=y)
print "Initial weights:", [K.eval(i).T for i in model.trainable_weights]
opt = optimizers.Adam(0.1)
y_true = K.placeholder(shape=(None, 1))
loss = losses.mean_squared_error(y, y_true)
updates = opt.get_updates(model.trainable_weights, model.constraints, [loss])
train_step = K.function(inputs=[x, y_true], outputs=[], updates=updates)
for i in range(1000):
train_step([x_train, y_train])
print "\nOptimized weights:", [K.eval(i).T for i in model.trainable_weights]
Explanation: Seamlessly pass a custom gradient to the optimizer.
End of explanation
%reset -f
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model
import numpy as np
x1 = Input(shape=(2,))
y = Lambda(lambda x: x ** 2)(x1)
grads1 = K.gradients(y, x1)
f1 = K.function(inputs=[x1], outputs=grads1)
model1 = Model(inputs=x1, outputs=y)
x_feed = np.random.randn(2, 2)
print "1. Model: y1 = x ** 2"
print "x =\n", x_feed
print "y =\n", model1.predict_on_batch(x_feed)
print "dy_dx =\n", f1([x_feed])[0]
print "\n2. Model: y2 = 3 * x"
x2 = Input(shape=(3, ))
y = Lambda(lambda x: 3 * x)(x2)
grads2 = K.gradients(y, x2)
f2 = K.function(inputs=[x2], outputs=grads2)
model2 = Model(inputs=x2, outputs=y)
print "y2 =\n", model1.predict_on_batch(x_feed)
print "\n3. Model: z = y2 o y1"
z = model2(model1.outputs)
in1 = model1.inputs[0]
f3 = K.function(inputs=[in1], outputs=[z])
print "z =\n", f3([x_feed])[0]
grads3 = K.gradients(z, in1)
f4 = K.function(inputs=[in1], outputs=grads3)
print "dz_dx =\n", f4([x_feed])[0]
print "6x =\n", 6 * x_feed
Explanation: Concatenate two models and propagate the gradient.
The idea is to simply do:
in1 = ...
out2 = model2(model1.outputs)
grads = K.gradients(out2, in1)
End of explanation
%reset -f
import keras.backend as K
from keras.layers import Input, Lambda
from keras import optimizers
import numpy as np
print "Goal is to find argmin_x(x^2)"
x0 = 3 * np.ones([1, 1]) # For some reason needed as a two dimensional.
print "x0 = ", x0
x = K.variable(value=x0)
x_in = Input(shape=(1,))
y = Lambda(lambda x: x ** 2)(x_in)
opt = optimizers.Adam(0.1)
def get_gradients(*unused):
return K.gradients(y, x_in)
# This:
opt.get_gradients = get_gradients
updates = opt.get_updates([x], [], [])
# or only: updates = opt.get_updates([x], [], [y])
train_step = K.function(inputs=[x_in], outputs=[], updates=updates)
for i in range(150):
train_step([K.eval(x)])
if i % 10 == 0:
print K.eval(x)
Explanation: Optimize a function by overloading get_gradients.
End of explanation
%reset -f
import keras.backend as K
from keras import optimizers, losses
import numpy as np
print "Goal is to find argmin_x[(y - y_des)^2], where y = x^2"
x0 = -3 * np.ones([1, 1])
y_des = 4 * np.ones([1, 1])
print "x0 = ", x0, " y_des = ", y_des
x = K.variable(value=x0)
y = x ** 2
opt = optimizers.Adam(0.1)
y_var = K.placeholder(ndim=2)
loss = losses.mean_squared_error(y, y_var)
updates = opt.get_updates([x], [], [loss])
train_step = K.function(inputs=[y_var], outputs=[], updates=updates)
for i in range(150):
train_step([y_des])
if i % 10 == 0:
print K.eval(x)
Explanation: Optimize a function using get_updates only.
End of explanation |
607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We're going to start by grabbing the geometry for the Austin community area.
Step1: Now let's get the shootings data.
Step2: Now let's iterate through the shootings, generate shapely points and check to see if they're in the geometry we care about.
Step3: Let's do something similar with homicides. It's exactly the same, in fact, but a few field names are different.
Step4: Now let's see how many homicides we can associate with shootings. We'll say that if the locations are within five meters and the date and time of the shooting is within 10 minutes of the homicide, they're the same incident. | Python Code:
import requests
from shapely.geometry import shape, Point
r = requests.get('https://data.cityofchicago.org/api/geospatial/cauq-8yn6?method=export&format=GeoJSON')
for feature in r.json()['features']:
if feature['properties']['community'] == 'AUSTIN':
austin = feature
poly = shape(austin['geometry'])
Explanation: We're going to start by grabbing the geometry for the Austin community area.
End of explanation
import os
def get_data(table):
r = requests.get('%stable/json/%s' % (os.environ['NEWSROOMDB_URL'], table))
return r.json()
shootings = get_data('shootings')
homicides = get_data('homicides')
Explanation: Now let's get the shootings data.
End of explanation
shootings_ca = []
for row in shootings:
if not row['Geocode Override']:
continue
points = row['Geocode Override'][1:-1].split(',')
if len(points) != 2:
continue
point = Point(float(points[1]), float(points[0]))
row['point'] = point
if poly.contains(point):
shootings_ca.append(row)
print 'Found %d shootings in this community area' % len(shootings_ca)
for f in shootings_ca:
print f['Date'], f['Time'], f['Age'], f['Sex'], f['Shooting Location']
Explanation: Now let's iterate through the shootings, generate shapely points and check to see if they're in the geometry we care about.
End of explanation
from datetime import datetime

homicides_ca = []
years = {}
for row in homicides:
if not row['Geocode Override']:
continue
points = row['Geocode Override'][1:-1].split(',')
if len(points) != 2:
continue
point = Point(float(points[1]), float(points[0]))
row['point'] = point
if poly.contains(point):
homicides_ca.append(row)
print 'Found %d homicides in this community area' % len(homicides_ca)
for f in homicides_ca:
print f['Occ Date'], f['Occ Time'], f['Age'], f['Sex'], f['Address of Occurrence']
if not f['Occ Date']:
continue
dt = datetime.strptime(f['Occ Date'], '%Y-%m-%d')
if dt.year not in years:
years[dt.year] = 0
years[dt.year] += 1
print years
Explanation: Let's do something similar with homicides. It's exactly the same, in fact, but a few field names are different.
End of explanation
import pyproj
from datetime import datetime, timedelta
geod = pyproj.Geod(ellps='WGS84')
associated = []
for homicide in homicides_ca:
if not homicide['Occ Time']:
homicide['Occ Time'] = '00:01'
if not homicide['Occ Date']:
homicide['Occ Date'] = '2000-01-01'
homicide_dt = datetime.strptime('%s %s' % (homicide['Occ Date'], homicide['Occ Time']), '%Y-%m-%d %H:%M')
for shooting in shootings_ca:
if not shooting['Time']:
shooting['Time'] = '00:01'
        if not shooting['Date']:
            shooting['Date'] = '2000-01-01'
shooting_dt = datetime.strptime('%s %s' % (shooting['Date'], shooting['Time']), '%Y-%m-%d %H:%M')
diff = homicide_dt - shooting_dt
        seconds = diff.total_seconds()  # total difference in seconds (600 s = 10 minutes)
if abs(seconds) <= 600:
angle1, angle2, distance = geod.inv(
homicide['point'].x, homicide['point'].y, shooting['point'].x, shooting['point'].y)
if distance < 5:
associated.append((homicide, shooting))
break
print len(associated)
years = {}
for homicide in homicides:
if not homicide['Occ Date']:
continue
dt = datetime.strptime(homicide['Occ Date'], '%Y-%m-%d')
if dt.year not in years:
years[dt.year] = 0
years[dt.year] += 1
print years
from csv import DictWriter
from ftfy import fix_text, guess_bytes
for idx, row in enumerate(shootings_ca):
if 'point' in row.keys():
del row['point']
for key in row:
#print idx, key, row[key]
if type(row[key]) is str:
#print row[key]
row[key] = fix_text(row[key].replace('\xa0', '').decode('utf8'))
for idx, row in enumerate(homicides_ca):
if 'point' in row.keys():
del row['point']
for key in row:
#print idx, key, row[key]
if type(row[key]) is str:
#print row[key]
row[key] = row[key].decode('utf8')
with open('/Users/abrahamepton/Documents/austin_shootings.csv', 'w+') as fh:
writer = DictWriter(fh, sorted(shootings_ca[0].keys()))
writer.writeheader()
for row in shootings_ca:
try:
writer.writerow(row)
except:
print row
with open('/Users/abrahamepton/Documents/austin_homicides.csv', 'w+') as fh:
writer = DictWriter(fh, sorted(homicides_ca[0].keys()))
writer.writeheader()
for row in homicides_ca:
try:
writer.writerow(row)
except:
print row
Explanation: Now let's see how many homicides we can associate with shootings. We'll say that if the locations are within five meters and the date and time of the shooting is within 10 minutes of the homicide, they're the same incident.
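For reference, a small sanity check (invented coordinates near Chicago) of the geodesic helper used for the five-meter test: two points 0.001 degrees of latitude apart are roughly 111 metres apart, confirming that geod.inv returns distances in metres.
# Sanity check of geod.inv (invented coordinates).
angle1, angle2, distance = geod.inv(-87.76, 41.89, -87.76, 41.891)
print(round(distance))  # roughly 111 metres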
End of explanation |
608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bagging meta-estimators
Bagging is short for Bootstrap Aggregating: resample the data (bootstrap) and then combine the models trained on each sample into an ensemble.
Typically, if the target is classification the ensemble votes; if the target is regression the ensemble averages.
In ensemble algorithms, bagging builds several instances of a black-box estimator on random subsets of the original training set and then combines their individual predictions into a final prediction.
The method reduces the variance of the base estimator (e.g. a decision tree) by introducing randomness into the training procedure. In most cases bagging is a very simple way to improve a single model without changing the underlying algorithm. Because it reduces overfitting (variance), it usually works well with strong classifiers and complex models.
There are several bagging variants, differing mainly in how the random training subsets are drawn:
If random subsets of the samples are drawn without replacement, the method is called Pasting.
If the samples are drawn with replacement, it is called Bagging.
If random subsets of the features are drawn, it is called Random Subspaces.
If base estimators are built on subsets of both samples and features, it is called Random Patches.
Another advantage of bagging is that it is naturally easy to parallelise: the base estimators can be trained on several machines at the same time and combined afterwards, which greatly improves efficiency.
Bagging ensembles with sklearn
sklearn provides two interfaces for bagging
Step1: Data preprocessing
Step2: Splitting the dataset
Step3: Training the model
Step4: 随机森林和sklearn中的接口
随机森林是最知名的bagging应用,利用多个随机树实例投票进行预测分类或者求平均做回归预测(cart tree使用基尼系数而非信息熵,因此可以处理连续数据).
sklearn中提供了4个随机森林接口 | Python Code:
import requests
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from sklearn.ensemble import BaggingClassifier
Explanation: Bagging meta-estimators
Bagging is short for Bootstrap Aggregating: resample the data (bootstrap) and then combine the models trained on each sample into an ensemble.
Typically, if the target is classification the ensemble votes; if the target is regression the ensemble averages.
In ensemble algorithms, bagging builds several instances of a black-box estimator on random subsets of the original training set and then combines their individual predictions into a final prediction.
The method reduces the variance of the base estimator (e.g. a decision tree) by introducing randomness into the training procedure. In most cases bagging is a very simple way to improve a single model without changing the underlying algorithm. Because it reduces overfitting (variance), it usually works well with strong classifiers and complex models.
There are several bagging variants, differing mainly in how the random training subsets are drawn:
If random subsets of the samples are drawn without replacement, the method is called Pasting.
If the samples are drawn with replacement, it is called Bagging.
If random subsets of the features are drawn, it is called Random Subspaces.
If base estimators are built on subsets of both samples and features, it is called Random Patches.
Another advantage of bagging is that it is naturally easy to parallelise: the base estimators can be trained on several machines at the same time and combined afterwards, which greatly improves efficiency.
Bagging ensembles with sklearn
sklearn provides two interfaces for bagging:
sklearn.ensemble.BaggingClassifier for ensembles of classifiers
sklearn.ensemble.BaggingRegressor for ensembles of regressors
Their usage is similar; the example below gives a brief demonstration using the iris dataset.
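Before the iris example, a quick sketch (our own parameter choices, not from the original notebook) of how the four subset-drawing variants described above map onto BaggingClassifier arguments in scikit-learn:
# Sketch: the four variants expressed as BaggingClassifier arguments.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
pasting   = BaggingClassifier(DecisionTreeClassifier(), bootstrap=False, max_samples=0.5)
bagging_  = BaggingClassifier(DecisionTreeClassifier(), bootstrap=True, max_samples=0.5)
subspaces = BaggingClassifier(DecisionTreeClassifier(), bootstrap=False, max_features=0.5)
patches   = BaggingClassifier(DecisionTreeClassifier(), max_samples=0.5, max_features=0.5)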
End of explanation
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data").text
row_name = ['sepal_length','sepal_width','petal_length','petal_width','label']
csv_list = csv_content.strip().split("\n")
row_matrix = [line.strip().split(",") for line in csv_list]
dataset = pd.DataFrame(row_matrix,columns=row_name)
encs = {}
encs["feature"] = StandardScaler()
encs["feature"].fit(dataset[row_name[:-1]])
table = pd.DataFrame(encs["feature"].transform(dataset[row_name[:-1]]),columns=row_name[:-1])
encs["label"]=LabelEncoder()
encs["label"].fit(dataset["label"])
table["label"] = encs["label"].transform(dataset["label"])
table[:10]
Explanation: Data preprocessing
End of explanation
train_set,validation_set = train_test_split(table)
Explanation: Dataset splitting
End of explanation
bagging = BaggingClassifier(MLPClassifier(),n_estimators=15,max_samples=0.5, max_features=0.5,n_jobs=4)
bagging.fit(train_set[row_name[:-1]], train_set["label"])
pre = bagging.predict(validation_set[row_name[:-1]])
print(classification_report(validation_set["label"],pre))
Explanation: Model training
End of explanation
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=1000,n_jobs=4)
rfc.fit(train_set[row_name[:-1]], train_set["label"])
pre = rfc.predict(validation_set[row_name[:-1]])
print(classification_report(validation_set["label"],pre))
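# The notes below also describe extremely randomized trees; as an illustrative sketch (not part of
# the original notebook), the extra-trees counterpart can be used in exactly the same way:
from sklearn.ensemble import ExtraTreesClassifier
etc = ExtraTreesClassifier(n_estimators=1000, n_jobs=4)
etc.fit(train_set[row_name[:-1]], train_set["label"])
print(classification_report(validation_set["label"], etc.predict(validation_set[row_name[:-1]])))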
Explanation: Random forests and the sklearn interfaces
Random forests are the best-known application of bagging: many randomized tree instances vote for classification or are averaged for regression (CART trees use the Gini index rather than information entropy, so they can handle continuous data).
sklearn provides 4 random forest interfaces:
Interface|Description
---|---
ensemble.RandomForestClassifier([…])|random forest classifier
ensemble.RandomForestRegressor([…])|random forest regressor
ensemble.ExtraTreesClassifier([…])|extremely randomized trees classifier
ensemble.ExtraTreesRegressor([n_estimators, …])|extremely randomized trees regressor
Extremely randomized trees (extra trees) are a variant of random forests; the principle is almost identical, but the randomness in the way split points are computed goes one step further. The only differences are:
When splitting a node, RF randomly selects a subset of the features from which the splitting feature is chosen, whereas extra trees stay closer to the bagging tradition and consider all features when choosing the splitting feature.
Once the splitting feature is chosen, an RF tree picks the best split value according to criteria such as information gain, the Gini index, or mean squared error, just like a traditional decision tree. Extra trees are more aggressive: they pick a split value at random.
We will not repeat the theory of random forests here; this simply gives an example of using the sklearn interfaces:
End of explanation |
609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Uncovering Configuration and Behavior Drift
When debugging network issues, it is important to understand how the network is different today compared to yesterday or to the desired golden state. A text diff of device configs is one way to do this, but it tends to be too noisy. It will show differences that you may not care about (e.g., changes in whitespace or timestamps), and it is hard to control what is reported. More importantly, text diffs also do not tell you about the impact of change on network behavior, such as if new traffic will be permitted or if some BGP edges will go down.
Batfish parses and builds a vendor-neutral model of device configs and behavior. This model enables you to learn how two snapshots of the network differ exactly along the aspects you care about. The behavior modeling of Batfish also lets you understand the full impact of these changes. This notebook illustrates this capability.
We focus on the following differences across three categories.
Configuration settings
Node-level properties
Interface-level properties
Properties of BGP peers
Structures and references
Structures defined in device configs
Undefined references
Network behavior
BGP adjacencies
ACL lines that treat flows differently
These are examples of different types of changes that you can analyze using Batfish. You may be interested in different aspects of your network, and you should be able to adapt the code below to suit your needs.
Text diff will help with the configuration settings category at best. The other two categories require understanding the structure of the config and the network behavior it induces. To illustrate this point, the text diff of example configs that we use in this notebook is below.
Step1: As we can see, it is difficult to grasp the nature and impact of the change from this output, not to mention that it is impossible to build automation on top of it (e.g., to alert on certain types of differences). We show next how Batfish offers a meaningful view of these differences and their impact on network behavior.
Step2: 1. Configuration settings
Let us first uncover differences in configuration settings, starting with node-level properties.
1A. Node-level properties
We focus on three example properties
Step3: The output above shows all property differences for all nodes. There is a row per node. We see that on as1border1 the domain name has changed, and on as1border2 the set of NTP servers has changed. There is no other difference for any other node for the chosen properties.
This structured output can be transformed and fed into any type of automation, e.g., to alert you when an important property has changed. We can also generate readable drift reports using the helper function we defined above.
Step4: 1B. Interface-level properties
We next check if any interface-level properties have changed. We again focus on three example settings
Step5: We see that the interface GigabitEthernet0/0 on as2border2 has been shut down and its address assignment has been eliminated. We also see that a description has been added for two interfaces on as2core1.
1C. BGP peer properties
We next check properties of BGP peers, focusing on four example properties
Step6: The output shows that a new peer has been defined on as2dept1 with remote IP address 2.34.209.3, and the peer group has changed for an existing peer on as2dist1, which then also led to its import and export policies changing. This correlated change in import/export policies is invisible in the text diff.
2. Structures and references
Batfish models include all structures defined in device configs (e.g., ACLs, prefix-lists) and how they are referenced in other parts of the config. You can use these models to learn if structures have been defined or deleted, which represents a major change in the configuration.
2A. Structures defined in configs
The definedStructures question is the basis for learning about structures defined in the config.
Step7: The output snippet shows how Batfish captures the exact lines in each file where each structure is defined. We can process this information from the two snapshots to produce a report on all differences.
Step8: We can easily see in this output that a BGP peer group named dept2 was newly defined on as2dist1 and a prefix-list named bogons was defined on as2border1. We also see that the peer group named dept was removed from as2dist1. The peer group change is related to what we saw earlier with a peer property changing. This view shows that the entire structure has been removed and defined.
2B. Undefined structure references
References to undefined structures are symptoms of configuration errors. Using the undefinedReferences question, Batfish can help you understand if new undefined references have been introduced or old ones have been cleared.
Step9: The output shows that there are three undefined references in the snapshot. Let us find out which ones were newly introduced relative to the reference.
Step10: We thus see that, of the three undefined references that we saw earlier, two were newly introduced and one exists in both snapshots.
3. Network behavior
We now turn our attention to behavioral differences between network snapshots, starting with changes in BGP adjacencies.
3A. BGP adjacencies
The bgpEdges question of Batfish enables you to learn about all BGP adjacencies in the network, as follows.
Step11: We see that Batfish knows which BGP edges in the snapshot come up and shows key information about them. We can use the answer to this question to learn which edges exist only in the snapshot or only in the reference.
Step12: One BGP edge exists only in the reference, that is, it disappeared in the snapshot. We can find more about this edge, like so
Step13: Do you recall the interface on as2border2 that was shut down earlier? This BGP edge was removed because of that interface shutdown (which you can confirm using the IP address of the interface---10.23.21.2/24).
3B. ACL behavior
To compute the behavior differences between ACLs, we use the compare filters question. It returns pairs of lines, one from the filter definition in each snapshot, that match the same flow(s) but treat them differently (i.e. one permits and the other denies the flow).
Step14: We see that the only difference in the ACL behaviors of the two snapshots is for ACL 105 on as2dist1. Line permit ip host 3.0.3.0 host 255.255.255.0 in the snapshot permits some flows that were being denied in the reference snapshot because of the implicit deny at the end of the ACL. Thus, we have permitted flows that were not being permitted before.
If you were paying attention to the text diff above, the result above may surprise you. The text diff (relevant snippet repeated below) showed that ACL 102 on as2dist1 changed as well. | Python Code:
# Use recursive diff, followed by some pretty printing hacks
!diff -ur networks/drift/reference networks/drift/snapshot | sed -e 's;diff.*snapshot/\(configs.*cfg\);^-----------\1---------;g' | tr '^' '\n' | grep -v networks/drift
Explanation: Uncovering Configuration and Behavior Drift
When debugging network issues, it is important to understand how the network is different today compared to yesterday or to the desired golden state. A text diff of device configs is one way to do this, but it tends to be too noisy. It will show differences that you may not care about (e.g., changes in whitespace or timestamps), and it is hard to control what is reported. More importantly, text diffs also do not tell you about the impact of change on network behavior, such as if new traffic will be permitted or if some BGP edges will go down.
Batfish parses and builds a vendor-neutral model of device configs and behavior. This model enables you to learn how two snapshots of the network differ exactly along the aspects you care about. The behavior modeling of Batfish also lets you understand the full impact of these changes. This notebook illustrates this capability.
We focus on the following differences across three categories.
Configuration settings
Node-level properties
Interface-level properties
Properties of BGP peers
Structures and references
Structures defined in device configs
Undefined references
Network behavior
BGP adjacencies
ACL lines that treat flows differently
These are examples of different types of changes that you can analyze using Batfish. You may be interested in different aspects of your network, and you should be able to adapt the code below to suit your needs.
Text diff will help with the configuration settings category at best. The other two categories require understanding the structure of the config and the network behavior it induces. To illustrate this point, the text diff of example configs that we use in this notebook is below.
End of explanation
# Import packages, helpers, and load questions
%run startup.py
from drift_helper import diff_frames, diff_properties
bf = Session(host="localhost")
# Initialize both the snapshot and the reference that we want to use
NETWORK_NAME = "my_network"
SNAPSHOT_PATH = "networks/drift/snapshot"
REFERENCE_PATH = "networks/drift/reference"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name="snapshot", overwrite=True)
bf.init_snapshot(REFERENCE_PATH, name="reference", overwrite=True)
Explanation: As we can see, it is difficult to grasp the nature and impact of the change from this output, not to mention that it is impossible to build automation on top of it (e.g., to alert on certain types of differences). We show next how Batfish offers a meaningful view of these differences and their impact on network behavior.
End of explanation
# Properties of interest
NODE_PROPERTIES = ["NTP_Servers" , "Domain_Name", "VRFs"]
# Compute the difference across two snapshots and return a Pandas DataFrame
node_diff = bf.q.nodeProperties(
properties=",".join(NODE_PROPERTIES)
).answer(
snapshot="snapshot",
reference_snapshot="reference"
).frame()
# Print the DataFrame
show(node_diff.head())
Explanation: 1. Configuration settings
Let us first uncover differences in configuration settings, starting with node-level properties.
1A. Node-level properties
We focus on three example properties: 1) NTP servers, 2) Domain name, and 3) VRFs that exist on the device. The complete list of node properties extracted by Batfish is here.
We will compute the property differences across the two snapshots using Batfish questions. Batfish makes its models available via a set of questions. When questions are run in differential mode, they output how the answers differ across the two snapshots.
End of explanation
# Print readable messages on the differences
diff_properties(node_diff, "Node", ["Node"], NODE_PROPERTIES)
Explanation: The output above shows all property differences for all nodes. There is a row per node. We see that on as1border1 the domain name has changed, and on as1border2 the set of NTP servers has changed. There is no other difference for any other node for the chosen properties.
This structured output can be transformed and fed into any type of automation, e.g., to alert you when an important property has changed. We can also generate readable drift reports using the helper function we defined above.
End of explanation
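As an illustrative aside (not part of the original notebook), the structured diff can feed simple automation; this sketch assumes the differential answer keeps a "Node" column, as suggested by the diff_properties call above:
# Hypothetical alerting hook: flag any node that shows drift in the selected properties
if not node_diff.empty:
    print("ALERT: node-level drift detected on:", sorted(node_diff["Node"].unique()))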
# Properties of interest
INTERFACE_PROPERTIES = ['Active', 'Description', 'Primary_Address']
# Compute the difference across two snapshots and return a Pandas DataFrame
interface_diff = bf.q.interfaceProperties(
properties=",".join(INTERFACE_PROPERTIES)
).answer(
snapshot="snapshot",
reference_snapshot="reference"
).frame()
# Print a readable version of the differences
diff_properties(interface_diff, "Interface", ["Interface"], INTERFACE_PROPERTIES)
Explanation: 1B. Interface-level properties
We next check if any interface-level properties have changed. We again focus on three example settings: 1) whether the interface is active, 2) description, and 3) primary IP address. The complete list of interface settings extracted by Batfish is here.
End of explanation
# Properties of interest
BGP_PEER_PROPERTIES = ['Remote_AS', 'Description', 'Peer_Group', 'Import_Policy', 'Export_Policy']
# Compute the difference across two snapshots and return a Pandas DataFrame
bgp_peer_diff = bf.q.bgpPeerConfiguration(
properties=",".join(BGP_PEER_PROPERTIES)
).answer(
snapshot="snapshot",
reference_snapshot="reference"
).frame()
#Print readable messages on the differences
diff_properties(bgp_peer_diff, "BgpPeer", ["Node", "VRF", "Local_Interface", "Remote_IP"], BGP_PEER_PROPERTIES)
Explanation: We see that the interface GigabitEthernet0/0 on as2border2 has been shut down and its address assignment has been eliminated. We also see that a description has been added for two interfaces on as2core1.
1C. BGP peer properties
We next check properties of BGP peers, focusing on four example properties: 1) description, 2) peer group, 3) import policies applied to the peer, and 4) export policies applied to the peer. The complete list of BGP peer properties is here.
End of explanation
# Extract defined structures from both snapshots as a Pandas DataFrame
snapshot_structures = bf.q.definedStructures().answer(snapshot="snapshot").frame()
reference_structures = bf.q.definedStructures().answer(snapshot="reference").frame()
# Show me what the information looks like by printing the first few rows
show(snapshot_structures.head())
Explanation: The output shows that a new peer has been defined on as2dept1 with remote IP address 2.34.209.3, and the peer group has changed for an existing peer on as2dist1, which then also led to its import and export policies changing. This correlated change in import/export policies is invisible in the text diff.
2. Structures and references
Batfish models include all structures defined in device configs (e.g., ACLs, prefix-lists) and how they are referenced in other parts of the config. You can use these models to learn if structures have been defined or deleted, which represents a major change in the configuration.
2A. Structures defined in configs
The definedStructures question is the basis for learning about structures defined in the config.
End of explanation
# Remove the line numbers but keep the filename. We don't care about where in the file the structures are defined.
snapshot_structures_without_lines = snapshot_structures[['Structure_Type', 'Structure_Name']].assign(
File_Name=snapshot_structures["Source_Lines"].map(lambda x: x.filename))
reference_structures_without_lines = reference_structures[['Structure_Type', 'Structure_Name']].assign(
File_Name=reference_structures["Source_Lines"].map(lambda x: x.filename))
# Print a readable message on the differences
diff_frames(snapshot_structures_without_lines,
reference_structures_without_lines,
"DefinedStructure")
Explanation: The output snippet shows how Batfish captures the exact lines in each file where each structure is defined. We can process this information from the two snapshots to produce a report on all differences.
End of explanation
# Extract undefined references from both snapshots as a Pandas DataFrame
snapshot_undefined_references=bf.q.undefinedReferences().answer(snapshot="snapshot").frame()
reference_undefined_references= bf.q.undefinedReferences().answer(snapshot="reference").frame()
# Show me all undefined references in the snapshot
show(snapshot_undefined_references)
Explanation: We can easily see in this output that a BGP peer group named dept2 was newly defined on as2dist1 and a prefix-list named bogons was defined on as2border1. We also see that the peer group named dept was removed from as2dist1. The peer group change is related to what we saw earlier with a peer property changing. This view shows that the entire structure has been removed and defined.
2B. Undefined structure references
References to undefined structures are symptoms of configuration errors. Using the undefinedReferences question, Batfish can help you understand if new undefined references have been introduced or old ones have been cleared.
End of explanation
# Remove Lines since we don't care about where it was referenced
snapshot_undefined_references_without_lines = snapshot_undefined_references.drop(columns=['Lines'])
reference_undefined_references_without_lines = reference_undefined_references.drop(columns=['Lines'])
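# Illustrative sketch (not from the original notebook): diff_frames below is an external helper
# whose internals are not shown here; a plain-pandas equivalent of the set difference could be:
merged = snapshot_undefined_references_without_lines.merge(
    reference_undefined_references_without_lines, how="outer", indicator=True)
only_in_snapshot = merged[merged["_merge"] == "left_only"]    # newly introduced undefined references
only_in_reference = merged[merged["_merge"] == "right_only"]  # undefined references that were cleared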
# Print a readable message on the differences
diff_frames(snapshot_undefined_references_without_lines,
reference_undefined_references_without_lines,
"UndefinedReference")
Explanation: The output shows that there are three undefined references in the snapshot. Let us find out which ones were newly introduced relative to the reference.
End of explanation
# Get the edges from both snapshots as Pandas DataFrames
snapshot_bgp_edges = bf.q.bgpEdges().answer(snapshot="snapshot").frame()
reference_bgp_edges = bf.q.bgpEdges().answer(snapshot="reference").frame()
# Show me the schema by printing the first few rows
show(snapshot_bgp_edges.head())
Explanation: We thus see that, of the three undefined references that we saw earlier, two were newly introduced and one exists in both snapshots.
3. Network behavior
We now turn our attention to behavioral differences between network snapshots, starting with changes in BGP adjacencies.
3A. BGP adjacencies
The bgpEdges question of Batfish enables you to learn about all BGP adjacencies in the network, as follows.
End of explanation
# Retain only columns we care about for this analysis
snapshot_bgp_edges_nodes = snapshot_bgp_edges[['Node', 'Remote_Node']]
reference_bgp_edges_nodes = reference_bgp_edges[['Node', 'Remote_Node']]
# DataFrames contain one edge per direction; keep only one direction
snapshot_bgp_bidir_edges_nodes = snapshot_bgp_edges_nodes[
snapshot_bgp_edges_nodes['Node'] < snapshot_bgp_edges_nodes['Remote_Node']
]
reference_bgp_bidir_edges_nodes = reference_bgp_edges_nodes[
reference_bgp_edges_nodes['Node'] < reference_bgp_edges_nodes['Remote_Node']
]
# Print a readable message on the differences
diff_frames(snapshot_bgp_bidir_edges_nodes,
reference_bgp_bidir_edges_nodes,
"BgpEdge")
Explanation: We see that Batfish knows which BGP edges in the snapshot come up and shows key information about them. We can use the answer to this question to learn which edges exist only in the snapshot or only in the reference.
End of explanation
# Find the matching edge in the reference edges answer from before
missing_snapshot_edge = reference_bgp_edges[
(reference_bgp_edges['Node']=="as2border2")
& (reference_bgp_edges['Remote_Node']=="as3border1")
]
# Print the edge information
show(missing_snapshot_edge)
Explanation: One BGP edge exists only in the reference, that is, it disappeared in the snapshot. We can find more about this edge, like so:
End of explanation
# compute behavior differences between ACLs
compare_filters = bf.q.compareFilters().answer(
snapshot='snapshot',
reference_snapshot='reference'
).frame()
# print the result
show(compare_filters)
Explanation: Do you recall the interface on as2border2 that was shut down earlier? This BGP edge was removed because of that interface shutdown (which you can confirm using the IP address of the interface---10.23.21.2/24).
3B. ACL behavior
To compute the behavior differences between ACLs, we use the compare filters question. It returns pairs of lines, one from the filter definition in each snapshot, that match the same flow(s) but treat them differently (i.e. one permits and the other denies the flow).
End of explanation
!diff -ur networks/drift/reference/configs/as2dist1.cfg networks/drift/snapshot/configs/as2dist1.cfg | grep -A 7 '@@ -113,6 +113,7 @@'
Explanation: We see that the only difference in the ACL behaviors of the two snapshots is for ACL 105 on as2dist1. Line permit ip host 3.0.3.0 host 255.255.255.0 in the snapshot permits some flows that were being denied in the reference snapshot because of the implicit deny at the end of the ACL. Thus, we have permitted flows that were not being permitted before.
If you were paying attention to the text diff above, the result above may surprise you. The text diff (relevant snippet repeated below) showed that ACL 102 on as2dist1 changed as well.
End of explanation |
610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pymagicc Usage Examples
Step1: Scenarios
The four RCP scenarios are already preloaded in Pymagicc. They are loaded as MAGICCData objects with a metadata attribute; metadata contains metadata describing the scenario.
Step2: MAGICCData subclasses scmdata's ScmRun so we can access the scmdata helpers directly, e.g.
Step3: The RCPs contain the following emissions with the following units
Step4: The regions included are
Step5: A plot of four categories in RCP3PD
Step6: Fossil fuel emissions for the four RCP scenarios.
Step7: Running MAGICC
A single pymagicc run takes under a second and returns the same object as used above. If not on Windows, the very first run might be slower due to setting up Wine. Multiple runs can be faster as setup times are reduced and other options speed things up even further e.g. limiting output to the subset of interest, using binary output formats.
Step8: The default parameters are the ones that were used to produce the RCP GHG concentrations (see also http | Python Code:
# NBVAL_IGNORE_OUTPUT
from pprint import pprint
import pymagicc
from pymagicc import MAGICC6
from pymagicc.io import MAGICCData
from pymagicc.scenarios import rcp26, rcp45, rcps
%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use("ggplot")
plt.rcParams["figure.figsize"] = 16, 9
Explanation: Pymagicc Usage Examples
End of explanation
type(rcp26)
pprint(rcp26.metadata)
Explanation: Scenarios
The four RCP scenarios are already preloaded in Pymagicc. They are loaded as MAGICCData objects with a metadata attribute; metadata contains metadata describing the scenario.
End of explanation
rcp26.__class__.__bases__
rcp26.head()
Explanation: MAGICCData subclasses scmdata's ScmRun so we can access the scmdata helpers directly, e.g.
End of explanation
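As a small illustrative sketch (added here; it simply mirrors filter/timeseries calls that appear later in this notebook), the inherited scmdata helpers can slice a scenario directly:
co2_world = rcp26.filter(variable="Emissions|CO2|MAGICC Fossil and Industrial", region="World")
co2_world.timeseries().head()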
rcp26.meta[["variable", "unit"]].drop_duplicates()
Explanation: The RCPs contain the following emissions with the following units
End of explanation
rcp26["region"].unique()
Explanation: The regions included are
End of explanation
categories_to_plot = [
"Emissions|" + v
for v in [
"CO2|MAGICC Fossil and Industrial",
"CO2|MAGICC AFOLU",
"CH4",
"N2O",
]
]
for g in rcp26.filter(
variable=categories_to_plot, year=range(1000, 2150)
).groupby("variable"):
plt.figure(figsize=(12, 7))
g.lineplot(hue="region").set_title(g.get_unique_meta("variable", True))
Explanation: A plot of four categories in RCP3PD
End of explanation
rcps.filter(
variable="Emissions|CO2|MAGICC Fossil and Industrial", region="World"
).lineplot(x="time");
Explanation: Fossil fuel emissions for the four RCP scenarios.
End of explanation
# NBVAL_IGNORE_OUTPUT
%time results = pymagicc.run(rcp26)
def multiple_runs():
with MAGICC6() as magicc:
for name, sdf in rcps.timeseries().groupby(["scenario"]):
results = magicc.run(MAGICCData(sdf.copy()))
# NBVAL_IGNORE_OUTPUT
%time multiple_runs()
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(16, 9))
with MAGICC6() as magicc:
for name, sdf in rcps.timeseries().groupby(["scenario"]):
results = magicc.run(MAGICCData(sdf.copy()))
results.filter(
variable="Surface Temperature", region="World"
).lineplot(ax=ax, x="time");
Explanation: Running MAGICC
A single pymagicc run takes under a second and returns the same object as used above. If not on Windows, the very first run might be slower due to setting up Wine. Multiple runs can be faster as setup times are reduced and other options speed things up even further e.g. limiting output to the subset of interest, using binary output formats.
End of explanation
low = pymagicc.run(rcp45, core_climatesensitivity=1.5)
default = pymagicc.run(rcp45, core_climatesensitivity=3)
high = pymagicc.run(rcp45, core_climatesensitivity=4.5)
filtering = {
"variable": "Surface Temperature",
"region": "World",
"year": range(1850, 2101),
}
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(16, 9))
default.filter(**filtering).line_plot(x="time", ax=ax)
plt.fill_between(
low.filter(**filtering)["time"].values,
low.filter(**filtering).timeseries().values.squeeze(),
high.filter(**filtering).timeseries().values.squeeze(),
color="lightgray",
)
plt.title(
"RCP 4.5 with equilibrium climate sensitivity set to 1.5, 3, and 4.5"
)
plt.ylabel("°C");
Explanation: The default parameters are the ones that were used to produce the RCP GHG concentrations (see also http://live.magicc.org/). Of course it's easy to change them.
End of explanation |
611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolating Parameters
If parameters are changing significantly on dynamical timescales (e.g. mass transfer at pericenter on very eccentric orbits) you need a specialized numerical scheme to do that accurately.
However, there are many astrophysical settings where parameters change very slowly compared to all the dynamical timescales in the problem.
As long as this is the case and the changes are adiabatic, you can modify these parameters between calls to sim.integrate very flexibly and without loss of accuracy.
In order to provide a machine-independent way to interpolate parameters at arbitrary times, which can be shared between the C and Python versions of the code, we have implemented an interpolator object.
For example, say you want to interpolate stellar evolution data.
We show below how you can use the Interpolator structure to spline a discrete set of time-parameter values.
We begin by reading in mass and radius data of our Sun, starting roughly 4 million years before the tip of its red-giant branch (RGB), and separating them into time and value arrays.
You can populate these arrays however you want, but we load two text files (one for stellar mass, the other for stellar radius), where the first column gives the time (e.g., the Sun's age), and the second column gives the corresponding value (mass or radius). All values need to be in simulation units. If you're using AU, then your stellar radii should also be in AU.
For an example reading MESA (Modules for Experiments in Stellar Astrophysics) data output logs, see https
Step1: Next we set up the Sun-Earth system.
Step2: Now we can create an Interpolator object for each parameter set and pass the corresponding arrays as arguments.
Step3: Finally, we integrate for 4 Myr, updating the central body's mass and radius interpolated at the time between outputs. We then plot the resulting system
Step4: We see that, as the Sun loses mass along its RGB phase, the Earth has correspondingly and adiabatically expanded, as one might expect. Let's now plot the Sun's mass over time, and a comparison of the Sun's radius and Earth's semi-major axis over time, adjacent to one another. | Python Code:
import numpy as np
data = np.loadtxt('m.txt') # return (N, 2) array
mtimes = data[:, 0] # return only 1st col
masses = data[:, 1] # return only 2nd col
data = np.loadtxt('r.txt')
rtimes = data[:, 0]
Rsuns = data[:, 1] # data in Rsun units
# convert Rsun to AU
radii = np.zeros(Rsuns.size)
for i, r in enumerate(Rsuns):
radii[i] = r * 0.00465047
Explanation: Interpolating Parameters
If parameters are changing significantly on dynamical timescales (e.g. mass transfer at pericenter on very eccentric orbits) you need a specialized numerical scheme to do that accurately.
However, there are many astrophysical settings where parameters change very slowly compared to all the dynamical timescales in the problem.
As long as this is the case and the changes are adiabatic, you can modify these parameters between calls to sim.integrate very flexibly and without loss of accuracy.
In order to provide a machine-independent way to interpolate parameters at arbitrary times, which can be shared between the C and Python versions of the code, we have implemented an interpolator object.
For example, say you want to interpolate stellar evolution data.
We show below how you can use the Interpolator structure to spline a discrete set of time-parameter values.
We begin by reading in mass and radius data of our Sun, starting roughly 4 million years before the tip of its red-giant branch (RGB), and separating them into time and value arrays.
You can populate these arrays however you want, but we load two text files (one for stellar mass, the other for stellar radius), where the first column gives the time (e.g., the Sun's age), and the second column gives the corresponding value (mass or radius). All values need to be in simulation units. If you're using AU, then your stellar radii should also be in AU.
For an example reading MESA (Modules for Experiments in Stellar Astrophysics) data output logs, see https://github.com/sabaronett/REBOUNDxPaper.
End of explanation
import rebound
import reboundx
M0 = 0.8645388227818771 # initial mass of star
R0 = 0.3833838293200158 # initial radius of star
def makesim():
sim = rebound.Simulation()
sim.G = 4*np.pi**2 # use units of AU, yrs and solar masses
sim.add(m=M0, r=R0, hash='Star')
sim.add(a=1., hash='Earth')
sim.collision = 'direct' # check if RGB Sun engulfs Earth
sim.integrator = 'whfast'
sim.dt = 0.1*sim.particles[1].P
sim.move_to_com()
return sim
%matplotlib inline
sim = makesim()
ps = sim.particles
fig, ax = rebound.OrbitPlot(sim)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
ax.grid()
Explanation: Next we set up the Sun-Earth system.
End of explanation
rebx = reboundx.Extras(sim)
starmass = reboundx.Interpolator(rebx, mtimes, masses, 'spline')
starradius = reboundx.Interpolator(rebx, rtimes, radii, 'spline')
Explanation: Now we can create an Interpolator object for each parameter set and pass the corresponding arrays as arguments.
End of explanation
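# Illustrative sanity check (not part of the original notebook): interpolating at a tabulated time
# should closely reproduce the tabulated value.
print(starmass.interpolate(rebx, t=mtimes[0]), masses[0])
print(starradius.interpolate(rebx, t=rtimes[0]), radii[0])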
%%time
Nout = 1000
mass = np.zeros(Nout)
radius = np.zeros(Nout)
a = np.zeros(Nout)
ts = np.linspace(0., 4.e6, Nout)
T0 = 1.23895e10 # Sun's age at simulation start
for i, time in enumerate(ts):
sim.integrate(time)
ps[0].m = starmass.interpolate(rebx, t=T0+sim.t)
ps[0].r = starradius.interpolate(rebx, t=T0+sim.t)
sim.move_to_com() # lost mass had momentum, so need to move back to COM frame
mass[i] = sim.particles[0].m
radius[i] = sim.particles[0].r
a[i] = sim.particles[1].a
fig, ax = rebound.OrbitPlot(sim)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
ax.grid()
Explanation: Finally, we integrate for 4 Myr, updating the central body's mass and radius interpolated at the time between outputs. We then plot the resulting system:
End of explanation
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(15,10))
fig.subplots_adjust(hspace=0)
ax1.set_ylabel("Star's Mass ($M_{\odot}$)", fontsize=24)
ax1.plot(ts,mass, color='tab:orange')
ax1.grid()
ax2.set_xlabel('Time (yr)', fontsize=24)
ax2.ticklabel_format(axis='x', style='sci', scilimits=(0,0))
ax2.set_ylabel('Distances (AU)', fontsize=24)
ax2.plot(ts,a, label='$a_{\oplus}$')
ax2.plot(ts,radius, label='$R_{\odot}$')
ax2.legend(fontsize=24, loc='best')
ax2.grid()
Explanation: We see that, as the Sun loses mass along its RGB phase, the Earth has correspondingly and adiabatically expanded, as one might expect. Let's now plot the Sun's mass over time, and a comparison of the Sun's radius and Earth's semi-major axis over time, adjacent to one another.
End of explanation |
612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Create the grid
We are going to build a uniform rectilinear grid with a node spacing of 10 km in the y-direction and 20 km in the x-direction on which we will solve the flexure equation.
First we need to import RasterModelGrid.
Step2: Create a rectilinear grid with a spacing of 10 km between rows and 20 km between columns. The numbers of rows and columns are provided as a tuple of (n_rows, n_cols), in the same manner as similar numpy functions. The spacing is also a tuple, (dy, dx).
Step3: Create the component
Now we create the flexure component and tell it to use our newly-created grid. First, though, we'll examine the Flexure component a bit.
Step4: The Flexure component, as with most landlab components, will require our grid to have some data that it will use. We can get the names of these data fields with the input_var_names attribute of the component class.
Step5: We see that flexure uses just one data field
Step6: To print a more detailed description of a field, use var_help.
Step7: What about the data that Flexure provides? Use the output_var_names attribute.
Step8: Now that we understand the component a little more, create it using our grid.
Step9: Add some loading
We will add loads to the grid. As we saw above, for this component, the name of the variable that holds the applied loads is lithosphere__overlying_pressure_increment. We add loads of random magnitude at every node of the grid.
Step10: Update the component to solve for deflection
If you have more than one processor on your machine you may want to use several of them.
Step11: As we saw above, the flexure component creates an output field (lithosphere_surface__elevation_increment) that contains surface deflections for the applied loads.
Plot the output
We now plot these deflections with the imshow method, which is available to all landlab components.
Step12: Maintain the same loading distribution but double the effective elastic thickness.
Step13: Now let's add a vertical rectangular load to the middle of the grid. We plot the load grid first to make sure we did this correctly. | Python Code:
%matplotlib inline
import numpy as np
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Using the Landlab flexure component
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
In this example we will:
* create a Landlab component that solves the two-dimensional elastic flexure equation
* apply randomly distributed point loads
* run the component
* plot some output
A bit of magic so that we can plot within this notebook.
End of explanation
from landlab import RasterModelGrid
Explanation: Create the grid
We are going to build a uniform rectilinear grid with a node spacing of 10 km in the y-direction and 20 km in the x-direction on which we will solve the flexure equation.
First we need to import RasterModelGrid.
End of explanation
grid = RasterModelGrid((200, 400), xy_spacing=(10e3, 20e3))
grid.dy, grid.dx
Explanation: Create a rectilinear grid with a spacing of 10 km between rows and 20 km between columns. The numbers of rows and columns are provided as a tuple of (n_rows, n_cols), in the same manner as similar numpy functions. The spacing is also a tuple, (dy, dx).
End of explanation
from landlab.components.flexure import Flexure
Explanation: Create the component
Now we create the flexure component and tell it to use our newly-created grid. First, though, we'll examine the Flexure component a bit.
End of explanation
Flexure.input_var_names
Explanation: The Flexure component, as with most landlab components, will require our grid to have some data that it will use. We can get the names of these data fields with the input_var_names attribute of the component class.
End of explanation
Flexure.var_units("lithosphere__overlying_pressure_increment")
Explanation: We see that flexure uses just one data field: the change in lithospheric loading. landlab component classes can provide additional information about each of these fields. For instance, to get the units for a field, use the var_units method.
End of explanation
Flexure.var_help("lithosphere__overlying_pressure_increment")
Explanation: To print a more detailed description of a field, use var_help.
End of explanation
Flexure.output_var_names
Flexure.var_help("lithosphere_surface__elevation_increment")
Explanation: What about the data that Flexure provides? Use the output_var_names attribute.
End of explanation
grid.add_zeros("lithosphere__overlying_pressure_increment", at="node")
flex = Flexure(grid, method="flexure", n_procs=4)
Explanation: Now that we understand the component a little more, create it using our grid.
End of explanation
load = np.random.normal(0, 100 * 2650.0 * 9.81, grid.number_of_nodes)
grid.at_node["lithosphere__overlying_pressure_increment"] = load
grid.imshow(
"lithosphere__overlying_pressure_increment",
symmetric_cbar=True,
cmap="nipy_spectral",
)
Explanation: Add some loading
We will add loads to the grid. As we saw above, for this component, the name of the variable that holds the applied loads is lithosphere__overlying_pressure_increment. We add loads of random magnitude at every node of the grid.
End of explanation
flex.update()
Explanation: Update the component to solve for deflection
If you have more than one processor on your machine you may want to use several of them.
End of explanation
grid.imshow(
"lithosphere_surface__elevation_increment",
symmetric_cbar=True,
cmap="nipy_spectral",
)
Explanation: As we saw above, the flexure component creates an output field (lithosphere_surface__elevation_increment) that contains surface deflections for the applied loads.
Plot the output
We now plot these deflections with the imshow method, which is available to all landlab components.
End of explanation
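# Illustrative aside (not in the original notebook): the deflection field can also be inspected
# numerically straight from the grid fields.
dz = grid.at_node["lithosphere_surface__elevation_increment"]
print(dz.min(), dz.max())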
flex.eet *= 2.0
flex.update()
grid.imshow(
"lithosphere_surface__elevation_increment",
symmetric_cbar=True,
cmap="nipy_spectral",
)
Explanation: Maintain the same loading distribution but double the effective elastic thickness.
End of explanation
load[np.where(np.logical_and(grid.node_x > 3000000, grid.node_x < 5000000))] = (
load[np.where(np.logical_and(grid.node_x > 3000000, grid.node_x < 5000000))] + 1e7
)
grid.imshow(
"lithosphere__overlying_pressure_increment",
symmetric_cbar=True,
cmap="nipy_spectral",
)
flex.update()
grid.imshow(
"lithosphere_surface__elevation_increment",
symmetric_cbar=True,
cmap="nipy_spectral",
)
Explanation: Now let's add a vertical rectangular load to the middle of the grid. We plot the load grid first to make sure we did this correctly.
End of explanation |
613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Time series analysis
Load the data from "Price of Weed".
Step3: The following function takes a DataFrame of transactions and computes daily averages.
Step5: The following function returns a map from quality name to a DataFrame of daily averages.
Step6: dailies is the map from quality name to DataFrame.
Step7: The following plots the daily average price for each quality.
Step8: We can use statsmodels to run a linear model of price as a function of time.
Step9: Here's what the results look like.
Step11: Now let's plot the fitted model with the data.
Step13: The following function plots the original data and the fitted curve.
Step14: Here are results for the high quality category
Step15: Moving averages
As a simple example, I'll show the rolling average of the numbers from 1 to 10.
Step16: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
Step18: The following function plots the rolling mean.
Step19: Here's what it looks like for the high quality category.
Step21: The exponentially-weighted moving average gives more weight to more recent points.
Step24: We can use resampling to generate missing values with the right amount of noise.
Step25: Here's what the EWMA model looks like with missing values filled.
Step26: Serial correlation
The following function computes serial correlation with the given lag.
Step27: Before computing correlations, we'll fill missing values.
Step28: Here are the serial correlations for raw price data.
Step29: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interesting to see whether there are still correlations after we subtract away the trends.
Step30: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
Step31: The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
Step33: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
Step35: The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
Step37: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Fridays and Saturdays.
Step38: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
Step39: Here's what it would look like if there were a weekly cycle.
Step41: Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
Step42: Here's what the prediction looks like for the high quality category, using the linear model.
Step44: When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
Step46: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
Step48: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
Step49: Here are the results for the high quality category.
Step51: But there is one more source of uncertainty
Step53: And this function plots the results.
Step54: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
Step56: Exercises
Exercise
Step60: Exercise
Step61: Worked example | Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
transactions = pd.read_csv('mj-clean.csv', parse_dates=[5])
transactions.head()
Explanation: Time series analysis
Load the data from "Price of Weed".
End of explanation
def GroupByDay(transactions, func=np.mean):
Groups transactions by day and compute the daily mean ppg.
transactions: DataFrame of transactions
returns: DataFrame of daily prices
grouped = transactions[['date', 'ppg']].groupby('date')
daily = grouped.aggregate(func)
daily['date'] = daily.index
start = daily.date[0]
one_year = np.timedelta64(1, 'Y')
daily['years'] = (daily.date - start) / one_year
return daily
Explanation: The following function takes a DataFrame of transactions and computes daily averages.
End of explanation
def GroupByQualityAndDay(transactions):
Divides transactions by quality and computes mean daily price.
transaction: DataFrame of transactions
returns: map from quality to time series of ppg
groups = transactions.groupby('quality')
dailies = {}
for name, group in groups:
dailies[name] = GroupByDay(group)
return dailies
Explanation: The following function returns a map from quality name to a DataFrame of daily averages.
End of explanation
dailies = GroupByQualityAndDay(transactions)
Explanation: dailies is the map from quality name to DataFrame.
End of explanation
import matplotlib.pyplot as plt
thinkplot.PrePlot(rows=3)
for i, (name, daily) in enumerate(dailies.items()):
thinkplot.SubPlot(i+1)
title = 'Price per gram ($)' if i == 0 else ''
thinkplot.Config(ylim=[0, 20], title=title)
thinkplot.Scatter(daily.ppg, s=10, label=name)
if i == 2:
plt.xticks(rotation=30)
thinkplot.Config()
else:
thinkplot.Config(xticks=[])
Explanation: The following plots the daily average price for each quality.
End of explanation
import statsmodels.formula.api as smf
def RunLinearModel(daily):
model = smf.ols('ppg ~ years', data=daily)
results = model.fit()
return model, results
Explanation: We can use statsmodels to run a linear model of price as a function of time.
End of explanation
from IPython.display import display
for name, daily in dailies.items():
model, results = RunLinearModel(daily)
print(name)
display(results.summary())
Explanation: Here's what the results look like.
End of explanation
def PlotFittedValues(model, results, label=''):
Plots original data and fitted values.
model: StatsModel model object
results: StatsModel results object
years = model.exog[:,1]
values = model.endog
thinkplot.Scatter(years, values, s=15, label=label)
thinkplot.Plot(years, results.fittedvalues, label='model', color='#ff7f00')
Explanation: Now let's plot the fitted model with the data.
End of explanation
def PlotLinearModel(daily, name):
Plots a linear fit to a sequence of prices, and the residuals.
daily: DataFrame of daily prices
name: string
model, results = RunLinearModel(daily)
PlotFittedValues(model, results, label=name)
thinkplot.Config(title='Fitted values',
xlabel='Years',
xlim=[-0.1, 3.8],
ylabel='Price per gram ($)')
Explanation: The following function plots the original data and the fitted curve.
End of explanation
name = 'high'
daily = dailies[name]
PlotLinearModel(daily, name)
Explanation: Here are results for the high quality category:
End of explanation
series = np.arange(10)
Explanation: Moving averages
As a simple example, I'll show the rolling average of the numbers from 1 to 10.
End of explanation
pd.rolling_mean(series, 3)
Explanation: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
End of explanation
def PlotRollingMean(daily, name):
Plots rolling mean.
daily: DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = pd.rolling_mean(reindexed.ppg, 30)
thinkplot.Plot(roll_mean, label='rolling mean', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
Explanation: The following function plots the rolling mean.
End of explanation
PlotRollingMean(daily, name)
Explanation: Here's what it looks like for the high quality category.
End of explanation
def PlotEWMA(daily, name):
Plots the exponentially-weighted moving average.
daily: DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = pd.ewma(reindexed.ppg, 30)
thinkplot.Plot(roll_mean, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
PlotEWMA(daily, name)
Explanation: The exponentially-weighted moving average gives more weight to more recent points.
End of explanation
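# Illustrative comparison with the rolling mean above, using the same (older) pandas API as the
# rest of this notebook:
pd.ewma(pd.Series(series), span=3)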
def FillMissing(daily, span=30):
Fills missing values with an exponentially weighted moving average.
Resulting DataFrame has new columns 'ewma' and 'resid'.
daily: DataFrame of daily prices
span: window size (sort of) passed to ewma
returns: new DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
ewma = pd.ewma(reindexed.ppg, span=span)
resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))
reindexed.ppg.fillna(fake_data, inplace=True)
reindexed['ewma'] = ewma
reindexed['resid'] = reindexed.ppg - ewma
return reindexed
def PlotFilled(daily, name):
Plots the EWMA and filled data.
daily: DataFrame of daily prices
filled = FillMissing(daily, span=30)
thinkplot.Scatter(filled.ppg, s=15, alpha=0.2, label=name)
thinkplot.Plot(filled.ewma, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Price per gram ($)')
Explanation: We can use resampling to generate missing values with the right amount of noise.
End of explanation
PlotFilled(daily, name)
Explanation: Here's what the EWMA model looks like with missing values filled.
End of explanation
def SerialCorr(series, lag=1):
xs = series[lag:]
ys = series.shift(lag)[lag:]
corr = thinkstats2.Corr(xs, ys)
return corr
Explanation: Serial correlation
The following function computes serial correlation with the given lag.
End of explanation
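# Quick illustrative check (not in the original notebook): a strictly increasing sequence is
# perfectly serially correlated at lag 1.
SerialCorr(pd.Series(np.arange(10)), lag=1)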
filled_dailies = {}
for name, daily in dailies.items():
filled_dailies[name] = FillMissing(daily, span=30)
Explanation: Before computing correlations, we'll fill missing values.
End of explanation
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.ppg, lag=1)
print(name, corr)
Explanation: Here are the serial correlations for raw price data.
End of explanation
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.resid, lag=1)
print(name, corr)
Explanation: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interesting to see whether there are still correlations after we subtract away the trends.
End of explanation
rows = []
for lag in [1, 7, 30, 365]:
print(lag, end='\t')
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.resid, lag)
print('%.2g' % corr, end='\t')
print()
Explanation: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
End of explanation
import statsmodels.tsa.stattools as smtsa
filled = filled_dailies['high']
acf = smtsa.acf(filled.resid, nlags=365, unbiased=True)
print('%0.2g, %.2g, %0.2g, %0.2g, %0.2g' %
(acf[0], acf[1], acf[7], acf[30], acf[365]))
Explanation: The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
End of explanation
def SimulateAutocorrelation(daily, iters=1001, nlags=40):
Resample residuals, compute autocorrelation, and plot percentiles.
daily: DataFrame
iters: number of simulations to run
nlags: maximum lags to compute autocorrelation
# run simulations
t = []
for _ in range(iters):
filled = FillMissing(daily, span=30)
resid = thinkstats2.Resample(filled.resid)
acf = smtsa.acf(resid, nlags=nlags, unbiased=True)[1:]
t.append(np.abs(acf))
high = thinkstats2.PercentileRows(t, [97.5])[0]
low = -high
lags = range(1, nlags+1)
thinkplot.FillBetween(lags, low, high, alpha=0.2, color='gray')
Explanation: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
End of explanation
def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):
Plots autocorrelation functions.
dailies: map from category name to DataFrame of daily prices
nlags: number of lags to compute
add_weekly: boolean, whether to add a simulated weekly pattern
thinkplot.PrePlot(3)
daily = dailies['high']
SimulateAutocorrelation(daily)
for name, daily in dailies.items():
if add_weekly:
daily = AddWeeklySeasonality(daily)
filled = FillMissing(daily, span=30)
acf = smtsa.acf(filled.resid, nlags=nlags, unbiased=True)
lags = np.arange(len(acf))
thinkplot.Plot(lags[1:], acf[1:], label=name)
Explanation: The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
End of explanation
def AddWeeklySeasonality(daily):
Adds a weekly pattern.
daily: DataFrame of daily prices
returns: new DataFrame of daily prices
fri_or_sat = (daily.index.dayofweek==4) | (daily.index.dayofweek==5)
fake = daily.copy()
fake.ppg.loc[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum())
return fake
Explanation: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Fridays and Saturdays.
End of explanation
axis = [0, 41, -0.2, 0.2]
PlotAutoCorrelation(dailies, add_weekly=False)
thinkplot.Config(axis=axis,
loc='lower right',
ylabel='correlation',
xlabel='lag (day)')
Explanation: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
End of explanation
PlotAutoCorrelation(dailies, add_weekly=True)
thinkplot.Config(axis=axis,
loc='lower right',
xlabel='lag (days)')
Explanation: Here's what it would look like if there were a weekly cycle.
End of explanation
def GenerateSimplePrediction(results, years):
Generates a simple prediction.
results: results object
years: sequence of times (in years) to make predictions for
returns: sequence of predicted values
n = len(years)
inter = np.ones(n)
d = dict(Intercept=inter, years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict = results.predict(predict_df)
return predict
def PlotSimplePrediction(results, years):
predict = GenerateSimplePrediction(results, years)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.2, label=name)
thinkplot.plot(years, predict, color='#ff7f00')
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)',
loc='upper right')
Explanation: Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
End of explanation
name = 'high'
daily = dailies[name]
_, results = RunLinearModel(daily)
years = np.linspace(0, 5, 101)
PlotSimplePrediction(results, years)
Explanation: Here's what the prediction looks like for the high quality category, using the linear model.
End of explanation
def SimulateResults(daily, iters=101, func=RunLinearModel):
Run simulations based on resampling residuals.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
_, results = func(daily)
fake = daily.copy()
result_seq = []
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
Explanation: When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
End of explanation
def GeneratePredictions(result_seq, years, add_resid=False):
Generates an array of predicted values from a list of model results.
When add_resid is False, predictions represent sampling error only.
When add_resid is True, they also include residual error (which is
more relevant to prediction).
result_seq: list of model results
years: sequence of times (in years) to make predictions for
add_resid: boolean, whether to add in resampled residuals
returns: sequence of predictions
n = len(years)
d = dict(Intercept=np.ones(n), years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict_seq = []
for fake_results in result_seq:
predict = fake_results.predict(predict_df)
if add_resid:
predict += thinkstats2.Resample(fake_results.resid, n)
predict_seq.append(predict)
return predict_seq
Explanation: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
End of explanation
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateResults(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.3, color='gray')
predict_seq = GeneratePredictions(result_seq, years, add_resid=False)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.5, color='gray')
Explanation: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
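thinkstats2.PercentileRows reduces the stack of simulated prediction sequences to the rows that bound the requested percentile range. A rough NumPy equivalent (a sketch, not the library's implementation) is:
```python
# For each future time step (column), take the given percentiles across simulations (rows).
def percentile_rows(predict_seq, percents):
    array = np.asarray(predict_seq)
    return [np.percentile(array, p, axis=0) for p in percents]
```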
End of explanation
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
Explanation: Here are the results for the high quality category.
End of explanation
def SimulateIntervals(daily, iters=101, func=RunLinearModel):
Run simulations based on different subsets of the data.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
result_seq = []
starts = np.linspace(0, len(daily), iters).astype(int)
for start in starts[:-2]:
subset = daily[start:]
_, results = func(subset)
fake = subset.copy()
for _ in range(iters):
fake.ppg = (results.fittedvalues +
thinkstats2.Resample(results.resid))
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
Explanation: But there is one more source of uncertainty: how much past data should we use to build the model?
The following function generates a sequence of models based on different amounts of past data.
End of explanation
def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions based on different intervals.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateIntervals(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.2, color='gray')
Explanation: And this function plots the results.
End of explanation
name = 'high'
daily = dailies[name]
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotIntervals(daily, years)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
Explanation: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
End of explanation
# Solution
def RunQuadraticModel(daily):
Runs a quadratic model of prices versus years.
daily: DataFrame of daily prices
returns: model, results
daily['years2'] = daily.years**2
model = smf.ols('ppg ~ years + years2', data=daily)
results = model.fit()
return model, results
# Solution
name = 'high'
daily = dailies[name]
model, results = RunQuadraticModel(daily)
results.summary()
# Solution
PlotFittedValues(model, results, label=name)
thinkplot.Config(title='Fitted values',
xlabel='Years',
xlim=[-0.1, 3.8],
ylabel='price per gram ($)')
# Solution
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years, func=RunQuadraticModel)
thinkplot.Config(title='predictions',
xlabel='Years',
xlim=[years[0]-0.1, years[-1]+0.1],
ylabel='Price per gram ($)')
Explanation: Exercises
Exercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3.
Use a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs that quadratic model, but after that you should be able to reuse code from the chapter to generate predictions.
End of explanation
# Solution
class SerialCorrelationTest(thinkstats2.HypothesisTest):
Tests serial correlations by permutation.
def TestStatistic(self, data):
Computes the test statistic.
data: tuple of xs and ys
series, lag = data
test_stat = abs(SerialCorr(series, lag))
return test_stat
def RunModel(self):
Run the model of the null hypothesis.
returns: simulated data
series, lag = self.data
permutation = series.reindex(np.random.permutation(series.index))
return permutation, lag
# Solution
# test the correlation between consecutive prices
name = 'high'
daily = dailies[name]
series = daily.ppg
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
# Solution
# test for serial correlation in residuals of the linear model
_, results = RunLinearModel(daily)
series = results.resid
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
# Solution
# test for serial correlation in residuals of the quadratic model
_, results = RunQuadraticModel(daily)
series = results.resid
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
Explanation: Exercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.
Use this class to test whether the serial correlation in raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise), the quadratic model.
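SerialCorr is defined earlier in the chapter; for reference, a minimal sketch consistent with how it is used here (the correlation between the series and a lagged copy of itself):
```python
# Sketch of the lag-k serial correlation used by the test above.
def serial_corr(series, lag=1):
    xs = series[lag:]
    ys = series.shift(lag)[lag:]
    return thinkstats2.Corr(xs, ys)
```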
End of explanation
name = 'high'
daily = dailies[name]
filled = FillMissing(daily)
diffs = filled.ppg.diff()
thinkplot.plot(diffs)
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Daily change in price per gram ($)')
filled['slope'] = diffs.ewm(span=365).mean()  # pd.ewma was removed from pandas; .ewm() is the current API
thinkplot.plot(filled.slope[-365:])
plt.xticks(rotation=30)
thinkplot.Config(ylabel='EWMA of diff ($)')
# extract the last inter and the mean of the last 30 slopes
start = filled.index[-1]
inter = filled.ewma.iloc[-1]
slope = filled.slope.iloc[-30:].mean()
start, inter, slope
# reindex the DataFrame, adding a year to the end
dates = pd.date_range(filled.index.min(),
filled.index.max() + np.timedelta64(365, 'D'))
predicted = filled.reindex(dates)
# generate predicted values and add them to the end
predicted['date'] = predicted.index
one_day = np.timedelta64(1, 'D')
predicted['days'] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
# plot the actual values and predictions
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma, color='#ff7f00')
Explanation: Worked example: There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:
Compute the EWMA of the time series and use the last point as an intercept, inter.
Compute the EWMA of differences between successive elements in the time series and use the last point as a slope, slope.
To predict values at future times, compute inter + slope * dt, where dt is the difference between the time of the prediction and the time of the last observation.
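A compact sketch of these three steps with the current pandas API (the function below and its defaults are illustrative, not the book's code; note that the cell above uses the mean of the last 30 slope values rather than just the last one):
```python
# Sketch of EWMA extrapolation: intercept from the last EWMA value, slope from the EWMA of daily diffs.
def ewma_forecast(series, span=365, horizon_days=365):
    inter = series.ewm(span=span).mean().iloc[-1]          # step 1: last EWMA value
    slope = series.diff().ewm(span=span).mean().iloc[-1]   # step 2: last EWMA of day-to-day changes
    dt = np.arange(1, horizon_days + 1)                    # step 3: days past the last observation
    return inter + slope * dt
```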
End of explanation |
614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Subselects / subqueries
To run a query, information is sometimes needed that first has to be fetched by a separate query.
Subqueries can stand in
as a placeholder for a single value
as a placeholder for a list
as a placeholder for a table
as a placeholder for a column
Step1:
Step2: Placeholder for a value
List all employees of the department "Schadensabwicklung" (claims settlement).
Step3: Solution
Step4: Placeholder for column functions
The results of aggregate functions are frequently needed in the WHERE clause
Example:
Fetch the claims with a below-average damage amount.
Solution
Part 1: Compute the average damage amount of all claims.
Part 2: Use the result as the comparison value in the actual query.
Step5: Task
Find all claims whose damage amount deviates from the average damage amount of the same year
by at most 300 €.
Solution
Part 1: Determine the average of all claims within a year.
Part 2: Fetch all claims whose damage amount lies within the range "average plus/minus 300" for that year.
Step6: Remark
This is a textbook example of how subqueries should not be used. For every
single row a new subquery has to be started in the WHERE condition, with its own WHERE clause and its own average calculation. One of the JOIN variants would be much better.
Further possible solutions (Lutz, 13/14):
```mysql
select beschreibung, schadenshoehe
from schadensfall where
schadenshoehe <= (
select avg(schadenshoehe)
from schadensfall) + 300
and schadenshoehe >= (select avg(schadenshoehe)
from schadensfall) - 300
select beschreibung, schadenshoehe
from schadensfall where
schadenshoehe between (
select avg(schadenshoehe)
from schadensfall) - 300
and (select avg(schadenshoehe)
from schadensfall) + 300
select @average
```
Result as a list of several values
The result of a query can be used as a filter for the actual query.
Task: Determine all vehicles of a given manufacturer.
Solution
Part 1: Fetch the ID of the desired manufacturer.
Part 2: Fetch all IDs of the table Fahrzeugtyp for this manufacturer ID.
Part 3: Fetch all vehicles that match this list of vehicle-type IDs.
Step7: Task
Return all information on the claims of 2008 whose damage amount deviates from the 2008 average damage amount by at most 300 €.
Solution
Part 1: Determine the average of all claims within 2008.
Part 2: Fetch all IDs of claims whose damage amount lies within the range "average plus/minus 300".
Part 3: Fetch all other information for these IDs.
Step8: Placeholder for a table
The result of a query can be used in the main query wherever
a table is expected. The structure of this situation looks like this
Step9:
A GROUP BY collects all years together with the average damage amounts (part 1 of the solution).
For part 2 of the solution, each claim only has to be compared, by year and damage amount, with the matching row in the result table temp.
That is the essential difference and the decisive advantage over other solutions
Step10: Exercises
Which of the following statements are true, which are false?
The result of a subquery can be used if it is a single value or a list in the form of a table. Other results are not possible.
A single value as a result can be obtained by a direct query or by a column function.
Subqueries should not be used when the WHERE condition takes a different value for every row of the main query, so the subquery has to be executed again each time.
Several subqueries can be nested.
For performance it makes no difference whether several subqueries or JOINs are used.
A subquery returning a table cannot make meaningful use of GROUP BY.
A subquery returning a table cannot make meaningful use of ORDER BY.
For a subquery returning a table, an alias name for the table is useful but not required.
For a subquery returning a table, alias names for the columns are useful but not required.
Which contracts (with some details) did the employee "Braun, Christian" conclude? Ignore the possibility that there might be several employees of that name.
Show all contracts that belong to the customer 'Heckel Obsthandel GmbH'. Ignore the possibility that this customer might be stored more than once.
Change the solution of exercise 3 so that several customers with this name are also conceivable as a result.
Show all vehicles that were involved in a claim in 2008.
Show all vehicle types (with ID, designation, and manufacturer name) that were involved in a claim in 2008.
Determine all vehicles of a given manufacturer, including the type.
For every employee of the "Vertrieb" (sales) department, show the first contract (with some details) that he or she concluded. The employee should be shown with ID and name/first name.
Deutsche Post AG supplies a table PLZ_Aenderung with the following contents | Python Code:
%load_ext sql
%sql mysql://steinam:steinam@localhost/versicherung_complete
Explanation: Subselects / subqueries
To run a query, information is sometimes needed that first has to be fetched by a separate query.
Subqueries can stand in
as a placeholder for a single value
as a placeholder for a list
as a placeholder for a table
as a placeholder for a column
End of explanation
%load_ext sql
Explanation:
End of explanation
%%sql
select Personalnummer, Name, Vorname
from Mitarbeiter
where Abteilung_ID =
( select ID from Abteilung
where Kuerzel = 'Schadensabwicklung' );
Explanation: Placeholder for a value
List all employees of the department "Schadensabwicklung" (claims settlement).
End of explanation
%%sql
select Personalnummer, Name, Vorname
from Mitarbeiter
where Abteilung_ID =
( select ID from Abteilung
where Kuerzel = 'ScAb' );
Explanation: Solution
End of explanation
%%sql
SELECT ID, Datum, Ort, Schadenshoehe
from Schadensfall
where Schadenshoehe < (
select AVG(Schadenshoehe) from Schadensfall
);
Explanation: Placeholder for column functions
The results of aggregate functions are frequently needed in the WHERE clause.
Example:
Fetch the claims with a below-average damage amount.
Solution
Part 1: Compute the average damage amount of all claims.
Part 2: Use the result as the comparison value in the actual query.
End of explanation
%%sql
select sf.ID, sf.Datum, sf.Schadenshoehe, EXTRACT(YEAR from
sf.Datum) AS Jahr
from Schadensfall sf
where ABS(Schadenshoehe - (
select AVG(sf2.Schadenshoehe)
from Schadensfall sf2
where YEAR(sf2.Datum) = YEAR(sf.Datum)
)
) <= 300;
Explanation: Task
Find all claims whose damage amount deviates from the average damage amount of
the same year by at most 300 €.
Solution
Part 1: Determine the average of all claims within a year.
Part 2: Fetch all claims whose damage amount lies within the range "average plus/minus 300" for the year in question.
End of explanation
%%sql
select ID, Kennzeichen, Fahrzeugtyp_ID as TypID
from Fahrzeug
where Fahrzeugtyp_ID in(
select ID
from Fahrzeugtyp
where Hersteller_ID = (
select ID
from Fahrzeughersteller
where Name = 'Volkswagen' ) );
Explanation: Remark
This is a textbook example of how subqueries should not be used. For every
single row a new subquery has to be started in the WHERE condition, with its own WHERE clause and its own average calculation. One of the JOIN variants would be much better.
Further possible solutions (Lutz, 13/14):
```mysql
select beschreibung, schadenshoehe
from schadensfall where
schadenshoehe <= (
select avg(schadenshoehe)
from schadensfall) + 300
and schadenshoehe >= (select avg(schadenshoehe)
from schadensfall) - 300
select beschreibung, schadenshoehe
from schadensfall where
schadenshoehe between (
select avg(schadenshoehe)
from schadensfall) - 300
and (select avg(schadenshoehe)
from schadensfall) + 300
select @average:=avg(schadenshoehe) from schadensfall;
select id from schadensfall where abs(schadenshoehe -
@average) <= 300;
```
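One possible JOIN formulation (a sketch only, not from the original course notes; it anticipates the "placeholder for a table" technique shown further below and uses table and column names as they appear elsewhere in this notebook):
```mysql
-- compute each year's average once in a derived table instead of per row
select sf.ID, sf.Datum, sf.Schadenshoehe
from Schadensfall sf
join ( select year(Datum) as Jahr, avg(Schadenshoehe) as Durchschnitt
       from Schadensfall
       group by year(Datum) ) d
  on year(sf.Datum) = d.Jahr
where abs(sf.Schadenshoehe - d.Durchschnitt) <= 300;
```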
Result as a list of several values
The result of a query can be used as a filter for the actual query.
Task
Determine all vehicles of a given manufacturer.
Solution
Part 1: Fetch the ID of the desired manufacturer.
Part 2: Fetch all IDs of the table Fahrzeugtyp for this manufacturer ID.
Part 3: Fetch all vehicles that match this list of vehicle-type IDs.
End of explanation
%%sql
select *
from Schadensfall
where ID in ( SELECT ID
from Schadensfall
where ( ABS(Schadenshoehe - (
select AVG(sf2.Schadenshoehe)
from Schadensfall sf2
where YEAR(sf2.Datum) = 2008
)
) <= 300 )
and ( YEAR(Datum) = 2008 )
);
Explanation: Task
Return all information on the claims of 2008 whose damage amount deviates from the 2008 average damage amount by at most 300 €.
Solution
Part 1: Determine the average of all claims within 2008.
Part 2: Fetch all IDs of claims whose damage amount lies within the range "average plus/minus 300".
Part 3: Fetch all other information for these IDs.
End of explanation
%%sql
SELECT sf.ID, sf.Datum, sf.Schadenshoehe, temp.Jahr,
temp.Durchschnitt
FROM Schadensfall sf,
( SELECT AVG(sf2.Schadenshoehe) AS Durchschnitt,
EXTRACT(YEAR FROM sf2.Datum) as Jahr
FROM Schadensfall sf2
group by EXTRACT(YEAR FROM sf2.Datum)
) temp
WHERE temp.Jahr = EXTRACT(YEAR FROM sf.Datum)
and ABS(Schadenshoehe - temp.Durchschnitt) <= 300;
Explanation: Placeholder for a table
The result of a query can be used in the main query wherever
a table is expected. The structure of this situation looks like this:
```mysql
SELECT <column list>
FROM <main table>,
(SELECT <column list>
FROM <additional tables>
<further parts of the subquery>
) <name>
<further parts of the main query>
```
The subquery may in principle contain all parts of a SELECT.
ORDER BY cannot be used meaningfully, because the result of the subquery is joined with the main table or
another table, so any sort order would be lost anyway.
A name must be given as a table alias; it is used as the result table in the main query.
Task
Find all claims whose damage amount deviates from the average damage amount of the same year by at most 300 €.
Solution
Part 1: Collect all years and determine the average of all claims within each year.
Part 2: Fetch all claims whose damage amount lies within the range "average plus/minus 300" for the respective year.
End of explanation
%%sql
SELECT Fahrzeug.ID, Kennzeichen, Typen.ID As TYP, Typen.Bezeichnung
FROM Fahrzeug,
(SELECT ID, Bezeichnung
FROM Fahrzeugtyp
WHERE Hersteller_ID =
(SELECT ID
FROM Fahrzeughersteller
WHERE Name = 'Volkswagen' )
) Typen
WHERE Fahrzeugtyp_ID = Typen.ID;
Explanation: A GROUP BY collects all years together with the average damage amounts (part 1 of the solution).
For part 2 of the solution, each claim only has to be compared, by year and damage amount, with the matching row in the result table temp.
That is the essential difference and the decisive advantage over other solutions: the
averages are computed once and then only looked up; they do not
have to be recomputed (over and over) for every single row.
Task
Determine all vehicles of a given manufacturer, including the type.
Part 1: Fetch the ID of the desired manufacturer.
Part 2: Fetch all IDs and designations of the table Fahrzeugtyp that belong to this manufacturer ID.
Part 3: Fetch all vehicles that belong to this list of vehicle-type IDs.
End of explanation
%sql mysql://steinam:steinam@localhost/so_2016
%%sql
-- Original Roth
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from Kurs, Kursart, Kundekurs
where KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50;
%%sql
-- the attempted subquery solution, rewritten as valid SQL:
-- courses whose enrolment count is below half the maximum number of participants
select kurs.kursid
from kurs
where (select count(kundekurs.kundenid) from kundekurs where kundekurs.kursid = kurs.kursid)
      < ((select teilnehmerMax from kursart where kursart.kursartId = kurs.kursartId) * 0.5);
%%sql
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from Kurs, Kursart, Kundekurs
where KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50
%%sql
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from kurs left join kundekurs
on kurs.`kursid` = kundekurs.`Kursid`
inner join kursart
on `kurs`.`kursartid` = `kursart`.`kursartid`
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50
Explanation: Exercises
Which of the following statements are true, which are false?
The result of a subquery can be used if it is a single value or a list in the form of a table. Other results are not possible.
A single value as a result can be obtained by a direct query or by a column function.
Subqueries should not be used when the WHERE condition takes a different value for every row of the main query, so the subquery has to be executed again each time.
Several subqueries can be nested.
For performance it makes no difference whether several subqueries or JOINs are used.
A subquery returning a table cannot make meaningful use of GROUP BY.
A subquery returning a table cannot make meaningful use of ORDER BY.
For a subquery returning a table, an alias name for the table is useful but not required.
For a subquery returning a table, alias names for the columns are useful but not required.
Which contracts (with some details) did the employee "Braun, Christian" conclude? Ignore the possibility that there might be several employees of that name.
Show all contracts that belong to the customer 'Heckel Obsthandel GmbH'. Ignore the possibility that this customer might be stored more than once.
Change the solution of exercise 3 so that several customers with this name are also conceivable as a result.
Show all vehicles that were involved in a claim in 2008.
Show all vehicle types (with ID, designation, and manufacturer name) that were involved in a claim in 2008.
Determine all vehicles of a given manufacturer, including the type.
For every employee of the "Vertrieb" (sales) department, show the first contract (with some details) that he or she concluded. The employee should be shown with ID and name/first name.
Deutsche Post AG supplies a table PLZ_Aenderung with the following contents:
```csv
ID PLZalt Ortalt PLZneu Ortneu
1 45658 Recklinghausen 45659 Recklinghausen
2 45721 Hamm-Bossendorf 45721 Haltern OT Hamm
3 45772 Marl 45770 Marl
4 45701 Herten 45699 Herten
```
Change the table Versicherungsnehmer so that for all addresses whose PLZ/Ort match PLZalt/Ortalt,
these values are replaced by PLZneu/Ortneu.
Hints: Restrict yourself to the change with ID=3. (The complete solution is only possible with
SQL programming.) This change file contains purely fictitious data, no real changes.
Summer 2016
End of explanation |
615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to TensorFlow, fitting point by point
In this notebook, we introduce TensorFlow by fitting a line of the form y=m*x+b point by point. This is a derivation of Jared Ostmeyer's Naked Tensor code.
Load dependencies and set seeds for reproducibility
Step1: Create a very small data set
Step2: Define variables -- the model parameters we'll learn -- and initialize them with "random" values
Step3: One single point at a time, define the error between the true label and the model's prediction of the label
Step4: Define optimizer as SSE-minimizing gradient descent
Step5: Define an operator that will initialize the graph with all available global variables
Step6: With the computational graph designed, we initialize a session to execute it
Step7: Calculate the predicted model outputs given the inputs xs | Python Code:
import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
Explanation: Introduction to TensorFlow, fitting point by point
In this notebook, we introduce TensorFlow by fitting a line of the form y=m*x+b point by point. This is a derivation of Jared Ostmeyer's Naked Tensor code.
Load dependencies and set seeds for reproducibility
End of explanation
xs = [0., 1., 2., 3., 4., 5., 6., 7.] # feature (independent variable)
ys = [-.82, -.94, -.12, .26, .39, .64, 1.02, 1.] # labels (dependent variable)
fig, ax = plt.subplots()
_ = ax.scatter(xs, ys)
Explanation: Create a very small data set
End of explanation
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
Explanation: Define variables -- the model parameters we'll learn -- and initialize them with "random" values
End of explanation
total_error = 0.0
for x,y in zip(xs, ys):
y_model = m*x + b
total_error += (y-y_model)**2
Explanation: One single point at a time, define the error between the true label and the model's prediction of the label
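For comparison, the same sum of squared errors can be written in one vectorized step (an alternative sketch, not used in the rest of this notebook):
```python
xs_t = tf.constant(xs)   # shape (8,)
ys_t = tf.constant(ys)
total_error_vec = tf.reduce_sum((ys_t - (m*xs_t + b))**2)
```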
End of explanation
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(total_error)
Explanation: Define optimizer as SSE-minimizing gradient descent
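Under the hood this optimizer applies one gradient-descent update per call; a hand-rolled equivalent (sketch only) would be:
```python
grad_m, grad_b = tf.gradients(total_error, [m, b])
learning_rate = 0.01
manual_step = tf.group(m.assign(m - learning_rate*grad_m),
                       b.assign(b - learning_rate*grad_b))
```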
End of explanation
initializer_op = tf.global_variables_initializer()
Explanation: Define an operator that will initialize the graph with all available global variables
End of explanation
with tf.Session() as sess:
sess.run(initializer_op)
n_epochs = 10
for iteration in range(n_epochs):
sess.run(optimizer_operation)
slope, intercept = sess.run([m, b])
slope
intercept
Explanation: With the computational graph designed, we initialize a session to execute it
End of explanation
y_hat = slope*np.array(xs) + intercept
pd.DataFrame(list(zip(ys, y_hat)), columns=['y', 'y_hat'])
fig, ax = plt.subplots()
ax.scatter(xs, ys)
x_min, x_max = ax.get_xlim()
y_min, y_max = intercept, intercept + slope*(x_max-x_min)
ax.plot([x_min, x_max], [y_min, y_max])
_ = ax.set_xlim([x_min, x_max])
Explanation: Calculate the predicted model outputs given the inputs xs
End of explanation |
616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-3', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: AWI
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
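For example (a purely illustrative placeholder, not a real author; substitute the actual details):
```python
DOC.set_author("Jane Doe", "jane.doe@example.org")  # hypothetical name and address
```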
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
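Purely as an illustration (assuming, as the PROPERTY VALUE(S) comment above suggests, that a 1.N ENUM is filled with one DOC.set_value call per applicable choice; the values named here are hypothetical picks from the list above):
```python
DOC.set_value("primitive equations")
DOC.set_value("hydrostatic")
```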
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
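Numeric properties are set without quotation marks, as the DOC.set_value(value) hint in the template indicates; the number below is a made-up example.
# Hypothetical example for an INTEGER property (no quotes around the value)
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
DOC.set_value(6)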
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
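Boolean properties take the Python literals True or False, unquoted; the value chosen here is purely illustrative.
# Hypothetical example for a BOOLEAN property
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
DOC.set_value(True)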
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
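Properties with cardinality 0.1 or 0.N are optional, so a cell like the one above may be left with its TODO comment if the information does not apply; if a value is supplied it is set in the usual way, and the choice below is hypothetical.
# Hypothetical example for an optional (cardinality 0.N) ENUM property
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
DOC.set_value("single moment")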
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
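Pulling the preceding COSP cells together, a completed sub-section might look like the sketch below; it assumes DOC.set_id can be called again to switch to the next property, mirroring the per-cell pattern, and every number is a made-up placeholder rather than a real configuration.
# Hypothetical, fully invented COSP configuration for illustration only
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
DOC.set_value("Inline")
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
DOC.set_value(8192)
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
DOC.set_value(20)
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
DOC.set_value(40)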
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
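A FLOAT property such as the radar frequency is given as a plain number in the stated units; 94 GHz, typical of CloudSat-class cloud radars, is used here only as an illustrative assumption.
# Hypothetical example for a FLOAT property (frequency in Hz)
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
DOC.set_value(94.0e9)  # e.g. a 94 GHz cloud radar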
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
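If the solar constant is fixed, the value goes in as a float in W m-2; 1361.0 below is only an illustrative figure of roughly the right magnitude, not a recommendation.
# Hypothetical example: fixed solar constant in W m-2
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
DOC.set_value(1361.0)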
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
RNN for Text Generation
Generating Text (encoded variables)
We saw how to generate continuous values, now let's see how to generalize this to generate categorical sequences (such as words or letters).
Imports
Step1: Get Text Data
Step2: Encode Entire Text
Step3: One Hot Encoding
As previously discussed, we need to one-hot encode our data in order for it to work with the network structure. Make sure to review numpy if any of these operations confuse you!
Step4: --------------
Creating Training Batches
We need to create a function that will generate batches of characters along with the next character in the sequence as a label.
-----------------
Step5: Example of generating a batch
Step6: GPU Check
Remember this will take a lot longer on CPU!
Step7: Creating the LSTM Model
Note! We will have options for GPU users and CPU users. CPU will take MUCH LONGER to train and you may encounter RAM issues depending on your hardware. If that is the case, consider using cloud services like AWS, GCP, or Azure. Note, these may cost you money to use!
Step8: Instance of the Model
Step9: Try to make the total_parameters be roughly the same magnitude as the number of characters in the text.
Step10: Optimizer and Loss
Step11: Training Data and Validation Data
Step12: Training the Network
Variables
Feel free to play around with these values!
Step13:
Step14: -------
Saving the Model
https://pytorch.org/tutorials/beginner/saving_loading_models.html
Step15: Load Model
Step16: Generating Predictions | Python Code:
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
RNN for Text Generation
Generating Text (encoded variables)
We saw how to generate continuous values, now let's see how to generalize this to generate categorical sequences (such as words or letters).
Imports
End of explanation
with open('../Data/shakespeare.txt','r',encoding='utf8') as f:
text = f.read()
text[:1000]
print(text[:1000])
len(text)
Explanation: Get Text Data
End of explanation
all_characters = set(text)
# all_characters
decoder = dict(enumerate(all_characters))
# decoder
# decoder.items()
encoder = {char: ind for ind,char in decoder.items()}
# encoder
encoded_text = np.array([encoder[char] for char in text])
encoded_text[:500]
Explanation: Encode Entire Text
End of explanation
def one_hot_encoder(encoded_text, num_uni_chars):
'''
encoded_text : batch of encoded text
num_uni_chars = number of unique characters (len(set(text)))
'''
# METHOD FROM:
# https://stackoverflow.com/questions/29831489/convert-array-of-indices-to-1-hot-encoded-numpy-array
# Create a placeholder for zeros.
one_hot = np.zeros((encoded_text.size, num_uni_chars))
# Convert data type for later use with pytorch (errors if we don't!)
one_hot = one_hot.astype(np.float32)
# Using fancy indexing fill in the 1s at the correct index locations
one_hot[np.arange(one_hot.shape[0]), encoded_text.flatten()] = 1.0
# Reshape it so it matches the batch shape
one_hot = one_hot.reshape((*encoded_text.shape, num_uni_chars))
return one_hot
one_hot_encoder(np.array([1,2,0]),3)
Explanation: One Hot Encoding
As previously discussed, we need to one-hot encode our data inorder for it to work with the network structure. Make sure to review numpy if any of these operations confuse you!
End of explanation
example_text = np.arange(10)
example_text
# If we wanted 5 batches
example_text.reshape((5,-1))
def generate_batches(encoded_text, samp_per_batch=10, seq_len=50):
'''
Generate (using yield) batches for training.
X: Encoded Text of length seq_len
Y: Encoded Text shifted by one
Example:
X:
[[1 2 3]]
Y:
[[ 2 3 4]]
encoded_text : Complete Encoded Text to make batches from
samp_per_batch : Number of samples per batch
seq_len : Length of character sequence
'''
# Total number of characters per batch
# Example: If samp_per_batch is 2 and seq_len is 50, then 100
# characters come out per batch.
char_per_batch = samp_per_batch * seq_len
# Number of batches available to make
# Use int() to truncate to a whole number of batches
num_batches_avail = int(len(encoded_text)/char_per_batch)
# Cut off end of encoded_text that
# won't fit evenly into a batch
encoded_text = encoded_text[:num_batches_avail * char_per_batch]
# Reshape text into rows the size of a batch
encoded_text = encoded_text.reshape((samp_per_batch, -1))
# Go through each row in array.
for n in range(0, encoded_text.shape[1], seq_len):
# Grab feature characters
x = encoded_text[:, n:n+seq_len]
# y is the target shifted over by 1
y = np.zeros_like(x)
#
try:
y[:, :-1] = x[:, 1:]
y[:, -1] = encoded_text[:, n+seq_len]
# FOR POTENTIAL INDEXING ERROR AT THE END
except:
y[:, :-1] = x[:, 1:]
y[:, -1] = encoded_text[:, 0]
yield x, y
Explanation: --------------
Creating Training Batches
We need to create a function that will generate batches of characters along with the next character in the sequence as a label.
-----------------
End of explanation
sample_text = encoded_text[:20]
sample_text
batch_generator = generate_batches(sample_text,samp_per_batch=2,seq_len=5)
# Grab first batch
x, y = next(batch_generator)
x
y
Explanation: Example of generating a batch
End of explanation
torch.cuda.is_available()
Explanation: GPU Check
Remember this will take a lot longer on CPU!
End of explanation
class CharModel(nn.Module):
def __init__(self, all_chars, num_hidden=256, num_layers=4,drop_prob=0.5,use_gpu=False):
# SET UP ATTRIBUTES
super().__init__()
self.drop_prob = drop_prob
self.num_layers = num_layers
self.num_hidden = num_hidden
self.use_gpu = use_gpu
#CHARACTER SET, ENCODER, and DECODER
self.all_chars = all_chars
self.decoder = dict(enumerate(all_chars))
self.encoder = {char: ind for ind,char in self.decoder.items()}
self.lstm = nn.LSTM(len(self.all_chars), num_hidden, num_layers, dropout=drop_prob, batch_first=True)
self.dropout = nn.Dropout(drop_prob)
self.fc_linear = nn.Linear(num_hidden, len(self.all_chars))
def forward(self, x, hidden):
lstm_output, hidden = self.lstm(x, hidden)
drop_output = self.dropout(lstm_output)
drop_output = drop_output.contiguous().view(-1, self.num_hidden)
final_out = self.fc_linear(drop_output)
return final_out, hidden
def hidden_state(self, batch_size):
'''
Used as separate method to account for both GPU and CPU users.
'''
if self.use_gpu:
hidden = (torch.zeros(self.num_layers,batch_size,self.num_hidden).cuda(),
torch.zeros(self.num_layers,batch_size,self.num_hidden).cuda())
else:
hidden = (torch.zeros(self.num_layers,batch_size,self.num_hidden),
torch.zeros(self.num_layers,batch_size,self.num_hidden))
return hidden
Explanation: Creating the LSTM Model
Note! We will have options for GPU users and CPU users. CPU will take MUCH LONGER to train and you may encounter RAM issues depending on your hardware. If that is the case, consider using cloud services like AWS, GCP, or Azure. Note, these may cost you money to use!
End of explanation
model = CharModel(
all_chars=all_characters,
num_hidden=512,
num_layers=3,
drop_prob=0.5,
use_gpu=True,
)
total_param = []
for p in model.parameters():
total_param.append(int(p.numel()))
Explanation: Instance of the Model
End of explanation
sum(total_param)
len(encoded_text)
Explanation: Try to make the total_parameters be roughly the same magnitude as the number of characters in the text.
End of explanation
optimizer = torch.optim.Adam(model.parameters(),lr=0.001)
criterion = nn.CrossEntropyLoss()
Explanation: Optimizer and Loss
End of explanation
# percentage of data to be used for training
train_percent = 0.1
len(encoded_text)
int(len(encoded_text) * (train_percent))
train_ind = int(len(encoded_text) * (train_percent))
train_data = encoded_text[:train_ind]
val_data = encoded_text[train_ind:]
Explanation: Training Data and Validation Data
End of explanation
## VARIABLES
# Epochs to train for
epochs = 50
# batch size
batch_size = 128
# Length of sequence
seq_len = 100
# for printing report purposes
# always start at 0
tracker = 0
# number of characters in text
num_char = max(encoded_text)+1
Explanation: Training the Network
Variables
Feel free to play around with these values!
End of explanation
# Set model to train
model.train()
# Check to see if using GPU
if model.use_gpu:
model.cuda()
for i in range(epochs):
hidden = model.hidden_state(batch_size)
for x,y in generate_batches(train_data,batch_size,seq_len):
tracker += 1
# One Hot Encode incoming data
x = one_hot_encoder(x,num_char)
# Convert Numpy Arrays to Tensor
inputs = torch.from_numpy(x)
targets = torch.from_numpy(y)
# Adjust for GPU if necessary
if model.use_gpu:
inputs = inputs.cuda()
targets = targets.cuda()
# Reset Hidden State
# If we don't reset we would backpropagate through all training history
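# (taking .data gives the hidden tensors detached from the previous graph)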
hidden = tuple([state.data for state in hidden])
model.zero_grad()
lstm_output, hidden = model.forward(inputs,hidden)
loss = criterion(lstm_output,targets.view(batch_size*seq_len).long())
loss.backward()
# POSSIBLE EXPLODING GRADIENT PROBLEM!
# LET'S CLIP JUST IN CASE
nn.utils.clip_grad_norm_(model.parameters(),max_norm=5)
optimizer.step()
###################################
### CHECK ON VALIDATION SET ######
#################################
if tracker % 25 == 0:
val_hidden = model.hidden_state(batch_size)
val_losses = []
model.eval()
for x,y in generate_batches(val_data,batch_size,seq_len):
# One Hot Encode incoming data
x = one_hot_encoder(x,num_char)
# Convert Numpy Arrays to Tensor
inputs = torch.from_numpy(x)
targets = torch.from_numpy(y)
# Adjust for GPU if necessary
if model.use_gpu:
inputs = inputs.cuda()
targets = targets.cuda()
# Reset Hidden State
# If we don't reset we would backpropagate through
# all training history
val_hidden = tuple([state.data for state in val_hidden])
lstm_output, val_hidden = model.forward(inputs,val_hidden)
val_loss = criterion(lstm_output,targets.view(batch_size*seq_len).long())
val_losses.append(val_loss.item())
# Reset to training model after val for loop
model.train()
print(f"Epoch: {i} Step: {tracker} Val Loss: {val_loss.item()}")
Explanation:
End of explanation
# Be careful not to overwrite our original model file!
model_name = 'example.net'
torch.save(model.state_dict(),model_name)
Explanation: -------
Saving the Model
https://pytorch.org/tutorials/beginner/saving_loading_models.html
End of explanation
# MUST MATCH THE EXACT SAME SETTINGS AS MODEL USED DURING TRAINING!
model = CharModel(
all_chars=all_characters,
num_hidden=512,
num_layers=3,
drop_prob=0.5,
use_gpu=True,
)
model.load_state_dict(torch.load(model_name))
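# Note: when loading on a CPU-only machine a model that was trained on GPU,
# torch.load(model_name, map_location='cpu') may be needed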
model.eval()
Explanation: Load Model
End of explanation
def predict_next_char(model, char, hidden=None, k=1):
# Encode raw letters with model
encoded_text = model.encoder[char]
# set as numpy array for one hot encoding
# NOTE THE [[ ]] dimensions!!
encoded_text = np.array([[encoded_text]])
# One hot encoding
encoded_text = one_hot_encoder(encoded_text, len(model.all_chars))
# Convert to Tensor
inputs = torch.from_numpy(encoded_text)
# Check for CPU
if(model.use_gpu):
inputs = inputs.cuda()
# Grab hidden states
hidden = tuple([state.data for state in hidden])
# Run model and get predicted output
lstm_out, hidden = model(inputs, hidden)
# Convert lstm_out to probabilities
probs = F.softmax(lstm_out, dim=1).data
if(model.use_gpu):
# move back to CPU to use with numpy
probs = probs.cpu()
# k determines how many characters to consider
# for our probability choice.
# https://pytorch.org/docs/stable/torch.html#torch.topk
# Return k largest probabilities in tensor
probs, index_positions = probs.topk(k)
index_positions = index_positions.numpy().squeeze()
# Create array of probabilities
probs = probs.numpy().flatten()
# Convert to probabilities per index
probs = probs/probs.sum()
# randomly choose a character based on probabilities
char = np.random.choice(index_positions, p=probs)
# return the encoded value of the predicted char and the hidden state
return model.decoder[char], hidden
def generate_text(model, size, seed='The', k=1):
# CHECK FOR GPU
if(model.use_gpu):
model.cuda()
else:
model.cpu()
# Evaluation mode
model.eval()
# begin output from initial seed
output_chars = [c for c in seed]
# intiate hidden state
hidden = model.hidden_state(1)
# predict the next character for every character in seed
for char in seed:
char, hidden = predict_next_char(model, char, hidden, k=k)
# add initial characters to output
output_chars.append(char)
# Now generate for size requested
for i in range(size):
# predict based off very last letter in output_chars
char, hidden = predict_next_char(model, output_chars[-1], hidden, k=k)
# add predicted character
output_chars.append(char)
# return string of predicted text
return ''.join(output_chars)
print(generate_text(model, 1000, seed='The ', k=3))
Explanation: Generating Predictions
End of explanation |
618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manipulating the Pandas DataFrame
The iPython notebook for this demo can be found in
Step1: First I'm going to pull out a small subset to work with
Step2: I happen to like the way that's organized, but let's say that I want to have the item descriptions in columns and the mode ID's and element numbers in rows. To do that, I'll first move the element ID's up to the columns using a .unstack(level=0) and then transpose the result
Step3: unstack requires unique row indices so I can't work with CQUAD4 stresses as they're currently output, but I'll work with CHEXA stresses. Let's pull out the first two elements and first two modes
Step4: Now I want to put ElementID and the Node ID in the rows along with the Load ID, and have the items in the columns
Step5: Maybe I'd like my rows organized with the modes on the inside. I can do that by swapping levels
Step6: Alternatively I can do that by first using reset_index to move all the index columns into data, and then using set_index to define the order of columns I want as my index | Python Code:
import os
import pyNastran
pkg_path = pyNastran.__path__[0]
from pyNastran.op2.op2 import read_op2
import pandas as pd
pd.set_option('precision', 2)
op2_filename = os.path.join(pkg_path, '..', 'models', 'iSat', 'iSat_launch_100Hz.op2')
from pyNastran.op2.op2 import read_op2
isat = read_op2(op2_filename, build_dataframe=True, debug=False, skip_undefined_matrices=True)
cbar = isat.cbar_force[1].data_frame
cbar.head()
Explanation: Manipulating the Pandas DataFrame
The iPython notebook for this demo can be found in:
- docs\quick_start\demo\op2_pandas_unstack.ipynb
- https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_pandas_unstack.ipynb
This example will use pandas unstack
The unstack method on a DataFrame moves one index level from rows to columns. First let's read in some data:
End of explanation
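As a quick toy illustration of unstack itself (separate from the pyNastran data; the small frame below is made up):
import pandas as pd
toy_index = pd.MultiIndex.from_product([['A', 'B'], [1, 2]], names=['letter', 'number'])
toy = pd.Series([1., 2., 3., 4.], index=toy_index, name='value')
toy.unstack(level=0)  # 'letter' moves from the row index to the columns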
csub = cbar.loc[3323:3324,1:2]
csub
Explanation: First I'm going to pull out a small subset to work with
End of explanation
csub.unstack(level=0).T
Explanation: I happen to like the way that's organized, but let's say that I want to have the item descriptions in columns and the mode ID's and element numbers in rows. To do that, I'll first move the element ID's up to the columns using a .unstack(level=0) and then transpose the result:
End of explanation
chs = isat.chexa_stress[1].data_frame.loc[3684:3685,1:2]
chs
Explanation: unstack requires unique row indices so I can't work with CQUAD4 stresses as they're currently output, but I'll work with CHEXA stresses. Let's pull out the first two elements and first two modes:
End of explanation
cht = chs.unstack(level=[0,1]).T
cht
Explanation: Now I want to put ElementID and the Node ID in the rows along with the Load ID, and have the items in the columns:
End of explanation
cht = cht.dropna()
cht
# mode, eigr, freq, rad, eids, nids # initial
# nids, eids, eigr, freq, rad, mode # final
cht.swaplevel(0,4).swaplevel(1,5).swaplevel(2,5).swaplevel(4, 5)
Explanation: Maybe I'd like my rows organized with the modes on the inside. I can do that by swapping levels:
We actually need to get rid of the extra rows using dropna():
End of explanation
cht.reset_index().set_index(['ElementID','NodeID','Mode','Freq']).sort_index()
Explanation: Alternatively I can do that by first using reset_index to move all the index columns into data, and then using set_index to define the order of columns I want as my index:
End of explanation |
619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<br />
<br />
Gilles Pirio @ Ripple Research & Data Team
August 20, 2015
Visualizing order books on Ripple
Ripple is a distributed ledger that is not limited to one currency. The Ripple protocol allows users or entities to define their store of value (and define what that value is as it is not only limited to fiat currencies). A native trading engine is implemented in rippled, the Ripple daemon. The Ripple protocol also defines a native currency designed to improve market liquidity
Step1: Given one currency pair, the getOrderbook() method will return the order book, either locally if cached, or by connecting to a data provider. Bitstamp is a major gateway on Ripple, and we will pull the USD@Bitstamp / XRP order book. The issuer account for Bitstamp can be found here. The showInfo() method prints basic information about the order book.
Step2: We use the plot() function to visualize the order book.
Step3: It is also possible to add a weighted average plot. The weighted average represents the global exchange rate as a function of the amount of currency exchanged.
Step4: As expected, the bigger the amount exchanged, the worse the actual exchange rate. Methods to compute weighted average are available. For instance
Step5: Note that there are simply no guarantees these exchange rates could be achieved if a real transaction were to be executed on Ripple. That is because other players on the market could also place competing buy/sell orders and alter the actual exchange rates significantly.
The role of XRP
We are now going to visualize the USD to USD direct conversion rate between two gateways on the Ripple network, Bitstamp and Gatehub.
Step6: We want to explore the crossing of the USD@Bitstamp / USD@Gatehub currency pair with another currency, XRP. This method is heavily used on the Forex market, where it is often called 'triangulation'.
To do so, we pull the following two books
Step7: We can do the product of the two previous order books. The result will be a synthetic order book that simulates a trade going through the USD@Bitstamp / XRP and XRP / USD@Gatehub order books.
Step8: Both USD_Bitstamp_Gatehub_Through_XRP and USD_Bitstamp_Gatehub have the same currency pair. The only difference is that the former describes a trade going through XRP as an intermediary currency. We now plot the weighted average for both books.
Step9: First, it is noticeable that the exchange rate for USD issued by Gatehub and Bitstamp may not be exactly 1.0. One of the reasons may be related to the amount of trust the market places in these two gateways.
Second, directly trading USD@Bitstamp to USD@Gatehub has some limitations | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from pyripple.feed import syncfeed
feed = syncfeed.SyncFeed()
Explanation: <br />
<br />
Gilles Pirio @ Ripple Research & Data Team
August 20, 2015
Visualizing order books on Ripple
Ripple is a distributed ledger that is not limited to one currency. The Ripple protocol allows users or entities to define their store of value (and define what that value is as it is not only limited to fiat currencies). A native trading engine is implemented in rippled, the Ripple daemon. The Ripple protocol also defines a native currency designed to improve market liquidity: XRP. The Ripple consensus ledger therefore includes all the building blocks necessary for a market to function and enable trading for users, gateways, market makers, financial institutions, ...
XRP has a central place in the protocol: beyond bringing security and reducing spam, it is also designed to increase the overall liquidity. XRP often acts as a bridge between the different currencies - if trading between two currencies is not possible because no (or limited) offers are available, the Ripple protocol will often be able to find a path when both currencies can be traded to/from XRP.
We introduce a tool used to visualize order books and explore how XRP is indeed increasing liquidity between currencies.
Getting and plotting order books
We start by importing useful modules from matplotlib (the plotting package) and pyripple. We then create a feed object to get data from Ripple.
End of explanation
# Bitstamp issuer address
Bitstamp = 'rvYAfWj5gh67oV6fW32ZzP3Aw4Eubs59B'
# Getting USD@Bitstamp / XRP order book
USDXRP_Bitstamp = feed.getOrderbook(('USD', Bitstamp), ('XRP', None))
# Show some information
USDXRP_Bitstamp.showInfo()
Explanation: Given one currency pair, the getOrderbook() method will return the order book, either locally if cached, or by connecting to a data provider. Bitstamp is a major gateway on Ripple, and we will pull the USD@Bitstamp / XRP order book. The issuer account for Bitstamp can be found here. The showInfo() method prints basic information about the order book.
End of explanation
USDXRP_Bitstamp.plot()
Explanation: We use the plot() function to visualize the order book.
End of explanation
# Normal orderbook plot
USDXRP_Bitstamp.plot()
# Add a weighted plot (newfig is set to False so we don't create a new plot)
USDXRP_Bitstamp.plotWeighted(10e6, newfig= False, styleask= 'b-.', stylebid='r-.', label='Weighted')
# Set y and x limit as well as the legend
plt.gca().set_ylim((100, 150))
plt.gca().set_xlim((0, 10e6))
plt.legend()
Explanation: It is also possible to add a weighted average plot. The weighted average represents the global exchange rate as a function of the amount of currency exchanged.
End of explanation
for i in [1,2,3,4,5,6,7]:
print('The global exchange rate to buy %8i XRP could be %f' % (10**i, USDXRP_Bitstamp.weigtedAverageA(10**i)))
for i in [1,2,3,4,5,6,7]:
print('The global exchange rate to sell %8i XRP could be %f' % (10**i, USDXRP_Bitstamp.weigtedAverageB(10**i)))
Explanation: As expected, the bigger the amount exchanged, the worse the actual exchange rate. Methods to compute weighted average are available. For instance:
End of explanation
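For reference, the idea behind such a weighted average can be sketched over a list of (price, amount) offer levels; this is only an illustration of the concept, not pyripple's implementation:
def weighted_average_rate(levels, target_amount):
    # levels: (price, amount) tuples sorted from best to worst price
    filled = 0.0
    cost = 0.0
    for price, amount in levels:
        take = min(amount, target_amount - filled)
        filled += take
        cost += take * price
        if filled >= target_amount:
            break
    return cost / filled if filled else float('nan')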
# Gatehub issuer address
Gatehub = 'rhub8VRN55s94qWKDv6jmDy1pUykJzF3wq'
# Getting USD@Bitstamp to USD@Gatehub orderbook
USD_Bitstamp_Gatehub = feed.getOrderbook(('USD', Bitstamp), ('USD', Gatehub)) # Getting the order book
# Showing some info
USD_Bitstamp_Gatehub.showInfo()
Explanation: Note that there are simply no guarantees these exchange rates could be achieved if a real transaction were to be executed on Ripple. That is because other players on the market could also place competing buy/sell orders and alter the actual exchange rates significantly.
The role of XRP
We are now going to visualize the USD to USD direct conversion rate between two gateways on the Ripple network, Bitstamp and Gatehub.
End of explanation
# Getting USD@Bitstamp to XRP orderbook
USDXRP_Bitstamp = feed.getOrderbook(('USD', Bitstamp), ('XRP', None))
# Getting XRP to USD@Gatehub
XRPUSD_Gatehub = feed.getOrderbook(('XRP', None), ('USD', Gatehub))
Explanation: We want to explore the crossing of the USD@Bitstamp / USD@Gatehub currency pair with another currency, XRP. This method is heavily used on the Forex market, where it is often called 'triangulation'.
To do so, we pull the following two books: USD@Bitstamp / XRP and XRP / USD@Gatehub.
End of explanation
# The product of two order books produces a synthetic order book simulating a trade going through both initial books
USD_Bitstamp_Gatehub_Through_XRP = USDXRP_Bitstamp * XRPUSD_Gatehub
# Showing some info
USD_Bitstamp_Gatehub_Through_XRP.showInfo()
Explanation: We can do the product of the two previous order books. The result will be a synthetic order book that simulates a trade going through the USD@Bitstamp / XRP and XRP / USD@Gatehub order books.
End of explanation
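The effect of this product can be illustrated with a single pair of best offers (again a conceptual sketch, not the pyripple code):
# leg 1: rate1 XRP per USD@Bitstamp, for up to amount1 USD@Bitstamp
# leg 2: rate2 USD@Gatehub per XRP, for up to amount2 XRP
def cross_best_offers(rate1, amount1, rate2, amount2):
    rate = rate1 * rate2                    # synthetic USD@Gatehub per USD@Bitstamp
    amount = min(amount1, amount2 / rate1)  # capacity in USD@Bitstamp, limited by the smaller leg
    return rate, amount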
# Direct orderbook in green
USD_Bitstamp_Gatehub.plotWeighted(6000, styleask='g', stylebid='g--', label='Direct')
# Through XRP in red
USD_Bitstamp_Gatehub_Through_XRP.plotWeighted(6000, stylebid='r--', styleask='r', newfig= False, label='Through XRP')
# Setting limits
plt.gca().set_ylim((0.95, 1.1))
plt.legend()
Explanation: Both USD_Bitstamp_Gatehub_Through_XRP and USD_Bitstamp_Gatehub have the same currency pair. The only difference is that the former describes a trade going through XRP as an intermediary currency. We now plot the weighted average for both books.
End of explanation
# Issuer address for Snapswap
Snapswap = 'rMwjYedjc7qqtKYVLiAccJSmCwih4LnE2q'
# Pull the 3 order books
USDXRP_Bitstamp = feed.getOrderbook(('USD', Bitstamp), ('XRP', None))
USDXRP_Snapswap = feed.getOrderbook(('USD', Snapswap), ('XRP', None))
USDXRP_Gatehub = feed.getOrderbook(('USD', Gatehub), ('XRP', None))
# Plotting the order books on the same graph
USDXRP_Bitstamp.plotWeighted(10e6, newfig= True, styleask='b', stylebid='b--', label='Bitstamp')
USDXRP_Snapswap.plotWeighted(10e6, newfig= False, styleask='g', stylebid='g--', label='Snapswap')
USDXRP_Gatehub.plotWeighted(10e6, newfig= False, styleask='r', stylebid='r--', label='Gatehub')
plt.gca().set_ylim((100,150))
plt.legend()
Explanation: First, it is noticeable that the exchange rate for USD issued by Gatehub and Bitstamp may not be exactly 1.0. One of the reasons may be related to the amount of trust the market places in these two gateways.
Second, directly trading USD@Bitstamp to USD@Gatehub has some limitations:
* The order book is limited in size - meaning that directly selling/buying Gatehub USD for Bitstamp USD won't be possible for amounts in excess of ~USD 1500.
* The rate offered by the direct order book is not always competitive. In some cases, going through XRP gives a better trade in terms of available size (payment in excess of USD 10k can easily be processed) and rate.
The good news is that on the Ripple network, exchanges will usually use the cheapest rate available. More specifically, a trade on Ripple will use the best price out of: the direct path and the path going through XRP as an intermediary currency. Alternatively, it is also possible to specify a custom path, though that would require a custom pathfinding engine.
Comparing orderbook
We are now going to visualize and compare the USDXRP books for 3 USD gateways on Ripple: Bitstamp, Snapswap and Gatehub.
End of explanation |
620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import GraphLab
Step1: Load the dataset
Step2: The data contains Wikipedia articles about different people.
We will look for former president Barack Obama
Step3: Count the words in Obama's article
Step4: Convert the dictionary into a table
Step5: Sort the most repeated words.
Step6: The most common words don't give us any information.
Develop a TF-IDF algorithm to solve this problem. We will apply the word count as a column and apply it to all the articles.
Step7: Examine the TF-IDF of Obama's article
Step8: The TF-IDF algorithm gives us more information.
Build a nearest neighbor model.
Step9: Which person is most related to Obama?
Step10: Other examples | Python Code:
import graphlab
Explanation: Import GraphLab
End of explanation
people = graphlab.SFrame('people_wiki.gl/')
Explanation: Load the dataset
End of explanation
people.head()
len(people)
Explanation: The data contains Wikipedia articles about different people.
End of explanation
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
Explanation: We will look for former president Barack Obama
End of explanation
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
Explanation: Count the words in Obama's article
End of explanation
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
Explanation: Convert the dictionary into a table
End of explanation
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
Explanation: Sort the most repeated words.
End of explanation
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
people['tfidf'] = graphlab.text_analytics.tf_idf(people['word_count'])
Explanation: The most common words don't give us any information.
Develop a TF-IDF algorithm to solve this problem. We will apply the word count as a column and apply it to all the articles.
End of explanation
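For reference, the TF-IDF weighting can be sketched in plain Python from per-document word-count dictionaries (an illustration of the usual tf * log(N / df) formula, not GraphLab's exact implementation):
import math
def tf_idf(word_counts):
    # word_counts: list of {word: count} dictionaries, one per document
    n_docs = len(word_counts)
    doc_freq = {}
    for wc in word_counts:
        for word in wc:
            doc_freq[word] = doc_freq.get(word, 0) + 1
    return [{w: c * math.log(n_docs / float(doc_freq[w])) for w, c in wc.items()}
            for wc in word_counts]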
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
Explanation: Examine the TF-IDF of Obama's article
End of explanation
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
Explanation: The TF-IDF algorithm gives us more information.
Build a nearest neighbor model.
End of explanation
knn_model.query(obama)
Explanation: Which person is most related to Obama?
End of explanation
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
Explanation: Other examples
End of explanation |
621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Poster popularity by state
This notebook loads data of poster viewership at the SfN 2016 annual meeting, organized by the states that were affiliated with each poster.
We find that the posters are most popular
Import libraries and load data
Step1: 1. Summarize data by state
Step2: 2. Poster popularity vs. prevalence
Across states in the United States, we found a positive correlation between the number of posters from a state and the popularity of those posters.
Step3: 3. Permutation tests | Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
import pandas as pd
# Load data
df = pd.DataFrame.from_csv('./posterviewers_by_state.csv')
key_N = 'Number of people'
Explanation: Poster popularity by state
This notebook loads data of poster viewership at the SfN 2016 annual meeting, organized by the states that were affiliated with each poster.
We find that the posters are most popular
Import libraries and load data
End of explanation
# 0. Count number of posters from each state
# Calculate mean poster popularity
states = df['State'].unique()
dict_state_counts = {'State':states,'count':np.zeros(len(states),dtype=int),'popularity':np.zeros(len(states))}
for i, s in enumerate(states):
dict_state_counts['count'][i] = int(sum(df['State']==s))
dict_state_counts['popularity'][i] = np.round(np.mean(df[df['State']==s][key_N]),3)
df_counts = pd.DataFrame.from_dict(dict_state_counts)
# Visualize dataframe
# count = total number of posters counted affiliated with that state
# popularity = average number of viewers at a poster affiliated with that state
df_counts.head()
Explanation: 1. Summarize data by state
End of explanation
print sp.stats.spearmanr(np.log10(df_counts['count']),df_counts['popularity'])
plt.figure(figsize=(3,3))
plt.semilogx(df_counts['count'],df_counts['popularity'],'k.')
plt.xlabel('Number of posters\nin the state')
plt.ylabel('Average number of viewers per poster')
plt.ylim((-.1,3.6))
plt.xlim((.9,1000))
Explanation: 2. Poster popularity vs. prevalence
Across states in the United States, we found a positive correlation between the number of posters from a state and the popularity of those posters.
End of explanation
# Simulate randomized data
Nperm = 100
N_posters = len(df)
rand_statepop = np.zeros((Nperm,len(states)),dtype=np.ndarray)
rand_statepopmean = np.zeros((Nperm,len(states)))
for i in range(Nperm):
# Random permutation of posters, organized by state
randperm_viewers = np.random.permutation(df[key_N].values)
for j, s in enumerate(states):
rand_statepop[i,j] = randperm_viewers[np.where(df['State']==s)[0]]
rand_statepopmean[i,j] = np.mean(randperm_viewers[np.where(df['State']==s)[0]])
# True data: Calculate all p-values for the difference between 1 state's popularity and the rest
min_N_posters = 10
states_big = states[np.where(df_counts['count']>=min_N_posters)[0]]
N_big = len(states_big)
t_true_all = np.zeros(N_big)
p_true_all = np.zeros(N_big)
for i, state in enumerate(states_big):
t_true_all[i], _ = sp.stats.ttest_ind(df[df['State']==state][key_N],df[df['State']!=state][key_N])
_, p_true_all[i] = sp.stats.mannwhitneyu(df[df['State']==state][key_N],df[df['State']!=state][key_N])
pmin_pop = np.min(p_true_all[np.where(t_true_all>0)[0]])
pmin_unpop = np.min(p_true_all[np.where(t_true_all<0)[0]])
print 'Most popular state: ', states_big[np.argmax(t_true_all)], '. p=', str(pmin_pop)
print 'Least popular state: ', states_big[np.argmin(t_true_all)], '. p=', str(pmin_unpop)
# Calculate minimum p-values for each permutation
# Calculate all p and t values
t_rand_all = np.zeros((Nperm,N_big))
p_rand_all = np.zeros((Nperm,N_big))
pmin_pop_rand = np.zeros(Nperm)
pmin_unpop_rand = np.zeros(Nperm)
for i in range(Nperm):
for j, state in enumerate(states_big):
idx_use = range(len(states_big))
idx_use.pop(j)
t_rand_all[i,j], _ = sp.stats.ttest_ind(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use]))
_, p_rand_all[i,j] = sp.stats.mannwhitneyu(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use]))
# Identify the greatest significance of a state being more popular than the rest
pmin_pop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]>0)[0]])
# Identify the greatest significance of a state being less popular than the rest
pmin_unpop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]<0)[0]])
# Test if most popular and least popular states are outside of expectation
print 'Chance of a state being more distinctly popular than Minnesota: '
print sum(i < pmin_pop for i in pmin_pop_rand) / float(len(pmin_pop_rand))
print 'Chance of a state being less distinctly popular than Connecticut: '
print sum(i < pmin_unpop for i in pmin_unpop_rand) / float(len(pmin_unpop_rand))
Explanation: 3. Permutation tests: difference in popularity across states
In this code, we test if the relative popularity / unpopularity observed for any state is outside what is expected by chance
Here, the most popular and least popular states are defined by a nonparametric statistical test between the number of viewers at posters from their state, compared to posters from all other states.
End of explanation |
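The permutation logic above boils down to the following pattern (a distilled sketch of the idea, not a replacement for the code above):
def permutation_p_value(viewer_counts, stat_fn, n_perm=1000):
    # stat_fn pairs a vector of viewer counts with the fixed state labels
    # and returns a statistic where larger means "more extreme"
    observed = stat_fn(viewer_counts)
    n_extreme = sum(stat_fn(np.random.permutation(viewer_counts)) >= observed
                    for _ in range(n_perm))
    return n_extreme / float(n_perm)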
622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EAS Testing - Recents Fling on Android
The goal of this experiment is to collect frame statistics while swiping up and down tabs of recently opened applications on a Nexus N5X running Android with an EAS kernel. This process is named Recents Fling. The Analysis phase will consist of comparing EAS with other schedulers, that is comparing sched governor with
Step1: Test Environment set up
Devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run the next cell to define where your Android SDK is installed.
Step2: In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
Step4: Support Functions
This set of support functions will help us run the benchmark using different CPUFreq governors.
Step5: Run Flinger
Prepare Environment
Step6: Run workload and collect traces
Step7: UI Performance Analysis | Python Code:
import logging
reload(logging)
log_fmt = '%(asctime)-9s %(levelname)-8s: %(message)s'
logging.basicConfig(format=log_fmt)
# Change to info once the notebook runs ok
logging.getLogger().setLevel(logging.INFO)
%pylab inline
import os
from time import sleep
# Support to access the remote target
import devlib
from env import TestEnv
from devlib.utils.android import adb_command
# Support for trace events analysis
from trace import Trace
# Support for FTrace events parsing and visualization
import trappy
Explanation: EAS Testing - Recents Fling on Android
The goal of this experiment is to collect frame statistics while swiping up and down tabs of recently opened applications on a Nexus N5X running Android with an EAS kernel. This process is named Recents Fling. The Analysis phase will consist of comparing EAS with other schedulers, that is comparing sched governor with:
- interactive
- performance
- powersave
- ondemand
For this experiment it is recommended to open many applications so that we can swipe over more recently opened applications.
End of explanation
import os
os.environ['ANDROID_HOME'] = '/ext/android-sdk-linux/'
Explanation: Test Environment set up
Devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run the next cell to define where your Android SDK is installed.
End of explanation
# Setup a target configuration
my_conf = {
# Target platform and board
"platform" : 'android',
# Device ID
# "device" : "0123456789abcdef",
# Folder where all the results will be collected
"results_dir" : "Android_RecentsFling",
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_load_avg_cpu",
"cpu_frequency",
"cpu_capacity"
],
"buffsize" : 10 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd' ],
}
# Initialize a test environment using:
te = TestEnv(my_conf)
target = te.target
Explanation: In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
End of explanation
def set_performance():
target.cpufreq.set_all_governors('performance')
def set_powersave():
target.cpufreq.set_all_governors('powersave')
def set_interactive():
target.cpufreq.set_all_governors('interactive')
def set_sched():
target.cpufreq.set_all_governors('sched')
def set_ondemand():
target.cpufreq.set_all_governors('ondemand')
for cpu in target.list_online_cpus():
tunables = target.cpufreq.get_governor_tunables(cpu)
target.cpufreq.set_governor_tunables(
cpu,
'ondemand',
**{'sampling_rate' : tunables['sampling_rate_min']}
)
# CPUFreq configurations to test
confs = {
'performance' : {
'label' : 'prf',
'set' : set_performance,
},
#'powersave' : {
# 'label' : 'pws',
# 'set' : set_powersave,
#},
'interactive' : {
'label' : 'int',
'set' : set_interactive,
},
#'sched' : {
# 'label' : 'sch',
# 'set' : set_sched,
#},
#'ondemand' : {
# 'label' : 'odm',
# 'set' : set_ondemand,
#}
}
# The set of results for each comparison test
results = {}
def open_apps(n):
Open `n` apps on the device
:param n: number of apps to open
:type n: int
# Get a list of third-party packages
android_version = target.getprop('ro.build.version.release')
if android_version >= 'N':
packages = target.execute('cmd package list packages | cut -d: -f 2')
packages = packages.splitlines()
else:
packages = target.execute('pm list packages -3 | cut -d: -f 2')
packages = packages.splitlines()
# As a safe fallback let's use a list of standard Android AOSP apps which are always available
if len(packages) < 8:
packages = [
'com.android.messaging',
'com.android.calendar',
'com.android.settings',
'com.android.calculator2',
'com.android.email',
'com.android.music',
'com.android.deskclock',
'com.android.contacts',
]
LAUNCH_CMD = 'monkey -p {} -c android.intent.category.LAUNCHER 1 '
if n > len(packages):
n = len(packages)
logging.info('Trying to open %d apps...', n)
started = 0
for app in packages:
logging.debug(' Launching %s', app)
try:
target.execute(LAUNCH_CMD.format(app))
started = started + 1
logging.info(' %2d starting %s...', started, app)
except Exception:
pass
if started >= n:
break
# Close Recents
target.execute('input keyevent KEYCODE_HOME')
def recentsfling_run(exp_dir):
# Open Recents on the target device
target.execute('input keyevent KEYCODE_APP_SWITCH')
# Allow the activity to start
sleep(5)
# Reset framestats collection
target.execute('dumpsys gfxinfo --reset')
w, h = target.screen_resolution
x = w/2
yl = int(0.2*h)
yh = int(0.9*h)
logging.info('Start Swiping Recents')
for i in range(5):
# Simulate two fast UP and DOWN swipes
target.execute('input swipe {} {} {} {} 50'.format(x, yl, x, yh))
sleep(0.3)
target.execute('input swipe {} {} {} {} 50'.format(x, yh, x, yl))
sleep(0.7)
logging.info('Swiping Recents Completed')
# Get frame stats
framestats_file = os.path.join(exp_dir, "framestats.txt")
adb_command(target.adb_name, 'shell dumpsys gfxinfo com.android.systemui > {}'.format(framestats_file))
# Close Recents
target.execute('input keyevent KEYCODE_HOME')
return framestats_file
def experiment(governor, exp_dir):
os.system('mkdir -p {}'.format(exp_dir));
logging.info('------------------------')
logging.info('Run workload using %s governor', governor)
confs[governor]['set']()
# Start FTrace
te.ftrace.start()
### Run the benchmark ###
framestats_file = recentsfling_run(exp_dir)
# Stop FTrace
te.ftrace.stop()
# Collect and keep track of the trace
trace_file = os.path.join(exp_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
# Parse trace
tr = Trace(te.platform, exp_dir,
events=my_conf['ftrace']['events'])
# return all the experiment data
return {
'dir' : exp_dir,
'framestats_file' : framestats_file,
'trace_file' : trace_file,
'ftrace' : tr.ftrace,
'trace' : tr
}
Explanation: Support Functions
This set of support functions will help us run the benchmark using different CPUFreq governors.
End of explanation
N_APPS = 20
open_apps(N_APPS)
# Give apps enough time to open
sleep(5)
Explanation: Run Flinger
Prepare Environment
End of explanation
# Unlock device screen (assume no password required)
target.execute('input keyevent 82')
# Run the benchmark in all the configured governors
for governor in confs:
test_dir = os.path.join(te.res_dir, governor)
results[governor] = experiment(governor, test_dir)
Explanation: Run workload and collect traces
End of explanation
for governor in confs:
framestats_file = results[governor]['framestats_file']
print "Frame Statistics for {} governor".format(governor.upper())
!sed '/Stats since/,/99th/!d;/99th/q' $framestats_file
print ""
Explanation: UI Performance Analysis
End of explanation |
623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Set 2 Exercise 5
Part 2.5.d.iv
Step1: First we need to load the training and testing data sets and shape the data to run the analysis.
Step2: Then we compute the boosting data using decision stumps, followed by doing regression using random boosting.
Step3: Now we perform the error analysis for both stump boosting and random boosting.
Step4: Finally we plot the results. Notice that the error rate for both boosting classifiers converges as the number of iterations of boosting increases. Stump boosting converges faster than random boosting. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import load_data as ld
import random_booster as rb
import stump_booster as sb
import errors_over_time as eot
Explanation: Problem Set 2 Exercise 5
Part 2.5.d.iv
End of explanation
training_data, testing_data = ld.load_dataset('boosting-train.csv', 'boosting-test.csv')
Xtrain = training_data.iloc[:, 1:].as_matrix()
ytrain = training_data.iloc[:, 0].as_matrix()
Xtest = testing_data.iloc[:, 1:].as_matrix()
ytest = testing_data.iloc[:, 0].as_matrix()
Explanation: First we need to load the training and testing data sets and shape the data to run the analysis.
End of explanation
theta, feature_indices, thresholds = sb.stump_booster(Xtrain, ytrain, 200)
theta_rnd, feature_indices_rnd, thresholds_rnd = rb.random_booster(Xtrain, ytrain, 200)
Explanation: Then we compute the boosting data using decision stumps, followed by doing regression using random boosting.
End of explanation
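For orientation, theta, feature_indices and thresholds together describe a weighted ensemble of decision stumps. One common convention for turning them into predictions looks like this (an assumption about the course helper modules, not their actual code):
def stump_ensemble_predict(X, theta, feature_indices, thresholds):
    # each stump votes sign(x[j] - s); theta weights the votes
    votes = np.sign(X[:, feature_indices] - thresholds)
    return np.sign(votes.dot(theta))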
training_errors, testing_errors = eot.compute_errors_over_time(Xtrain, ytrain, Xtest, ytest, theta, feature_indices, thresholds)
training_errors_rnd, testing_errors_rnd = eot.compute_errors_over_time(Xtrain, ytrain, Xtest, ytest, theta_rnd, feature_indices_rnd, thresholds_rnd)
Explanation: Now we perform the error analysis for both stump boosting and random boosting.
End of explanation
fig = plt.figure(figsize=(20,8))
plot1 = plt.subplot(121)
plot1.set_title('Stump Boosting')
plot1.grid()
plt.xlabel('Iterations')
plt.ylabel('Error Rate')
plt.plot(training_errors, label='training errors')
plt.plot(testing_errors, label='testing errors')
plt.legend()
plot2 = plt.subplot(122)
plot2.set_title('Random Boosting')
plot2.grid()
plt.xlabel('Iterations')
plt.ylabel('Error Rate')
plt.plot(training_errors_rnd, label='training error')
plt.plot(testing_errors_rnd, label='testing error')
plt.legend()
plt.show()
Explanation: Finally we plot the results. Notice that the error rate for both boosting classifiers converges as the number of iterations of boosting increases. Stump boosting converges faster than random boosting.
End of explanation |
624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: Import raw data
The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
Step2: We'll generate a list of pairs of stypes and channels for ease of use.
Step3: We can now read in all datafiles specified by the data dictionary above.
Step4: Calculate landmark bins
Based on the analysis above, we can select the optimal value of alpha bins.
Step5: Calculate landmark bins based on user input parameters and the previously specified control sample.
Step6: Calculate landmarks | Python Code:
import deltascope as ds
import deltascope.alignment as ut
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
from scipy.optimize import minimize
import os
import tqdm
import json
import time
Explanation: Introduction: Landmarks
End of explanation
# --------------------------------
# -------- User input ------------
# --------------------------------
data = {
# Specify sample type key
'wt': {
# Specify path to data directory
'path': './data/Output_wt03-09-21-29/',
# Specify which channels are in the directory and are of interest
'channels': ['AT','ZRF']
},
'you-too': {
'path': './data/Output_yot03-09-23-21/',
'channels': ['AT','ZRF']
}
}
Explanation: Import raw data
The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
End of explanation
data_pairs = []
for s in data.keys():
for c in data[s]['channels']:
data_pairs.append((s,c))
Explanation: We'll generate a list of pairs of stypes and channels for ease of use.
End of explanation
D = {}
for s in data.keys():
D[s] = {}
for c in data[s]['channels']:
D[s][c] = ds.read_psi_to_dict(data[s]['path'],c)
Explanation: We can now read in all datafiles specified by the data dictionary above.
End of explanation
# --------------------------------
# -------- User input ------------
# --------------------------------
# Pick an integer value for bin number based on results above
anum = 25
# Specify the percentiles which will be used to calculate landmarks
percbins = [50]
Explanation: Calculate landmark bins
Based on the analysis above, we can select the optimal value of alpha bins.
End of explanation
theta_step = np.pi/4
lm = ds.landmarks(percbins=percbins, rnull=np.nan)
lm.calc_bins(D['wt']['AT'], anum, theta_step)
print('Alpha bins')
print(lm.acbins)
print('Theta bins')
print(lm.tbins)
Explanation: Calculate landmark bins based on user input parameters and the previously specified control sample.
End of explanation
lmdf = pd.DataFrame()
# Loop through each pair of stype and channels
for s,c in tqdm.tqdm(data_pairs):
print(s,c)
# Calculate landmarks for each sample with this data pair
for k,df in tqdm.tqdm(D[s][c].items()):
lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf)
# Set timestamp for saving data
tstamp = time.strftime("%m-%d-%H-%M",time.localtime())
# Save completed landmarks to a csv file
lmdf.to_csv(tstamp+'_landmarks.csv')
print('Landmarks saved to csv')
# Save landmark bins to json file
bins = {
'acbins':list(lm.acbins),
'tbins':list(lm.tbins)
}
with open(tstamp+'_landmarks_bins.json', 'w') as outfile:
json.dump(bins, outfile)
print('Bins saved to json')
Explanation: Calculate landmarks
End of explanation |
625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use the astropy interface to get the location of Jupiter at the time that you want to use.
Step1: Conclusion | Python Code:
dt = 0.
# Using JPL Horizons web interface at 2017-05-19T01:34:40
horizon_ephem = SkyCoord(*[193.1535, -4.01689]*u.deg)
for orbit in orbits:
tstart = orbit[0]
tend = orbit[1]
print()
# print('Orbit duration: ', tstart.isoformat(), tend.isoformat())
on_time = (tend - tstart).total_seconds()
point_time = tstart + 0.5*(tend - tstart)
print('Time used for ephemeris: ', point_time.isoformat())
astro_time = Time(point_time)
solar_system_ephemeris.set('jpl')
jupiter = get_body('Jupiter', astro_time)
jplephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)
# Switch to the built in ephemeris
solar_system_ephemeris.set('builtin')
jupiter = get_body('Jupiter', astro_time)
builtin_ephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)
t = ts.from_astropy(astro_time)
jupiter, earth = planets['jupiter'], planets['earth']
astrometric = earth.at(t).observe(jupiter)
ra, dec, distance = astrometric.radec()
radeg = ra.to(u.deg)
decdeg = dec.to(u.deg)
skyfield_ephem = SkyCoord(radeg, decdeg)
print()
print('Horizons offset to jplephem: ', horizon_ephem.separation(jplephem))
print()
print('Horizons offset to "built in" ephemeris: ', horizon_ephem.separation(builtin_ephem))
print()
print('Horizons offset to Skyfield ephemeris: ', horizon_ephem.separation(skyfield_ephem))
print()
break
Explanation: Use the astropy interface to get the location of Jupiter at the time that you want to use.
End of explanation
dt = 0.
for orbit in orbits:
tstart = orbit[0]
tend = orbit[1]
print()
on_time = (tend - tstart).total_seconds()
point_time = tstart + 0.5*(tend - tstart)
print('Time used for ephemeris: ', point_time.isoformat())
astro_time = Time(point_time)
solar_system_ephemeris.set('jpl')
jupiter = get_body('Jupiter', astro_time)
jplephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)
# Switch to the built in ephemeris
solar_system_ephemeris.set('builtin')
jupiter = get_body('Jupiter', astro_time)
builtin_ephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)
t = ts.from_astropy(astro_time)
jupiter, earth = planets['jupiter'], planets['earth']
astrometric = earth.at(t).observe(jupiter)
ra, dec, distance = astrometric.radec()
radeg = ra.to(u.deg)
decdeg = dec.to(u.deg)
skyfield_ephem = SkyCoord(radeg, decdeg)
print()
print('Skyfield offset to jplephem: ', skyfield_ephem.separation(jplephem))
print()
print('Skyfield offset to "built in" ephemeris: ', skyfield_ephem.separation(builtin_ephem))
print()
Explanation: Conclusion: Use skyfield if you want to reproduce the JPL ephemerides
Use the jup310.bsp file for Jupiter. Need to confirm which of the available .bsp files are appropriate for inner solar system objects as well as the Sun/Moon
End of explanation |
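A minimal sketch of how the planets and ts objects used above could be set up with the recommended kernel (assuming jup310.bsp exposes the 'earth' and 'jupiter' segments used in this notebook):
from skyfield.api import load
ts = load.timescale()
planets = load('jup310.bsp')  # fetched from NAIF on first use
jupiter, earth = planets['jupiter'], planets['earth']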
626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-1', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: TEST-INSTITUTE-2
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
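For illustration, a filled-in authors cell might look like the following; the names and e-mail addresses are hypothetical placeholders, and it is assumed that repeated set_author calls register additional authors.
# Hypothetical placeholders - replace with the real document authors.
DOC.set_author("Jane Doe", "jane.doe@example.org")
DOC.set_author("John Smith", "john.smith@example.org")  # assumes multiple authors may be registered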
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
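As an illustration of filling a 1.N ENUM property, the cell above could be completed as below. The choices shown are placeholders, and it is assumed here that repeated DOC.set_value calls accumulate multiple selections for 1.N properties.
# Illustrative only - the real approximations depend on the model being documented.
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
DOC.set_value("Primitive equations")
DOC.set_value("Boussinesq")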
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
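A single-valued ENUM is answered by passing one of the listed choices verbatim; for example (the selection below is illustrative, not this model's documented EOS):
# Illustrative selection from the valid choices listed in the cell above.
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
DOC.set_value("TEOS 2010")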
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
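FLOAT properties take a bare number. The value below is only indicative of the magnitude typically quoted for seawater specific heat and is not taken from any particular model configuration:
# Illustrative value in J/(kg K) - consult the actual model constants.
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
DOC.set_value(3992.0)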
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
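BOOLEAN properties are set with a Python boolean; for example (illustrative answer only - check whether the model really uses a time-invariant bathymetry):
# Illustrative: True would mean the bathymetry is fixed in time.
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
DOC.set_value(True)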
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
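STRING properties take free text. Reusing the ORCA025 example quoted in the description above (purely illustrative, not a statement about this sandbox model):
# Illustrative resolution name borrowed from the examples in the description.
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
DOC.set_value("ORCA025")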
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
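INTEGER properties take a plain integer. The count below is an illustrative figure for a quarter-degree tripolar grid and is not a documented value for this model:
# Illustrative: roughly 1442 x 1021 horizontal points for an ORCA025-type grid.
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
DOC.set_value(1442 * 1021)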
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
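Free-text properties that ask for both a scheme and a value are usually answered with a short descriptive sentence; for example (wording and number are illustrative only):
# Illustrative description - replace with the scheme and value actually used.
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
DOC.set_value("Constant background vertical diffusivity of 1.0e-5 m2/s")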
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
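As a sketch, a BOOLEAN property like this one is recorded with a Python boolean rather than a string; the True below is an assumed value for illustration only.
# Hypothetical example (not part of the template): answering a BOOLEAN property.
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
DOC.set_value(True)  # assumed value, for illustration only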
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Twitter data
Copyright and Licensing
You are free to use or adapt this notebook for any purpose you'd like. However, please respect the Simplified BSD License that governs its use.
Twitter API Access
Twitter implements OAuth 1.0A as its standard authentication mechanism, and in order to use it to make requests to Twitter's API, you'll need to go to https
Step1: Install the twitter package to interface with the Twitter API
Step2: Example 1. Authorizing an application to access Twitter account data
Step3: Example 2. Retrieving trends
Twitter identifies locations using the Yahoo! Where On Earth ID.
The Yahoo! Where On Earth ID for the entire world is 1.
See https
Step4: Look for the WOEID for san-diego
You can change it to another location.
Step5: Example 3. Displaying API responses as pretty-printed JSON
Step6: Example 4. Computing the intersection of two sets of trends
Step7: Example 5. Collecting search results
Set the variable q to a trending topic,
or anything else for that matter. The example query below
was a trending topic when this content was being developed
and is used throughout the remainder of this chapter
Step8: Twitter often returns duplicate results, we can filter them out checking for duplicate texts
Step9: Example 6. Extracting text, screen names, and hashtags from tweets
Step10: Example 7. Creating a basic frequency distribution from the words in tweets
Step11: Example 8. Create a prettyprint function to display tuples in a nice tabular format
Step12: Example 9. Finding the most popular retweets
Step13: We can build another prettyprint function to print entire tweets with their retweet count.
We also want to split the text of the tweet in up to 3 lines, if needed. | Python Code:
import pickle
import os
if not os.path.exists('secret_twitter_credentials.pkl'):
Twitter={}
Twitter['Consumer Key'] = ''
Twitter['Consumer Secret'] = ''
Twitter['Access Token'] = ''
Twitter['Access Token Secret'] = ''
with open('secret_twitter_credentials.pkl','wb') as f:
pickle.dump(Twitter, f)
else:
Twitter=pickle.load(open('secret_twitter_credentials.pkl','rb'))
Explanation: Twitter data
Copyright and Licensing
You are free to use or adapt this notebook for any purpose you'd like. However, please respect the Simplified BSD License that governs its use.
Twitter API Access
Twitter implements OAuth 1.0A as its standard authentication mechanism, and in order to use it to make requests to Twitter's API, you'll need to go to https://dev.twitter.com/apps and create a sample application.
Choose any name for your application, write a description and use http://google.com for the website.
Under Key and Access Tokens, there are four primary identifiers you'll need to note for an OAuth 1.0A workflow:
* consumer key,
* consumer secret,
* access token, and
* access token secret (Click on Create Access Token to create those).
Note that you will need an ordinary Twitter account in order to login, create an app, and get these credentials.
The first time you execute the notebook, add all credentials so that you can save them in the pkl file; after that you can remove the secret keys from the notebook because they will just be loaded from the pkl file.
The pkl file contains sensitive information that can be used to take control of your twitter account; do not share it.
End of explanation
import pip
!pip install twitter
Explanation: Install the twitter package to interface with the Twitter API
End of explanation
import twitter
auth = twitter.oauth.OAuth(Twitter['Access Token'],
Twitter['Access Token Secret'],
Twitter['Consumer Key'],
Twitter['Consumer Secret'])
twitter_api = twitter.Twitter(auth=auth)
# Nothing to see by displaying twitter_api except that it's now a
# defined variable
print(twitter_api)
Explanation: Example 1. Authorizing an application to access Twitter account data
End of explanation
WORLD_WOE_ID = 1
US_WOE_ID = 23424977
Explanation: Example 2. Retrieving trends
Twitter identifies locations using the Yahoo! Where On Earth ID.
The Yahoo! Where On Earth ID for the entire world is 1.
See https://dev.twitter.com/docs/api/1.1/get/trends/place and
http://developer.yahoo.com/geo/geoplanet/
look at the BOSS placefinder here: https://developer.yahoo.com/boss/placefinder/
End of explanation
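Rather than hard-coding a WOEID, you can also ask Twitter which locations it has trend data for. The snippet below is only a sketch: it assumes the authenticated twitter_api object from Example 1 and the trends/available endpoint, and the place name is just an example.
# Sketch: list available trend locations and look one up by name
available_locs = twitter_api.trends.available()
matches = [loc for loc in available_locs if loc['name'] == 'San Diego']
print([(loc['name'], loc['woeid']) for loc in matches])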
LOCAL_WOE_ID=2487889
# Prefix ID with the underscore for query string parameterization.
# Without the underscore, the twitter package appends the ID value
# to the URL itself as a special case keyword argument.
world_trends = twitter_api.trends.place(_id=WORLD_WOE_ID)
us_trends = twitter_api.trends.place(_id=US_WOE_ID)
local_trends = twitter_api.trends.place(_id=LOCAL_WOE_ID)
world_trends[:2]
trends=local_trends
print(type(trends))
print(list(trends[0].keys()))
print(trends[0]['trends'])
Explanation: Look for the WOEID for san-diego
You can change it to another location.
End of explanation
import json
print((json.dumps(us_trends[:2], indent=1)))
Explanation: Example 3. Displaying API responses as pretty-printed JSON
End of explanation
trends_set = {}
trends_set['world'] = set([trend['name']
for trend in world_trends[0]['trends']])
trends_set['us'] = set([trend['name']
for trend in us_trends[0]['trends']])
trends_set['san diego'] = set([trend['name']
for trend in local_trends[0]['trends']])
for loc in ['world','us','san diego']:
print(('-'*10,loc))
print((','.join(trends_set[loc])))
print(( '='*10,'intersection of world and us'))
print((trends_set['world'].intersection(trends_set['us'])))
print(('='*10,'intersection of us and san-diego'))
print((trends_set['san diego'].intersection(trends_set['us'])))
Explanation: Example 4. Computing the intersection of two sets of trends
End of explanation
q = '#MTVAwards'
number = 100
# See https://dev.twitter.com/docs/api/1.1/get/search/tweets
search_results = twitter_api.search.tweets(q=q, count=number)
statuses = search_results['statuses']
len(statuses)
print(statuses)
Explanation: Example 5. Collecting search results
Set the variable q to a trending topic,
or anything else for that matter. The example query below
was a trending topic when this content was being developed
and is used throughout the remainder of this chapter
End of explanation
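If one batch of results is not enough, the search response includes a search_metadata block whose next_results field can be fed back into the next call. This is only a sketch of that pattern; the field may be absent when there are no more results.
# Sketch: fetch one more batch of results by following the next_results cursor
if 'next_results' in search_results['search_metadata']:
    next_query = search_results['search_metadata']['next_results']
    kwargs = dict([kv.split('=') for kv in next_query[1:].split('&')])
    more_results = twitter_api.search.tweets(**kwargs)
    statuses += more_results['statuses']
print(len(statuses))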
all_text = []
filtered_statuses = []
for s in statuses:
if not s["text"] in all_text:
filtered_statuses.append(s)
all_text.append(s["text"])
statuses = filtered_statuses
len(statuses)
[s['text'] for s in search_results['statuses']]
# Show one sample search result by slicing the list...
print(json.dumps(statuses[0], indent=1))
# The result of the list comprehension is a list with only one element that
# can be accessed by its index and set to the variable t
t = statuses[0]
#[ status for status in statuses
# if status['id'] == 316948241264549888 ][0]
# Explore the variable t to get familiarized with the data structure...
print(t['retweet_count'])
print(t['retweeted'])
Explanation: Twitter often returns duplicate results; we can filter them out by checking for duplicate texts:
End of explanation
status_texts = [ status['text']
for status in statuses ]
screen_names = [ user_mention['screen_name']
for status in statuses
for user_mention in status['entities']['user_mentions'] ]
hashtags = [ hashtag['text']
for status in statuses
for hashtag in status['entities']['hashtags'] ]
# Compute a collection of all words from all tweets
words = [ w
for t in status_texts
for w in t.split() ]
# Explore the first 5 items for each...
print(json.dumps(status_texts[0:5], indent=1))
print(json.dumps(screen_names[0:5], indent=1))
print(json.dumps(hashtags[0:5], indent=1))
print(json.dumps(words[0:5], indent=1))
Explanation: Example 6. Extracting text, screen names, and hashtags from tweets
End of explanation
from collections import Counter
for item in [words, screen_names, hashtags]:
c = Counter(item)
print(c.most_common()[:10]) # top 10
print()
Explanation: Example 7. Creating a basic frequency distribution from the words in tweets
End of explanation
def prettyprint_counts(label, list_of_tuples):
print("\n{:^20} | {:^6}".format(label, "Count"))
print("*"*40)
for k,v in list_of_tuples:
print("{:20} | {:>6}".format(k,v))
for label, data in (('Word', words),
('Screen Name', screen_names),
('Hashtag', hashtags)):
c = Counter(data)
prettyprint_counts(label, c.most_common()[:10])
Explanation: Example 8. Create a prettyprint function to display tuples in a nice tabular format
End of explanation
retweets = [
# Store out a tuple of these three values ...
(status['retweet_count'],
status['retweeted_status']['user']['screen_name'],
status['text'].replace("\n","\\"))
# ... for each status ...
for status in statuses
# ... so long as the status meets this condition.
if 'retweeted_status' in status
]
Explanation: Example 9. Finding the most popular retweets
End of explanation
row_template = "{:^7} | {:^15} | {:50}"
def prettyprint_tweets(list_of_tuples):
print()
print(row_template.format("Count", "Screen Name", "Text"))
print("*"*60)
for count, screen_name, text in list_of_tuples:
print(row_template.format(count, screen_name, text[:50]))
if len(text) > 50:
print(row_template.format("", "", text[50:100]))
if len(text) > 100:
print(row_template.format("", "", text[100:]))
# Slice off the first 5 from the sorted results and display each item in the tuple
prettyprint_tweets(sorted(retweets, reverse=True)[:10])
Explanation: We can build another prettyprint function to print entire tweets with their retweet count.
We also want to split the text of the tweet into up to 3 lines, if needed.
End of explanation |
628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mapping Names to Sequence Elements
Problem
You have code that accesses list or tuple elements by position, but this makes the code somewhat difficult to read at times. You’d also like to be less dependent on position in the structure, by accessing the elements by name.
Solution
collections.namedtuple() provides these benefits, while adding minimal overhead over using a normal tuple object.
Step1: namedtuple is interchangeable with a tuple and supports all of the usual tuple operations such as indexing and unpacking.
Step2: A major use case for named tuples is decoupling your code from the position of the elements it manipulates. So, if you get back a large list of tuples from a database call, then manipulate them by accessing the positional elements, your code could break if, say, you added a new column to your table. Not so if you first cast the returned tuples to namedtuples.
Here is the code using ordinary tuples
Step3: Here is a version that uses a namedtuple
Step4: Discussion
Be aware that unlike a dictionary, a namedtuple is immutable. If you need to change any of the attributes, it can be done using the _replace() method of a namedtuple instance
Step5: We can also use _replace() method to populate named tuples that have optional or missing fields. | Python Code:
from collections import namedtuple
Subscriber = namedtuple('Subscriber', ['addr', 'joined'])
sub = Subscriber('jonesy@example.com', '2012-10-19')
sub
print(sub.addr)
print(sub.joined)
Explanation: Mapping Names to Sequence Elements
Problem
You have code that accesses list or tuple elements by position, but this makes the code somewhat difficult to read at times. You’d also like to be less dependent on position in the structure, by accessing the elements by name.
Solution
collections.namedtuple() provides these benefits, while adding minimal overhead over using a normal tuple object.
End of explanation
print(len(sub))
addr, joined = sub
print(addr)
print(joined)
Explanation: namedtuple is interchangeable with a tuple and supports all of the usual tuple operations such as indexing and unpacking.
End of explanation
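A couple more tuple-style operations, sketched here with the sub instance defined above: indexing works exactly like a plain tuple, and _asdict() gives a dictionary view when one is needed.
# Indexing, just like a plain tuple
print(sub[0])
# Convert to an ordered dictionary when dictionary-style access is needed
print(sub._asdict())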
def compute_cost(records):
total = 0.0
for rec in records:
total += rec[1] * rec[2]
return total
Explanation: A major use case for named tuples is decoupling your code from the position of the elements it manipulates. So, if you get back a large list of tuples from a database call, then manipulate them by accessing the positional elements, your code could break if, say, you added a new column to your table. Not so if you first cast the returned tuples to namedtuples.
Here is the code using ordinary tuples
End of explanation
from collections import namedtuple
Stock = namedtuple('Stock', ['name', 'shares', 'price'])
def compute_cost(records):
total = 0.0
for rec in records:
s = Stock(*rec)
total += s.shares * s.price
return total
# Some Data
records = [
('GOOG', 100, 490.1),
('ACME', 100, 123.45),
('IBM', 50, 91.15)
]
print(compute_cost(records))
Explanation: Here is a version that uses a namedtuple
End of explanation
s = Stock('ACME', 100, 123.45)
print(s)
# Direct assignment fails because namedtuples are immutable (raises AttributeError)
s.shares = 75
# Use _replace() instead; it returns a new instance with the field updated
s = s._replace(shares=75)
s
Explanation: Discussion
Be aware that unlike a dictionary, a namedtuple is immutable. If you need to change any of the attributes, it can be done using the _replace() method of a namedtuple instance
End of explanation
from collections import namedtuple
Stock = namedtuple('Stock', ['name', 'shares', 'price', 'date', 'time'])
# Create a prototype instance
stock_prototype = Stock('', 0, 0.0, None, None)
# Function to convert a dictionary to a Stock
def dict_to_stock(s):
return stock_prototype._replace(**s)
a = {'name': 'ACME', 'shares': 100, 'price': 123.45}
dict_to_stock(a)
b = {'name': 'ACME', 'shares': 100, 'price': 123.45, 'date': '12/17/2012'}
dict_to_stock(b)
Explanation: We can also use _replace() method to populate named tuples that have optional or missing fields.
End of explanation |
629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoregressive Moving Average (ARMA)
Step1: Sunspots Data
Step2: Does our model obey the theory?
Step3: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do? | Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.graphics.api import qqplot
Explanation: Autoregressive Moving Average (ARMA): Sunspots data
This notebook replicates the existing ARMA notebook using the statsmodels.tsa.statespace.SARIMAX class rather than the statsmodels.tsa.ARMA class.
End of explanation
print(sm.datasets.sunspots.NOTE)
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del dta["YEAR"]
dta.plot(figsize=(12,4));
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)
arma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit()
print(arma_mod20.params)
arma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit()
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
Explanation: Sunspots Data
End of explanation
sm.stats.durbin_watson(arma_mod30.resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
ax = plt.plot(arma_mod30.resid)
resid = arma_mod30.resid
stats.normaltest(resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
r,q,p = sm.tsa.acf(resid, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
Explanation: Does our model obey the theory?
End of explanation
predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True)
fig, ax = plt.subplots(figsize=(12, 8))
dta.loc['1950':].plot(ax=ax)  # .loc replaces the removed .ix indexer
predict_sunspots.plot(ax=ax, style='r');
def mean_forecast_err(y, yhat):
return y.sub(yhat).mean()
mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
Explanation: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do?
End of explanation |
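As a possible next step (a sketch, not part of the original example), the fitted results object can also produce an out-of-sample forecast with confidence intervals.
# Sketch: out-of-sample forecast for the next 10 years from the SARIMAX results
forecast = arma_mod30.get_forecast(steps=10)
print(forecast.predicted_mean)
print(forecast.conf_int())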
630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MetPy Declarative Syntax Tutorial
The declarative syntax that is a part of the MetPy package is designed to aid in simple
data exploration and analysis needs by simplifying the plotting context from typical verbose
Python code. The complexity of data wrangling and plotting are hidden behind the simplified
syntax to allow a lower barrier to investigating your data.
Imports
You'll note that the number of imports is smaller due to using the declarative syntax.
There is no need to import Matplotlib or Cartopy to your code as all of that is done
behind the scenes.
Step1: Getting Data
Depending on what kind of data you are wanting to plot you'll use either Xarray (for gridded
data), Pandas (for CSV data), or the MetPy METAR parser (for METAR data).
We'll start this tutorial by reading in a gridded dataset using Xarray.
Step2: Set Datetime
Set the date/time that you desire to plot
Step3: Subsetting Data
MetPy provides wrappers for the usual xarray indexing and selection routines that can handle
quantities with units. For DataArrays, MetPy also allows using the coordinate axis types
mentioned above as aliases for the coordinates. And so, if we wanted data to be just over
the U.S. for plotting purposes
Step4: For full details on xarray indexing/selection, see
xarray's documentation <http
Step5: Plotting
With that minimal preparation, we are now ready to use the simplified plotting syntax to
plot our data and analyze the meteorological situation.
General Structure
Set contour attributes
Set map characteristics and collect contours
Collect panels and plot
Show (or save) the results
Valid Plotting Types for Gridded Data
Step6: Now we'll set the attributes for plotting color-filled contours of wind speed at 300 hPa.
Again, the attributes that must be set include data, field, level, and time. We'll also set
a colormap and colorbar to be purposeful for wind speed. Additionally, we'll set the
attribute to change the units from m/s to knots, which is the common plotting units for
wind speed.
Step7: Once we have our contours (and any colorfill plots) set up, we will want to define the map
panel that we'll plot the data on. This is the place where we can set the view extent,
projection of our plot, add map lines like coastlines and states, set a plot title.
One of the key elements is to add the data to the map panel as a list with the plots
attribute.
Step8: Finally we'll collect all of the panels to plot on the figure, set the size of the figure,
and ultimately show or save the figure.
Step9: All of our setting now produce the following map!
Step10: That's it! What a nice looking map, with relatively simple set of code.
Adding Wind Barbs
We can easily add wind barbs to the plot we generated above by adding another plot type
and adding it to the panel. The plot type for wind barbs is BarbPlot() and has its own
set of attributes to control plotting a vector quantity.
We start with setting the attributes that we had before for our 300 hPa plot, including
Geopotential Height contours and color-filled wind speed.
Step11: Now we'll set the attributes for plotting wind barbs, with the required attributes of data,
time, field, and level. The skip attribute is particularly useful for thinning the number of
wind barbs that are plotted on the map and again we'll convert to units of knots.
Step12: Add all of our plot types to the panel, don't forget to add in the new wind barbs to our plot
list!
Step13: Plot Surface Obs
We can also plot surface (or upper-air) observations at point locations using the simplified
syntax. Whether it is surface or upper-air data, the PlotObs() class is what you would
want to use. Then you would add those observations to a map panel and then collect the panels
to plot the figure; similar to what you would do for a gridded plot.
Step14: Setting of our attributes for plotting observations is pretty straignforward and just needs
to be lists for the variables, and a comparable number of items for plot characteristics that
are specific to the individual fields. For example, the locations around a station plot, the
plot units, and any plotting formats all need to have the same number of items as the
fields attribute.
Plotting wind barbs is done through the vector_field attribute and you can reduce the number
of points plotted (especially important for surface observations) with the reduce points
attribute.
For a very basic plot of one field, the minimum required attributes are the data, time,
fields, and location attributes.
Step15: We use the same Classes for plotting our data on a map panel and collecting all of the
panels on the figure. In this case we'll focus in on the state of Indiana for plotting. | Python Code:
from datetime import datetime, timedelta
import xarray as xr
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.io import metar
from metpy.plots.declarative import (BarbPlot, ContourPlot, FilledContourPlot, MapPanel,
PanelContainer, PlotObs)
from metpy.units import units
Explanation: MetPy Declarative Syntax Tutorial
The declarative syntax that is a part of the MetPy package is designed to aid in simple
data exploration and analysis needs by simplifying the plotting context from typical verbose
Python code. The complexity of data wrangling and plotting are hidden behind the simplified
syntax to allow a lower barrier to investigating your data.
Imports
You'll note that the number of imports is smaller due to using the declarative syntax.
There is no need to import Matplotlib or Cartopy to your code as all of that is done
behind the scenes.
End of explanation
# Open the netCDF file as a xarray Dataset and parse the full dataset
data = xr.open_dataset(get_test_data('GFS_test.nc', False)).metpy.parse_cf()
# View a summary of the Dataset
print(data)
Explanation: Getting Data
Depending on what kind of data you are wanting to plot you'll use either Xarray (for gridded
data), Pandas (for CSV data), or the MetPy METAR parser (for METAR data).
We'll start this tutorial by reading in a gridded dataset using Xarray.
End of explanation
plot_time = datetime(2010, 10, 26, 12)
Explanation: Set Datetime
Set the date/time that you desire to plot
End of explanation
ds = data.metpy.sel(lat=slice(70, 10), lon=slice(360 - 150, 360 - 55))
Explanation: Subsetting Data
MetPy provides wrappers for the usual xarray indexing and selection routines that can handle
quantities with units. For DataArrays, MetPy also allows using the coordinate axis types
mentioned above as aliases for the coordinates. And so, if we wanted data to be just over
the U.S. for plotting purposes
End of explanation
ds['wind_speed'] = mpcalc.wind_speed(ds['u-component_of_wind_isobaric'],
ds['v-component_of_wind_isobaric'])
Explanation: For full details on xarray indexing/selection, see
xarray's documentation <http://xarray.pydata.org/en/stable/indexing.html>_.
Calculations
In MetPy 1.0 and later, calculation functions accept Xarray DataArray's as input and the
output a DataArray that can be easily added to an existing Dataset.
As an example, we calculate wind speed from the wind components and add it as a new variable
to our Dataset.
End of explanation
# Set attributes for contours of Geopotential Heights at 300 hPa
cntr2 = ContourPlot()
cntr2.data = ds
cntr2.field = 'Geopotential_height_isobaric'
cntr2.level = 300 * units.hPa
cntr2.time = plot_time
cntr2.contours = list(range(0, 10000, 120))
cntr2.linecolor = 'black'
cntr2.linestyle = 'solid'
cntr2.clabels = True
Explanation: Plotting
With that minimal preparation, we are now ready to use the simplified plotting syntax to
plot our data and analyze the meteorological situation.
General Structure
Set contour attributes
Set map characteristics and collect contours
Collect panels and plot
Show (or save) the results
Valid Plotting Types for Gridded Data:
ContourPlot()
FilledContourPlot()
ImagePlot()
BarbPlot()
More complete descriptions of these and other plotting types, as well as the map panel and
panel container classes are at the end of this tutorial.
Let's plot a 300-hPa map with color-filled wind speed, which we calculated and added to
our Dataset above, and geopotential heights over the CONUS.
We'll start by setting attributes for contours of Geopotential Heights at 300 hPa.
We need to set at least the data, field, level, and time attributes. We'll set a few others
to have greater control over how the data is plotted.
End of explanation
# Set attributes for plotting color-filled contours of wind speed at 300 hPa
cfill = FilledContourPlot()
cfill.data = ds
cfill.field = 'wind_speed'
cfill.level = 300 * units.hPa
cfill.time = plot_time
cfill.contours = list(range(10, 201, 20))
cfill.colormap = 'BuPu'
cfill.colorbar = 'horizontal'
cfill.plot_units = 'knot'
Explanation: Now we'll set the attributes for plotting color-filled contours of wind speed at 300 hPa.
Again, the attributes that must be set include data, field, level, and time. We'll also set
a colormap and colorbar to be purposeful for wind speed. Additionally, we'll set the
attribute to change the units from m/s to knots, which is the common plotting units for
wind speed.
End of explanation
# Set the attributes for the map and add our data to the map
panel = MapPanel()
panel.area = [-125, -74, 20, 55]
panel.projection = 'lcc'
panel.layers = ['states', 'coastline', 'borders']
panel.title = f'{cfill.level.m}-hPa Heights and Wind Speed at {plot_time}'
panel.plots = [cfill, cntr2]
Explanation: Once we have our contours (and any colorfill plots) set up, we will want to define the map
panel that we'll plot the data on. This is the place where we can set the view extent,
projection of our plot, add map lines like coastlines and states, and set a plot title.
One of the key elements is to add the data to the map panel as a list with the plots
attribute.
End of explanation
# Set the attributes for the panel and put the panel in the figure
pc = PanelContainer()
pc.size = (15, 15)
pc.panels = [panel]
Explanation: Finally we'll collect all of the panels to plot on the figure, set the size of the figure,
and ultimately show or save the figure.
End of explanation
# Show the image
pc.show()
Explanation: All of our settings now produce the following map!
End of explanation
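If you would rather write the figure to disk than display it, the panel container can also save it (a sketch; the filename and resolution below are arbitrary choices, and the keyword arguments are assumed to pass through to Matplotlib's savefig).
# Sketch: save the figure instead of (or in addition to) showing it
pc.save('300hPa_heights_wind_speed.png', dpi=150)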
# Set attributes for contours of Geopotential Heights at 300 hPa
cntr2 = ContourPlot()
cntr2.data = ds
cntr2.field = 'Geopotential_height_isobaric'
cntr2.level = 300 * units.hPa
cntr2.time = plot_time
cntr2.contours = list(range(0, 10000, 120))
cntr2.linecolor = 'black'
cntr2.linestyle = 'solid'
cntr2.clabels = True
# Set attributes for plotting color-filled contours of wind speed at 300 hPa
cfill = FilledContourPlot()
cfill.data = ds
cfill.field = 'wind_speed'
cfill.level = 300 * units.hPa
cfill.time = plot_time
cfill.contours = list(range(10, 201, 20))
cfill.colormap = 'BuPu'
cfill.colorbar = 'horizontal'
cfill.plot_units = 'knot'
Explanation: That's it! What a nice looking map, with a relatively simple set of code.
Adding Wind Barbs
We can easily add wind barbs to the plot we generated above by adding another plot type
and adding it to the panel. The plot type for wind barbs is BarbPlot() and has its own
set of attributes to control plotting a vector quantity.
We start with setting the attributes that we had before for our 300 hPa plot, including
Geopotential Height contours and color-filled wind speed.
End of explanation
# Set attributes for plotting wind barbs
barbs = BarbPlot()
barbs.data = ds
barbs.time = plot_time
barbs.field = ['u-component_of_wind_isobaric', 'v-component_of_wind_isobaric']
barbs.level = 300 * units.hPa
barbs.skip = (3, 3)
barbs.plot_units = 'knot'
Explanation: Now we'll set the attributes for plotting wind barbs, with the required attributes of data,
time, field, and level. The skip attribute is particularly useful for thinning the number of
wind barbs that are plotted on the map and again we'll convert to units of knots.
End of explanation
# Set the attributes for the map and add our data to the map
panel = MapPanel()
panel.area = [-125, -74, 20, 55]
panel.projection = 'lcc'
panel.layers = ['states', 'coastline', 'borders']
panel.title = f'{cfill.level.m}-hPa Heights and Wind Speed at {plot_time}'
panel.plots = [cfill, cntr2, barbs]
# Set the attributes for the panel and put the panel in the figure
pc = PanelContainer()
pc.size = (15, 15)
pc.panels = [panel]
# Show the figure
pc.show()
Explanation: Add all of our plot types to the panel; don't forget to add the new wind barbs to our plot
list!
End of explanation
df = metar.parse_metar_file(get_test_data('metar_20190701_1200.txt', False), year=2019,
month=7)
# Let's take a look at the variables that we could plot coming from our METAR observations.
print(df.keys())
# Set the observation time
obs_time = datetime(2019, 7, 1, 12)
Explanation: Plot Surface Obs
We can also plot surface (or upper-air) observations at point locations using the simplified
syntax. Whether it is surface or upper-air data, the PlotObs() class is what you would
want to use. Then you would add those observations to a map panel and then collect the panels
to plot the figure; similar to what you would do for a gridded plot.
End of explanation
# Plot desired data
obs = PlotObs()
obs.data = df
obs.time = obs_time
obs.time_window = timedelta(minutes=15)
obs.level = None
obs.fields = ['cloud_coverage', 'air_temperature', 'dew_point_temperature',
'air_pressure_at_sea_level', 'current_wx1_symbol']
obs.plot_units = [None, 'degF', 'degF', None, None]
obs.locations = ['C', 'NW', 'SW', 'NE', 'W']
obs.formats = ['sky_cover', None, None, lambda v: format(v * 10, '.0f')[-3:],
'current_weather']
obs.reduce_points = 0.75
obs.vector_field = ['eastward_wind', 'northward_wind']
Explanation: Setting our attributes for plotting observations is pretty straightforward and just needs
lists for the variables, with a comparable number of items for the plot characteristics that
are specific to the individual fields. For example, the locations around a station plot, the
plot units, and any plotting formats all need to have the same number of items as the
fields attribute.
Plotting wind barbs is done through the vector_field attribute, and you can reduce the number
of points plotted (especially important for surface observations) with the reduce_points
attribute.
For a very basic plot of one field, the minimum required attributes are the data, time,
fields, and location attributes.
End of explanation
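To make the "minimum required attributes" point concrete, here is a bare-bones sketch that plots only air temperature at each station; it would be added to a map panel in the same way as the full obs object. The attribute names are the ones used above, but this single-field configuration itself is just an illustration.
# Minimal sketch: a single-field observation plot
obs_min = PlotObs()
obs_min.data = df
obs_min.time = obs_time
obs_min.fields = ['air_temperature']
obs_min.locations = ['C']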
# Panel for plot with Map features
panel = MapPanel()
panel.layout = (1, 1, 1)
panel.projection = 'lcc'
panel.area = 'in'
panel.layers = ['states']
panel.title = f'Surface plot for {obs_time}'
panel.plots = [obs]
# Bringing it all together
pc = PanelContainer()
pc.size = (10, 10)
pc.panels = [panel]
pc.show()
Explanation: We use the same Classes for plotting our data on a map panel and collecting all of the
panels on the figure. In this case we'll focus on the state of Indiana for plotting.
End of explanation |
631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Centralities
In this section, I'm going to learn how centrality works and try to interpret the data based on a small real dataset. I'm using the Facebook dataset from SNAP https
Step1: Now let's find the celebrities. The most basic centrality is Degree Centrality which is the sum of all in and out nodes (in the case of directed graph).
Step2: We can see that the node 107 has the highest degree centrality which means node 107 has the highest number of connected nodes. We can prove this by getting the degree of node 107 to see how many friends of node 107 has
Step3: Node 107 has 1045 friends and we can divide that by number of nodes to get the normalized degree centrality
Step4: Degree centrality might be the easiest number to calculate but it only shows the number of nodes connected which in real social network it might not be very useful as you might have a million followers but if the majority of them is bots then the number is not telling anything new.
Now let's try Betweenness, which counts all of the shortest paths going through each node. This might mean that if you have the highest number of shortest paths going through you, you might be considered a bridge of your entire network.
Nodes with high betweenness are important in communication and information diffusion
We will be using multiprocessing so we can parallelize the computation and distribute the load.
Step5: Let's try with multiprocesser.
Step6: Now let's try with just one processor
Step7: Page rank
We're going to try the PageRank algorithm. This is very similar to Google's PageRank, which uses incoming links to determine "popularity"
Step8: We can see that now the score is different as node 3437 is more popular than node 107.
Who is a "Gray Cardinal"
There's another metric we can use to measure the most influential node. It's called eigenvector centrality. To put it simply, if you're well connected to a lot of important people, then you're considered important or influential as well.
Step9: Now we get quite a different result. This would mean that node 1912 is connected to more important people in the entire network that means that node is more influential than the rest of the network.
iGraph with SNAP Facebook Dataset
Networkx is easy to install and great to start with. However, as it's written in Python it's quite slow. I'm going to try iGraph which is C based. I'm hoping that this would yield the same result but faster.
Step10: Betweenness
Step11: Closeness
Step12: Eigen Value
Step13: PageRank
Step14: We can see that iGraph yields similar result from networkx but it's a lot quicker in the same machine.
Graph_tool with SNAP Facebook Dataset
I'm going to try another library which is supposed to be faster than both networkx and igraph. Graph_tool is also C based and has OpenMP enabled, so a lot of its algorithms run in parallel.
Step15: Betweeness
Step16: Closeness
Step17: Eigenvalue
Step18: Page Rank
Step19: Information diffusion modelling
I'm going to use an information diffusion model to simulate how information travels in the graph.
Step21: Networkx Independent Cascade Model | Python Code:
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import operator
import timeit
g_fb = nx.read_edgelist('facebook_combined.txt', create_using = nx.Graph(), nodetype = int)
print nx.info(g_fb)
print nx.is_directed(g_fb)
Explanation: Centralities
In this section, I'm going to learn how centrality works and try to interpret the data based on a small real dataset. I'm using the Facebook dataset from SNAP https://snap.stanford.edu/data/egonets-Facebook.html. The data is included in this repository for easier access. The data is in EdgeList format (source, target).
I'm going to use Networkx, iGraph and graph_tool to find all the centralities.
End of explanation
dg_centrality = nx.degree_centrality(g_fb)
sorted_dg_centrality = sorted(dg_centrality.items(), key=operator.itemgetter(1), reverse=True)
sorted_dg_centrality[:10]
Explanation: Now let's find the celebrities. The most basic centrality is Degree Centrality which is the sum of all in and out nodes (in the case of directed graph).
End of explanation
nx.degree(g_fb, [107])
Explanation: We can see that node 107 has the highest degree centrality, which means node 107 has the highest number of connected nodes. We can prove this by getting the degree of node 107 to see how many friends node 107 has
End of explanation
# note: nx.degree_centrality normalizes by (n - 1) rather than n,
# so this is approximately (not exactly) the value reported above
float(nx.degree(g_fb, [107]).values()[0]) / g_fb.number_of_nodes()
Explanation: Node 107 has 1045 friends and we can divide that by the number of nodes to get the normalized degree centrality
End of explanation
from multiprocessing import Pool
import itertools
def partitions(nodes, n):
"Partitions the nodes into n subsets"
nodes_iter = iter(nodes)
while True:
partition = tuple(itertools.islice(nodes_iter,n))
if not partition:
return
yield partition
def btwn_pool(G_tuple):
return nx.betweenness_centrality_source(*G_tuple)
def between_parallel(G, processes = None):
p = Pool(processes=processes)
part_generator = 4*len(p._pool)
node_partitions = list(partitions(G.nodes(), int(len(G)/part_generator)))
num_partitions = len(node_partitions)
bet_map = p.map(btwn_pool,
zip([G]*num_partitions,
[True]*num_partitions,
[None]*num_partitions,
node_partitions))
bt_c = bet_map[0]
for bt in bet_map[1:]:
for n in bt:
bt_c[n] += bt[n]
return bt_c
Explanation: Degree centrality might be the easiest number to calculate, but it only shows the number of connected nodes, which in a real social network might not be very useful: you might have a million followers, but if the majority of them are bots then the number is not telling you anything new.
Now let's try Betweenness, which counts all of the shortest paths going through each node. This might mean that if you have the highest number of shortest paths going through you, you might be considered a bridge of your entire network.
Nodes with high betweenness are important in communication and information diffusion.
We will be using multiprocessing so we can parallelize the computation and distribute the load.
End of explanation
start = timeit.default_timer()
bt = between_parallel(g_fb)
stop = timeit.default_timer()
top = 10
max_nodes = sorted(bt.iteritems(), key = lambda v: -v[1])[:top]
bt_values = [5]*len(g_fb.nodes())
bt_colors = [0]*len(g_fb.nodes())
for max_key, max_val in max_nodes:
bt_values[max_key] = 150
bt_colors[max_key] = 2
print 'It takes {} seconds to finish'.format(stop - start)
print max_nodes
Explanation: Let's try it with multiple processes.
End of explanation
start = timeit.default_timer()
bt = nx.betweenness_centrality(g_fb)
stop = timeit.default_timer()
top = 10
max_nodes = sorted(bt.iteritems(), key = lambda v: -v[1])[:top]
bt_values = [5]*len(g_fb.nodes())
bt_colors = [0]*len(g_fb.nodes())
for max_key, max_val in max_nodes:
bt_values[max_key] = 150
bt_colors[max_key] = 2
print 'It takes {} seconds to finish'.format(stop - start)
print max_nodes
Explanation: Now let's try with just one processor
End of explanation
g_fb_pr = nx.pagerank(g_fb)
top = 10
max_pagerank = sorted(g_fb_pr.iteritems(), key = lambda v: -v[1])[:top]
max_pagerank
Explanation: Page rank
We're going to try the PageRank algorithm. This is very similar to Google's PageRank, which uses incoming links to determine "popularity"
End of explanation
g_fb_eg = nx.eigenvector_centrality(g_fb)
top = 10
max_eg = sorted(g_fb_eg.iteritems(), key = lambda v: -v[1])[:top]
max_eg
Explanation: We can see that now the score is different as node 3437 is more popular than node 107.
Who is a "Gray Cardinal"
There's another metric we can use to measure the most influential node. It's called eigenvector centrality. To put it simply, if you're well connected to a lot of important people, then you're considered important or influential as well.
End of explanation
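For completeness (this wasn't in the original comparison), the same top-10 ranking can be computed for closeness centrality in networkx, which we'll also compute later with iGraph and graph_tool. A sketch; note this can take a while on this graph.
g_fb_cl = nx.closeness_centrality(g_fb)
top = 10
max_cl = sorted(g_fb_cl.iteritems(), key = lambda v: -v[1])[:top]
print max_cl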
from igraph import *
import timeit
igraph_fb = Graph.Read_Edgelist('facebook_combined.txt', directed=False)
print igraph_fb.summary()
Explanation: Now we get quite a different result. This would mean that node 1912 is connected to more important people in the entire network, which means that node is more influential than the rest of the network.
iGraph with SNAP Facebook Dataset
Networkx is easy to install and great to start with. However, as it's written in Python it's quite slow. I'm going to try iGraph which is C based. I'm hoping that this would yield the same result but faster.
End of explanation
def betweenness_centralization(G):
vnum = G.vcount()
if vnum < 3:
raise ValueError("graph must have at least three vertices")
denom = (vnum-1)*(vnum-2)
temparr = [2*i/denom for i in G.betweenness()]
return temparr
start = timeit.default_timer()
igraph_betweenness = betweenness_centralization(igraph_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
igraph_betweenness.sort(reverse=True)
print igraph_betweenness[:10]
Explanation: Betweenness
End of explanation
start = timeit.default_timer()
igraph_closeness = igraph_fb.closeness()
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
igraph_closeness.sort(reverse=True)
print igraph_closeness[:10]
Explanation: Closeness
End of explanation
start = timeit.default_timer()
igraph_eg = igraph_fb.evcent()
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
igraph_eg.sort(reverse=True)
print igraph_eg[:10]
Explanation: Eigen Value
End of explanation
start = timeit.default_timer()
igraph_pr = igraph_fb.pagerank()
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
igraph_pr.sort(reverse=True)
print igraph_pr[:10]
Explanation: PageRank
End of explanation
import sys
from graph_tool.all import *
import timeit
show_config()
graph_tool_fb = Graph(directed=False)
with open('facebook_combined.txt', 'r') as f:
for line in f:
edge_list = line.split()
        source, target = (int(v) for v in edge_list)  # vertex indices must be integers
        graph_tool_fb.add_edge(source, target)
print graph_tool_fb.num_vertices()
print graph_tool_fb.num_edges()
Explanation: We can see that iGraph yields results similar to networkx but it's a lot quicker on the same machine.
Graph_tool with SNAP Facebook Dataset
I'm going to try another library which is supposed to be faster than both networkx and igraph. Graph_tool is also C based and has OpenMP enabled, so a lot of its algorithms run in parallel.
End of explanation
start = timeit.default_timer()
vertext_betweenness, edge_betweenness = betweenness(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
vertext_betweenness.a[107]
Explanation: Betweeness
End of explanation
start = timeit.default_timer()
v_closeness = closeness(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
v_closeness.a[107]
Explanation: Closeness
End of explanation
start = timeit.default_timer()
v_closeness = eigenvector(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
Explanation: Eigenvalue
End of explanation
start = timeit.default_timer()
v_closeness = pagerank(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
Explanation: Page Rank
End of explanation
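graph_tool returns PropertyMap objects rather than dictionaries, so ranking vertices takes one extra step. A sketch (note that the variable above is named v_closeness but at this point it holds the PageRank map):
# Sketch: rank vertices by their PageRank values; .a exposes the
# PropertyMap values as a numpy array
pr_values = v_closeness.a
top_vertices = pr_values.argsort()[::-1][:10]
print top_vertices
print pr_values[top_vertices]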
%matplotlib inline
import random as r
import networkx as nx
import matplotlib.pyplot as plot
class Person(object):
def __init__(self, id):
#Start with a single initial preference
self.id = id
self.i = r.random()
self.a = self.i
# we value initial opinion and subsequent information equally
self.alpha = 0.8
def __str__(self):
return (str(self.id))
def step(self):
# loop through the neighbors and aggregate their preferences
neighbors = g[self]
# all nodes in the list of neighbors are equally weighted, including self
w = 1/float((len(neighbors) + 1 ))
s = w * self.a
for node in neighbors:
s += w * node.a
# update my beliefs = initial belief plus sum of all influences
self.a = (1 - self.alpha) * self.i + self.alpha * s
density = 0.9
g = nx.Graph()
## create a network of Person objects
for i in range(10):
p = Person(i)
g.add_node(p)
## this will be a simple random graph, every pair of nodes has an
## equal probability of connection
for x in g.nodes():
for y in g.nodes():
if r.random() <= density:
g.add_edge(x,y)
## draw the resulting graph and color the nodes by their value
col = [n.a for n in g.nodes()]
pos = nx.spring_layout(g)
nx.draw_networkx(g, pos=pos, node_color=col)
## repeat for 30 times periods
for i in range(30):
## iterate through all nodes in the network and tell them to make a step
for node in g.nodes():
node.step()
## collect new attitude data, print it to the terminal and plot it.
col = [n.a for n in g.nodes()]
print col
plot.plot(col)
class Influencer(Person):
    def __init__(self, id):
self.id = id
self.i = r.random()
self.a = 1 ## opinion is strong and immovable
def step(self):
pass
influencers = 2
connections = 4
## add the influencers to the network and connect each to 3 other nodes
for i in range(influencers):
inf = Influencer("Inf" + str(i))
for x in range(connections):
g.add_edge(r.choice(g.nodes()), inf)
## repeat for 30 time periods
for i in range(30):
## iterate through all nodes in the network and tell them to make a step
for node in g.nodes():
node.step()
## collect new attitude data, print it to the terminal and plot it.
col = [n.a for n in g.nodes()]
#print col
plot.plot(col)
Explanation: Information diffusion modelling
I'm going to use an information diffusion model to simulate how information travels through the graph.
End of explanation
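To check that the attitudes actually converge, one simple diagnostic is to track the spread of opinions after each step. This is a sketch that reuses the g, plot and Person objects built above; the numpy import is an extra assumption.
import numpy as np

spreads = []
for t in range(30):
    for node in g.nodes():
        node.step()
    # the standard deviation of attitudes shrinks toward 0 as the network approaches consensus
    spreads.append(np.std([n.a for n in g.nodes()]))
plot.plot(spreads)
plot.ylabel('std of attitudes')
plot.show()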
import copy
import networkx as nx
import random
def independent_cascade(G, seeds, steps = 0):
"Return the active nodes of each diffusion step by the independent cascade
model
Parameters
-- -- -- -- -- -
G: graph
A NetworkX graph
seeds: list of nodes
The seed nodes for diffusion
steps: integer
The number of steps to diffuse.If steps <= 0, the diffusion runs until
no more nodes can be activated.If steps > 0, the diffusion runs for at
most "steps" rounds
Returns
-- -- -- -
layer_i_nodes: list of list of activated nodes
layer_i_nodes[0]: the seeds
layer_i_nodes[k]: the nodes activated at the kth diffusion step
Notes
-- -- -
When node v in G becomes active, it has a * single * chance of activating
each currently inactive neighbor w with probability p_ {
vw
}
Examples
-- -- -- --
>>> DG = nx.DiGraph() >>> DG.add_edges_from([(1, 2), (1, 3), (1, 5), (2, 1), (3, 2), (4, 2), (4, 3), \ >>> (4, 6), (5, 3), (5, 4), (5, 6), (6, 4), (6, 5)], act_prob = 0.2) >>> H = nx.independent_cascade(DG, [6])
References
-- -- -- -- --[1] David Kempe, Jon Kleinberg, and Eva Tardos.
Influential nodes in a diffusion model
for social networks.
In Automata, Languages and Programming, 2005.
if type(G) == nx.MultiGraph or type(G) == nx.MultiDiGraph:
raise Exception(\
"independent_cascade() is not defined for graphs with multiedges.")
# make sure the seeds are in the graph
for s in seeds:
if s not in G.nodes():
raise Exception("seed", s, "is not in graph")
# change to directed graph
if not G.is_directed():
DG = G.to_directed()
else:
DG = copy.deepcopy(G)
# init activation probabilities
for e in DG.edges():
if 'act_prob' not in DG[e[0]][e[1]]:
DG[e[0]][e[1]]['act_prob'] = 0.1
elif DG[e[0]][e[1]]['act_prob'] > 1:
raise Exception("edge activation probability:", DG[e[0]][e[1]]['act_prob'], "cannot be larger than 1")
# perform diffusion
A = copy.deepcopy(seeds)# prevent side effect
if steps <= 0: #perform diffusion until no more nodes can be activated
return _diffuse_all(DG, A)# perform diffusion for at most "steps" rounds
return _diffuse_k_rounds(DG, A, steps)
def _diffuse_all(G, A):
tried_edges = set()
layer_i_nodes = [ ]
layer_i_nodes.append([i for i in A]) # prevent side effect
while True:
len_old = len(A)
(A, activated_nodes_of_this_round, cur_tried_edges) = _diffuse_one_round(G, A, tried_edges)
layer_i_nodes.append(activated_nodes_of_this_round)
tried_edges = tried_edges.union(cur_tried_edges)
if len(A) == len_old:
break
return layer_i_nodes
def _diffuse_k_rounds(G, A, steps):
tried_edges = set()
layer_i_nodes = [ ]
layer_i_nodes.append([i for i in A])
while steps > 0 and len(A) < len(G):
len_old = len(A)
(A, activated_nodes_of_this_round, cur_tried_edges) = _diffuse_one_round(G, A, tried_edges)
layer_i_nodes.append(activated_nodes_of_this_round)
tried_edges = tried_edges.union(cur_tried_edges)
if len(A) == len_old:
break
steps -= 1
return layer_i_nodes
def _diffuse_one_round(G, A, tried_edges):
activated_nodes_of_this_round = set()
cur_tried_edges = set()
for s in A:
for nb in G.successors(s):
if nb in A or (s, nb) in tried_edges or (s, nb) in cur_tried_edges:
continue
if _prop_success(G, s, nb):
activated_nodes_of_this_round.add(nb)
cur_tried_edges.add((s, nb))
activated_nodes_of_this_round = list(activated_nodes_of_this_round)
A.extend(activated_nodes_of_this_round)
return A, activated_nodes_of_this_round, cur_tried_edges
def _prop_success(G, src, dest):
return random.random() <= G[src][dest]['act_prob']
run_times = 10
G = nx.DiGraph()
G.add_edge(1,2,act_prob=.5)
G.add_edge(2,1,act_prob=.5)
G.add_edge(1,3,act_prob=.2)
G.add_edge(3,1,act_prob=.2)
G.add_edge(2,3,act_prob=.3)
G.add_edge(2,4,act_prob=.5)
G.add_edge(3,4,act_prob=.1)
G.add_edge(3,5,act_prob=.2)
G.add_edge(4,5,act_prob=.2)
G.add_edge(5,6,act_prob=.6)
G.add_edge(6,5,act_prob=.6)
G.add_edge(6,4,act_prob=.3)
G.add_edge(6,2,act_prob=.4)
nx.draw_networkx(G)
independent_cascade(G, [1], steps=0)
n_A = 0.0
for i in range(run_times):
A = independent_cascade(G, [1], steps=1)
print A
for layer in A:
n_A += len(layer)
n_A / run_times
#assert_almost_equal(n_A / run_times, 1.7, places=1)
Explanation: Networkx Independent Cascade Model
End of explanation |
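Because each cascade is random, a natural next step is to estimate the expected spread of every seed by averaging over many runs. Below is a minimal sketch built on the independent_cascade function and the toy graph G above; expected_spread is a name introduced here.
def expected_spread(G, seed, runs=1000, steps=0):
    # average number of activated nodes (seed included) over `runs` simulations
    total = 0.0
    for _ in range(runs):
        layers = independent_cascade(G, [seed], steps=steps)
        total += sum(len(layer) for layer in layers)
    return total / runs

for node in sorted(G.nodes()):
    print 'seed {0}: expected spread {1:.2f}'.format(node, expected_spread(G, node))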
632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python's iterators and generators
TODO
* https
Step1: Iterator
TODO...
Step2: Generator
Goal
Step3: Generator iterator
https
Step4: Generator expression
https
Step5: Strange test | Python Code:
# import python packages here...
Explanation: Python's iterators and generators
TODO
* https://stackoverflow.com/questions/2776829/difference-between-pythons-generators-and-iterators
* https://docs.python.org/3/glossary.html#term-generator
* https://docs.python.org/3/glossary.html#term-generator-iterator
* https://docs.python.org/3/reference/simple_stmts.html#yield
* https://docs.python.org/3/library/stdtypes.html#iterator-types
* https://docs.python.org/3/library/functions.html#next
* https://docs.python.org/3/glossary.html#term-iterable
* https://docs.python.org/3/glossary.html#term-iterator
* https://docs.python.org/3/library/stdtypes.html#iterator.next
* https://docs.python.org/3/library/functions.html#iter
* https://docs.python.org/3/library/exceptions.html#StopIteration
* https://docs.python.org/3/reference/datamodel.html#object.iter
* https://docs.python.org/3/library/stdtypes.html#typeiter
* https://wiki.python.org/moin/Generators
* https://wiki.python.org/moin/Iterator
End of explanation
class Counter:
def __init__(self, max_value):
self.current_value = 0
self.max_value = max_value
def __iter__(self):
return self
def __next__(self):
if self.current_value >= self.max_value:
raise StopIteration
self.current_value += 1
return self.current_value
###
cpt = Counter(10) # cpt is an iterator
for i in cpt:
print(i)
###
cpt = Counter(10) # cpt is an iterator
print(next(cpt))
print(next(cpt))
###
print([i for i in cpt])
Explanation: Iterator
TODO...
End of explanation
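The same protocol also drives built-in iterables: iter() returns an iterator and next() advances it until StopIteration is raised. A small illustrative sketch:
it = iter([10, 20, 30])
print(next(it))   # 10
print(next(it))   # 20
print(next(it))   # 30
try:
    next(it)
except StopIteration:
    print("iterator exhausted")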
def counter(max_value):
current_value = 0
while current_value < max_value:
yield current_value # temporarily suspends processing, returning i and remembering the location execution state (including local variables and pending try-statements)
current_value += 1
for elem in counter(10):
print(elem)
list(counter(10))
def gen(nmax): # "gen" is a "generator"
n = 0
while n<nmax:
yield n
n += 1
gi = gen(10) # "gi" is a "generator iterator"
for n in gi:
print(n)
Explanation: Generator
Goal: increase memory efficiency of some functions. See https://wiki.python.org/moin/Generators.
Generator (or generator function)
https://docs.python.org/3/glossary.html#term-generator
A function which returns a generator iterator. It looks like a normal function except that it contains yield expressions for producing a series of values usable in a for-loop or that can be retrieved one at a time with the next() function.
Usually refers to a generator function, but may refer to a generator iterator in some contexts. In cases where the intended meaning isn't clear, using the full terms avoids ambiguity.
End of explanation
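A minimal, self-contained sketch of a working generator function (countdown is a name introduced here) may make the yield mechanics clearer before the cells below:
def countdown(n):
    while n > 0:
        yield n
        n -= 1

c = countdown(3)        # calling the generator function returns a generator iterator
print(next(c))          # 3
print(next(c))          # 2
print(list(c))          # [1], consuming whatever is left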
counter(10)
Explanation: Generator iterator
https://docs.python.org/3/glossary.html#term-generator-iterator
An object created by a generator function.
Each yield temporarily suspends processing, remembering the location execution state (including local variables and pending try-statements). When the generator iterator resumes, it picks-up where it left-off (in contrast to functions which start fresh on every invocation).
End of explanation
(i*i for i in range(10))
list((i*i for i in range(10)))
list(i*i for i in range(10))
sum(i*i for i in range(10)) # sum of squares 0, 1, 4, ... 81
Explanation: Generator expression
https://docs.python.org/3/glossary.html#term-generator-expression
An expression that returns an iterator. It looks like a normal expression followed by a for expression defining a loop variable, range, and an optional if expression. The combined expression generates values for an enclosing function:
End of explanation
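The memory benefit of generator expressions can be made concrete with sys.getsizeof. This is a sketch; the exact numbers depend on the Python build.
import sys

squares_list = [i*i for i in range(100000)]   # materializes every element up front
squares_gen  = (i*i for i in range(100000))   # lazy: produces values on demand
print(sys.getsizeof(squares_list))            # large, grows with the range
print(sys.getsizeof(squares_gen))             # small, constant-size generator object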
class AltRange:
def __init__(self, n):
self.i = 0
self.n = n
def run(self):
self.i = 0
while self.i < self.n:
yield self.i # temporarily suspends processing, returning i and remembering the location execution state (including local variables and pending try-statements)
self.i += 1
obj = AltRange(10)
obj.run()
list(obj.run())
for elem in obj.run():
obj.i += 1
print(elem)
Explanation: Strange test: does it memorize the object state? That is not really the right question, since the state of an object is always "memorized"; this only concerns the function's internal state (i.e. yield only memorizes the state of the function it is defined in, not the objects' state)...
End of explanation |
633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy
Step1: 1. Creating vectors
The simplest way to create a vector in NumPy is to define it explicitly with numpy.array(list, dtype=None, ...).
The list parameter is an iterable object from which the vector can be created, for example a list of numbers. The dtype parameter sets the type of the vector's values, e.g. float for real values and int for integers. If this parameter is not given, the data type is inferred from the type of the elements of the first argument.
Step2: The type of the vector's values can be obtained with numpy.ndarray.dtype
Step3: Another way to define a vector is the function numpy.arange(([start, ]stop, [step, ]...), which generates a sequence of numbers of the given type from the interval [start, stop) with step step
Step4: In essence, a vector in NumPy is a one-dimensional array, which matches the intuitive definition of a vector
Step5: Note
Step6: 3. Vector norms
Let us recall some norms that can be introduced in the space $\mathbb{R}^{n}$ and look at which libraries and functions can be used to compute them in NumPy.
p-norm
The p-norm (Hölder norm) of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula
Step7: $\ell_{1}$ norm
The $\ell_{1}$ norm
(also known as the Manhattan distance)
of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula
Step8: $\ell_{2}$ norm
The $\ell_{2}$ norm (also known as the Euclidean norm)
of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula
Step9: For more details on which other norms (including matrix norms) can be computed, see the documentation.
4. Distances between vectors
For two vectors $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ and $y = (y_{1}, \dots, y_{n}) \in \mathbb{R}^{n}$ the $\ell_{1}$ and $\ell_{2}$ distances are computed by the following formulas, respectively
Step10: The distance between vectors can also be computed with the function scipy.spatial.distance.cdist(XA, XB, metric='euclidean', p=2, ...) from the SciPy module, which is intended for scientific and engineering computations.
Step11: scipy.spatial.distance.cdist(...) requires XA and XB to be at least two-dimensional. For this reason, to use this function, the vectors considered in this notebook have to be converted to row vectors using the approaches discussed below.
The parameters XA, XB are the input row vectors, while metric and p define the distance metric
(for more details on the available metrics, see the documentation).
The first way to turn a vector into a row vector (or a column vector) is to use the method array.reshape(shape), where the shape parameter sets the dimensions of the vector (a tuple of numbers).
Step12: Note that after applying this method the dimensionality of the resulting row vectors equals shape. The next method performs the same transformation but does not change the dimensionality of the original vector.
In NumPy, dummy axes can be added to an object's dimensions with np.newaxis. To understand how this works, consider an example
Step13: Importantly, np.newaxis adds an axis of length 1 to the dimensions (which is logical, since the number of elements must be preserved). Thus, the new axis should be inserted where a 1 is needed in the shape.
Now let's compute the distances with scipy.spatial.distance.cdist(...), using np.newaxis to transform the vectors
Step14: This function also allows computing pairwise distances between sets of vectors. For example, suppose we have a matrix of size $m_{A} \times n$. It can be viewed as a description of $m_{A}$ observations in an $n$-dimensional space. Suppose there is also another such matrix of size $m_{B} \times n$ with $m_{B}$ vectors in the same $n$-dimensional space. It is often necessary to compute the pairwise distances between the vectors of the first and the second set. In that case we can use the function scipy.spatial.distance.cdist(XA, XB, metric='euclidean', p=2, ...), passing the two matrices as XA and XB. The function returns the matrix of pairwise distances of size $m_{A} \times m_{B}$, in which the element at position $[i, j]$ equals the distance between the $i$-th vector of the first set and the $j$-th vector of the second set.
In this case this function is preferable to numpy.linalg.norm(...), since it computes pairwise distances faster and more efficiently.
5. Dot product and the angle between vectors
Step15: The dot product in the space $\mathbb{R}^{n}$ for two vectors $x = (x_{1}, \dots, x_{n})$ and $y = (y_{1}, \dots, y_{n})$ is defined as
Step16: The length of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is the square root of the dot product, i.e. the length equals the Euclidean norm of the vector | Python Code:
import numpy as np
Explanation: NumPy: vectors and operations on them
In this notebook we will need the NumPy library. For convenience we import it under a shorter name:
End of explanation
a = np.array([1, 2, 3, 4])
print 'Вектор:\n', a
b = np.array([1, 2, 3, 4, 5], dtype=float)
print 'Вещественный вектор:\n', b
c = np.array([True, False, True], dtype=bool)
print 'Булевский вектор:\n', c
Explanation: 1. Creating vectors
The simplest way to create a vector in NumPy is to define it explicitly with numpy.array(list, dtype=None, ...).
The list parameter is an iterable object from which the vector can be created, for example a list of numbers. The dtype parameter sets the type of the vector's values, e.g. float for real values and int for integers. If this parameter is not given, the data type is inferred from the type of the elements of the first argument.
End of explanation
print 'Тип булевского вектора:\n', c.dtype
Explanation: The type of the vector's values can be obtained with numpy.ndarray.dtype:
End of explanation
d = np.arange(start=10, stop=20, step=2) # последнее значение не включается!
print 'Вектор чисел от 10 до 20 с шагом 2:\n', d
f = np.arange(start=0, stop=1, step=0.3, dtype=float)
print 'Вещественный вектор чисел от 0 до 1 с шагом 0.3:\n', f
Explanation: Another way to define a vector is the function numpy.arange(([start, ]stop, [step, ]...), which generates a sequence of numbers of the given type from the interval [start, stop) with step step:
End of explanation
print c.ndim # количество размерностей
print c.shape # shape фактически задает длину вектора
Explanation: In essence, a vector in NumPy is a one-dimensional array, which matches the intuitive definition of a vector:
End of explanation
a = np.array([1, 2, 3])
b = np.array([6, 5, 4])
k = 2
print 'Вектор a:', a
print 'Вектор b:', b
print 'Число k:', k
print 'Сумма a и b:\n', a + b
print 'Разность a и b:\n', a - b
print 'Покоординатное умножение a и b:\n', a * b
print 'Умножение вектора на число (осуществляется покоординатно):\n', k * a
Explanation: Note: a vector and a one-dimensional array are identical concepts in NumPy. Besides that, there are also the notions of a column vector and a row vector which, although they mathematically describe the same object, are two-dimensional arrays and have a different value of the shape field (in that case the field consists of two numbers, one of which equals one). These subtleties will be covered in the next lesson.
For more details on how to create vectors in NumPy,
see the documentation.
2. Operations on vectors
Vectors in NumPy can be added, subtracted, multiplied by a number, and multiplied by another vector (element-wise):
End of explanation
from numpy.linalg import norm
Explanation: 3. Vector norms
Let us recall some norms that can be introduced in the space $\mathbb{R}^{n}$ and look at which libraries and functions can be used to compute them in NumPy.
p-norm
The p-norm (Hölder norm) of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula:
$$
\left\Vert x \right\Vert_{p} = \left( \sum_{i=1}^n \left| x_{i} \right|^{p} \right)^{1 / p},~p \geq 1.
$$
In the special cases:
* $p = 1$ gives the $\ell_{1}$ norm
* $p = 2$ gives the $\ell_{2}$ norm
Next we will need the numpy.linalg module, which implements some linear algebra routines. To compute the various norms we use the function numpy.linalg.norm(x, ord=None, ...), where x is the input vector and ord is the parameter that selects the norm (we will consider two of its values, 1 and 2). Let's import this function:
End of explanation
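As a quick sanity check (a sketch; x and p are toy values introduced here), the same norms can be computed directly from the definitions and compared with numpy.linalg.norm:
x = np.array([1.0, 2.0, -3.0])
p = 3
print np.sum(np.abs(x)**p)**(1.0/p), norm(x, ord=p)   # p-norm from the definition vs. NumPy
print np.sum(np.abs(x)), norm(x, ord=1)               # l1 norm
print np.sqrt(np.sum(x**2)), norm(x, ord=2)           # l2 norm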
a = np.array([1, 2, -3])
print 'Вектор a:', a
print 'L1 норма вектора a:\n', norm(a, ord=1)
Explanation: $\ell_{1}$ norm
The $\ell_{1}$ norm
(also known as the Manhattan distance)
of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula:
$$
\left\Vert x \right\Vert_{1} = \sum_{i=1}^n \left| x_{i} \right|.
$$
It corresponds to the parameter ord=1 of the function numpy.linalg.norm(x, ord=None, ...).
End of explanation
a = np.array([1, 2, -3])
print 'Вектор a:', a
print 'L2 норма вектора a:\n', norm(a, ord=2)
Explanation: $\ell_{2}$ norm
The $\ell_{2}$ norm (also known as the Euclidean norm)
of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula:
$$
\left\Vert x \right\Vert_{2} = \sqrt{\sum_{i=1}^n \left( x_{i} \right)^2}.
$$
It corresponds to the parameter ord=2 of the function numpy.linalg.norm(x, ord=None, ...).
End of explanation
a = np.array([1, 2, -3])
b = np.array([-4, 3, 8])
print 'Вектор a:', a
print 'Вектор b:', b
print 'L1 расстояние между векторами a и b:\n', norm(a - b, ord=1)
print 'L2 расстояние между векторами a и b:\n', norm(a - b, ord=2)
Explanation: For more details on which other norms (including matrix norms) can be computed, see the documentation.
4. Distances between vectors
For two vectors $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ and $y = (y_{1}, \dots, y_{n}) \in \mathbb{R}^{n}$ the $\ell_{1}$ and $\ell_{2}$ distances are computed by the following formulas, respectively:
$$
\rho_{1}\left( x, y \right) = \left\Vert x - y \right\Vert_{1} = \sum_{i=1}^n \left| x_{i} - y_{i} \right|
$$
$$
\rho_{2}\left( x, y \right) = \left\Vert x - y \right\Vert_{2} =
\sqrt{\sum_{i=1}^n \left( x_{i} - y_{i} \right)^2}.
$$
End of explanation
from scipy.spatial.distance import cdist
Explanation: The distance between vectors can also be computed with the function scipy.spatial.distance.cdist(XA, XB, metric='euclidean', p=2, ...) from the SciPy module, which is intended for scientific and engineering computations.
End of explanation
a = np.array([6, 3, -5])
b = np.array([-1, 0, 7])
print 'Вектор a:', a
print 'Его размерность:', a.shape
print 'Вектор b:', b
print 'Его размерность:', b.shape
a = a.reshape((1, 3))
b = b.reshape((1, 3))
print 'После применения метода reshape:\n'
print 'Вектор-строка a:', a
print 'Его размерность:', a.shape
print 'Вектор-строка b:', b
print 'Его размерность:', b.shape
print 'Манхэттенское расстояние между a и b (через cdist):', cdist(a, b, metric='cityblock')
Explanation: scipy.spatial.distance.cdist(...) requires XA and XB to be at least two-dimensional. For this reason, to use this function, the vectors considered in this notebook have to be converted to row vectors using the approaches discussed below.
The parameters XA, XB are the input row vectors, while metric and p define the distance metric
(for more details on the available metrics, see the documentation).
The first way to turn a vector into a row vector (or a column vector) is to use the method array.reshape(shape), where the shape parameter sets the dimensions of the vector (a tuple of numbers).
End of explanation
d = np.array([3, 0, 8, 9, -10])
print 'Вектор d:', d
print 'Его размерность:', d.shape
print 'Вектор d с newaxis --> вектор-строка:\n', d[np.newaxis, :]
print 'Полученная размерность:', d[np.newaxis, :].shape
print 'Вектор d с newaxis --> вектор-столбец:\n', d[:, np.newaxis]
print 'Полученная размерность:', d[:, np.newaxis].shape
Explanation: Note that after applying this method the dimensionality of the resulting row vectors equals shape. The next method performs the same transformation but does not change the dimensionality of the original vector.
In NumPy, dummy axes can be added to an object's dimensions with np.newaxis. To understand how this works, consider an example:
End of explanation
a = np.array([6, 3, -5])
b = np.array([-1, 0, 7])
print 'Евклидово расстояние между a и b (через cdist):', cdist(a[np.newaxis, :],
b[np.newaxis, :],
metric='euclidean')
Explanation: Importantly, np.newaxis adds an axis of length 1 to the dimensions (which is logical, since the number of elements must be preserved). Thus, the new axis should be inserted where a 1 is needed in the shape.
Now let's compute the distances with scipy.spatial.distance.cdist(...), using np.newaxis to transform the vectors:
End of explanation
a = np.array([0, 5, -1])
b = np.array([-4, 9, 3])
print 'Вектор a:', a
print 'Вектор b:', b
Explanation: This function also allows computing pairwise distances between sets of vectors. For example, suppose we have a matrix of size $m_{A} \times n$. It can be viewed as a description of $m_{A}$ observations in an $n$-dimensional space. Suppose there is also another such matrix of size $m_{B} \times n$ with $m_{B}$ vectors in the same $n$-dimensional space. It is often necessary to compute the pairwise distances between the vectors of the first and the second set. In that case we can use the function scipy.spatial.distance.cdist(XA, XB, metric='euclidean', p=2, ...), passing the two matrices as XA and XB. The function returns the matrix of pairwise distances of size $m_{A} \times m_{B}$, in which the element at position $[i, j]$ equals the distance between the $i$-th vector of the first set and the $j$-th vector of the second set.
In this case this function is preferable to numpy.linalg.norm(...), since it computes pairwise distances faster and more efficiently.
5. Dot product and the angle between vectors
End of explanation
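A small illustration of the pairwise mode (a sketch; A and B are toy matrices introduced here): two sets of vectors of the same dimensionality yield an $m_{A} \times m_{B}$ distance matrix.
A = np.array([[0., 0., 0.],
              [1., 1., 1.]])        # m_A = 2 observations in R^3
B = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [3., 3., 3.]])        # m_B = 3 observations in R^3
D = cdist(A, B, metric='euclidean')
print D.shape                        # (2, 3)
print D                              # D[i, j] is the distance between A[i] and B[j]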
print 'Скалярное произведение a и b (через функцию):', np.dot(a, b)
print 'Скалярное произведение a и b (через метод):', a.dot(b)
Explanation: The dot product in the space $\mathbb{R}^{n}$ for two vectors $x = (x_{1}, \dots, x_{n})$ and $y = (y_{1}, \dots, y_{n})$ is defined as:
$$
\langle x, y \rangle = \sum_{i=1}^n x_{i} y_{i}.
$$
The dot product of two vectors can be computed with the function numpy.dot(a, b, ...) or the method vec1.dot(vec2), where vec1 and vec2 are the input vectors. These functions are also suitable for matrix multiplication, which will be discussed in the next lesson.
End of explanation
cos_angle = np.dot(a, b) / norm(a) / norm(b)
print 'Косинус угла между a и b:', cos_angle
print 'Сам угол:', np.arccos(cos_angle)
Explanation: Длиной вектора $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ называется квадратный корень из скалярного произведения, то есть длина равна евклидовой норме вектора:
$$
\left| x \right| = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^n x_{i}^2} = \left\Vert x \right\Vert_{2}.
$$
Теперь, когда мы знаем расстояние между двумя ненулевыми векторами и их длины, мы можем вычислить угол между ними через скалярное произведение:
$$
\langle x, y \rangle = \left| x \right| | y | \cos(\alpha)
\implies \cos(\alpha) = \frac{\langle x, y \rangle}{\left| x \right| | y |},
$$
где $\alpha \in [0, \pi]$ — угол между векторами $x$ и $y$.
End of explanation |
634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the file with names, sex, and number of children from the site
https
Step1: Count the number of births by sex
Step2: Now let's create the first pivot table
Step3: Let's plot a chart
Step4: Let's look at the information about our DataFrame named names
Step5: Select boys' and girls' names by how often they occur | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
names1880 = pd.read_csv('/Users/kirill/Downloads/names/yob1880.txt',
names= ['name', 'sex', 'births'])
names1880
Explanation: Load the file with names, sex, and number of children from the site
https://www.ssa.gov/oact/babynames/limits.html
Then we will combine all the data into one table.
End of explanation
names1880.groupby('sex').sum()
# Для начала создадим новое поле year
years=range(1880,2017)
pieces = []
columns = [ 'name', 'sex', 'births' ]
for year in years:
#меняем год в имени файла
path = '/Users/kirill/Downloads/names/yob%d.txt' %year
#читаем файл с полями
frame = pd.read_csv(path, names=columns)
#дописываем поле года
frame['year'] = year
#собираем в единый список
pieces.append(frame)
#создаем DataFrame, внимание! игнорируем исходные номера строк
names = pd.concat(pieces,ignore_index = True)
names
Explanation: Count the number of births by sex
End of explanation
total_births=names.pivot_table(
'births',
index = 'year', # строки
columns='sex', # колонки
aggfunc=sum # сумма по births
)
total_births
Explanation: Now let's create the first pivot table
End of explanation
total_births.plot(title='Общее количество роодившихся детей')
plt.show()
Explanation: Let's plot a chart
End of explanation
names.info()
#names.info(memory_usage='deep')
Explanation: Let's look at the information about our DataFrame named names
End of explanation
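If the memory usage reported above is a concern, one common option (a sketch, using the column names of this notebook; names_compact is introduced only for the illustration) is to store the repetitive string columns as categoricals:
names_compact = names.copy()
names_compact['name'] = names_compact['name'].astype('category')
names_compact['sex'] = names_compact['sex'].astype('category')
names_compact.info(memory_usage='deep')   # noticeably smaller than the original frame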
grouped=names.groupby(['name','sex'])['births'].sum().reset_index()
grouped_m=grouped[grouped.sex=='M']
grouped_f=grouped[grouped.sex=='F']
grouped_f
most_popular_m=names[names.sex=='M'].drop('year',axis=1).groupby('name').sum().sort_values('births',ascending=False)
most_popular_m_10=most_popular_m[:10].reset_index()
most_popular_m_10
james_f=names[(names.name=='James') & (names.sex=='F')]
plt.plot(james_f['year'],james_f['births'])
plt.show()
james_m=names[(names.name=='James') & (names.sex=='M')]
plt.plot(james_m['year'],james_m['births'])
plt.show()
Explanation: Select boys' and girls' names by how often they occur
End of explanation |
635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use decision optimization to determine Cloud balancing.
This tutorial includes everything you need to set up decision optimization engines, build mathematical programming models, and a solve a capacitated facility location problem to do server load balancing.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step5: Define the decision variables
Step6: Express the business constraints
Step7: Express the objective
Step8: Solve with Decision Optimization
You will get the best solution found after n seconds, due to a time limit parameter.
Step9: Step 5 | Python Code:
import sys
try:
import docplex.mp
except:
raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
Explanation: Use decision optimization to determine Cloud balancing.
This tutorial includes everything you need to set up decision optimization engines, build mathematical programming models, and a solve a capacitated facility location problem to do server load balancing.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition
Table of contents:
The business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Import the library
Step 2: Model the Data
Step 3: Prepare the data
Step 4: Set up the prescriptive model
Define the decision variables
Express the business constraints
Express the objective
Solve with Decision Optimization
Step 5: Investigate the solution and run an example analysis
Summary
The business problem: Capacitated Facility Location.
This example looks at cloud load balancing to keep a service running in the cloud at reasonable cost by reducing the expense of running cloud servers, minimizing risk and human time due to rebalancing, and balancing the sleeping workload across servers.
The different KPIs are optimized in a hierarchical manner: first, the number of active servers is minimized, then the total number of migrations is minimized, and finally the sleeping workload is balanced.
How decision optimization can help
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
Automate the complex decisions and trade-offs to better manage your limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Import the library
Run the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming (docplex.mp) and Constraint Programming (docplex.cp).
End of explanation
from collections import namedtuple
class TUser(namedtuple("TUser", ["id", "running", "sleeping", "current_server"])):
def __str__(self):
return self.id
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
try:
from urllib2 import urlopen
except ImportError:
from urllib.request import urlopen
import csv
data_url = "https://github.com/vberaudi/utwt/blob/master/users.csv?raw=true"
xld = urlopen(data_url).read()
xlds = StringIO(xld.decode('utf-8'))
reader = csv.reader(xlds)
users = [(row[0], int(row[1]), int(row[2]), row[3]) for row in reader]
Explanation: Step 2: Model the data
In this scenario, the data is simple: it is a small CSV file of users hosted on GitHub, read directly over HTTP.
End of explanation
max_processes_per_server = 50
users = [TUser(*user_row) for user_row in users]
servers = list({t.current_server for t in users})
Explanation: Step 3: Prepare the data
Each raw CSV row is wrapped into a TUser named tuple, and the set of servers currently in use is derived from the users' current assignments.
A fixed capacity of 50 running processes per server is also defined for use in the constraints below.
End of explanation
from docplex.mp.model import Model
mdl = Model("load_balancing")
Explanation: Step 4: Set up the prescriptive model
Create the DOcplex model
The model contains all the business constraints and defines the objective.
End of explanation
active_var_by_server = mdl.binary_var_dict(servers, name='isActive')
def user_server_pair_namer(u_s):
u, s = u_s
return '%s_to_%s' % (u.id, s)
assign_user_to_server_vars = mdl.binary_var_matrix(users, servers, user_server_pair_namer)
max_sleeping_workload = mdl.integer_var(name="max_sleeping_processes")
def _is_migration(user, server):
    """Returns True if server is not the user's current.
    Used in setup of constraints."""
return server != user.current_server
Explanation: Define the decision variables
End of explanation
mdl.add_constraints(
mdl.sum(assign_user_to_server_vars[u, s] * u.running for u in users) <= max_processes_per_server
for s in servers)
mdl.print_information()
# each assignment var <u, s> is <= active_server(s)
for s in servers:
for u in users:
ct_name = 'ct_assign_to_active_{0!s}_{1!s}'.format(u, s)
mdl.add_constraint(assign_user_to_server_vars[u, s] <= active_var_by_server[s], ct_name)
# sum of assignment vars for (u, all s in servers) == 1
for u in users:
ct_name = 'ct_unique_server_%s' % (u[0])
mdl.add_constraint(mdl.sum((assign_user_to_server_vars[u, s] for s in servers)) == 1.0, ct_name)
mdl.print_information()
number_of_active_servers = mdl.sum((active_var_by_server[svr] for svr in servers))
mdl.add_kpi(number_of_active_servers, "Number of active servers")
number_of_migrations = mdl.sum(
assign_user_to_server_vars[u, s] for u in users for s in servers if _is_migration(u, s))
mdl.add_kpi(number_of_migrations, "Total number of migrations")
for s in servers:
ct_name = 'ct_define_max_sleeping_%s' % s
mdl.add_constraint(
mdl.sum(
assign_user_to_server_vars[u, s] * u.sleeping for u in users) <= max_sleeping_workload,
ct_name)
mdl.add_kpi(max_sleeping_workload, "Max sleeping workload")
mdl.print_information()
Explanation: Express the business constraints
End of explanation
# Set objective function
mdl.minimize(number_of_active_servers)
mdl.print_information()
Explanation: Express the objective
End of explanation
# build an ordered sequence of goals
ordered_kpi_keywords = ["servers", "migrations", "sleeping"]
ordered_goals = [mdl.kpi_by_name(k) for k in ordered_kpi_keywords]
mdl.solve_with_goals(ordered_goals)
mdl.report()
Explanation: Solve with Decision Optimization
You will get the best solution found after n seconds, due to a time limit parameter.
End of explanation
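No time limit is actually set in this notebook; if one is needed, it can be attached to the model before solving. The parameter path below mirrors the CPLEX timelimit parameter exposed by docplex and should be treated as an assumption to verify against your docplex version. A sketch:
# cap each solve at 20 seconds of CPLEX time (assumed docplex parameter path)
mdl.parameters.timelimit = 20
mdl.solve_with_goals(ordered_goals)
mdl.report()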
active_servers = sorted([s for s in servers if active_var_by_server[s].solution_value == 1])
print("Active Servers: {}".format(active_servers))
print("*** User assignment ***")
for (u, s) in sorted(assign_user_to_server_vars):
if assign_user_to_server_vars[(u, s)].solution_value == 1:
print("{} uses {}, migration: {}".format(u, s, "yes" if _is_migration(u, s) else "no"))
print("*** Servers sleeping processes ***")
for s in active_servers:
sleeping = sum(assign_user_to_server_vars[u, s].solution_value * u.sleeping for u in users)
print("Server: {} #sleeping={}".format(s, sleeping))
Explanation: Step 5: Investigate the solution and then run an example analysis
End of explanation |
636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: MNIST Test
WNixalo 2018/5/19-20;25-26
Making sure I have a working baseline for the MNIST dataset. PyTorch version
Step2: 1. Data
1.1 PyTorch method
Step4: 1.1.1 Aside
Step5: 1.3 Fast AI Model Data object
inception_stats have the same Normalization that the pytorch transform above uses for its dataloader. I don't do any data augmentation besides that normalization. I also use the same train/val indices from the pytorch dataloader – to ensure my pytorch model and fastai learner are working on the same data.
Additionally in order to use pretrained models I'm going to concatenate the dataset to have 3 channels instead of 1 by copying dimensions. Another option is to forego a pretrained model and use a fresh resnet set to have only 1 input channel.
Step9: 2. Architecture
I want to have a "solid" simple ConvNet to use throughout these experiments. This model will include a large field-of-view input conv layer followed by several conv layers. Each conv layer uses BatchNorm and Leaky ReLU (I don't know if this is better than ReLU, but it sounds like a good'ish idea to me). The model's head uses an AdaptiveConcat Pooling layer (Fast AI invention that concatenates two adaptive average and max pooling layers) leading to a Linear layer. This model doesn't use dropout (I'll add that if it looks like it needs it).
Step10: 2.0.1 Aside
Step11: 2.1 Fast AI Learner
I'll use two fast.ai learners
Step12: 2.1.1 Aside
Step13: By default only the 'head' classification layer is trainable
Step14: Construct the custom learner with ConvnetBuilder in order to make it's layers iterable
Step15: 2.1.2 Recap
Step16: The Fast.ai Learners
Step17: 4. Training
As far as I know, training in base PyTorch is tedious, so I'll do a sanity-check of it first, then do all my training with Fast AI. See ref
Step18: There are more improvements to doing train / valid phases – including learning rate scheduling and automatically saving best weights (see
Step19: NOTE 1 the criterion and optimizer need to be initialized after the model is sent to the GPU if it is. See pytorch thread.
NOTE 2
Step20: Manual PyTorch train / val training phases. See
Step21: Previous run on CPU
Step22: 4.2 with Fast AI
4.2.1 Finding Learning Rates
To keep things simple, I won't be using 1-Cycle, Progressive Resizing, or much in the way of Cyclical Learning Rates. That could be a topic for later runs.
Step23: I'll use 1e-2 as the lr for all of them.
Step24: 4.2.2 custom_learner
Step25: 4.2.2.1 Aside
Step26: 4.2.3 resnet_learner
Step27: 4.2.4 pt_res_learner
Step28: 5. Testing
5.0.1 PyTorch convnet
Step30: Cool, even with that little training it's able to get a lot right.
Step31: 97-98% accuracy on test set. Just checking
Step32: 5.0.2 custom_learner
Step33: Aside
Step34: I had some confusion. You do take the max as the top prediction; to get the actual probabilities, since it's a log softmax ouput, you exponentiate.
Step35: Untrained CNN gets sub-random (< 10%) accuracy. No surprise, it only ever guesses '5', and sometimes '4'
Step36: 5.0.3 resnet_learner
Step37: 5.0.4 pt_res_learner
Step38: Further Training & Testing
Seeing how far I can go (simply) before overfitting
Step39: Aside | Python Code:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from pathlib import Path
import os
import struct # for IDX conversion
import gzip # for IDX conversion
from urllib.request import urlretrieve # for IDX conversion
from fastai.conv_learner import * # if you want to use fastai Learner
PATH = Path('data/mnist')
bs = 64
sz = 28
def plot_loss(learner, val=None):
    """Plots iterations vs loss and learning rate. Plots training or validation."""
lrs = learner.sched.lrs
x_axis = range(len(lrs))
loss = learner.sched.losses
min_loss = min(loss)
fig,ax = plt.subplots(figsize=(14,7))
ax.set_xlim(left=-20, right=x_axis[-1]+20)
ax.plot(x_axis, loss, label='loss')
ax.plot(x_axis, lrs, label='learning rate', color='firebrick');
ax.set_xlabel('Iterations')
ax.set_ylabel('Loss & LR')
# Validation Loss
if val is not None:
ep_end = len(lrs) // len(val)
ax.scatter(range(ep_end-1, len(lrs), ep_end), val, c='r', s=20, label='val loss')
# Minimum Loss
ax.axhline(y=min_loss, c='r', alpha=0.9, label='Min loss', lw=0.5)
    idx = np.argmin(loss)
yscal = 1 / (ax.get_ylim()[1] - ax.get_ylim()[0])
yrltv = (min_loss - ax.get_ylim()[0]) * yscal
ax.axvline(x=x_axis[idx], ymin=0.5*yrltv, ymax=1.5*yrltv, c='r', alpha=0.9, lw=0.5)
# 150% Minimum Loss
idx = np.where(np.array(loss) <= 1.5*min_loss)[0][0]
ax.axvline(x=x_axis[idx], c='slateblue', alpha=0.9, label='50% above Min Loss', lw=0.5)
# 50% Maximum Loss
idx = np.where(np.array(loss) <= 0.5*max(loss))[0][0]
ax.axvline(x=x_axis[idx], c='teal', alpha=0.9, label='50% of Max Loss', lw=0.5)
fig.legend(bbox_to_anchor=(0.82,0.82), loc="upper right")
Explanation: MNIST Test
WNixalo 2018/5/19-20;25-26
Making sure I have a working baseline for the MNIST dataset. PyTorch version: 0.3.1.post2
For a walkthrough on converting binary IDX files to NumPy arrays, see idx-to-numpy.ipynb
For a walkthrough debugging several issues with dataloading, see mnist-dataloader-issue.ipynb
This notebook is in large part a practice stage for a research-oriented work flow.
Imports
End of explanation
# torchvision datasets are PIL.Image images of range [0,1]. Must trsfm them
# to Tensors of normalized range [-1,1]
transform = torchvision.transforms.Compose(
[torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
# see: https://gist.github.com/kevinzakka/d33bf8d6c7f06a9d8c76d97a7879f5cb
# frm: https://github.com/pytorch/pytorch/issues/1106
trainset = torchvision.datasets.MNIST(root=PATH, train=True, download=True,
transform=transform)
validset = torchvision.datasets.MNIST(root=PATH, train=True, download=True,
transform=transform)
testset = torchvision.datasets.MNIST(root=PATH, train=False, download=True,
transform=transform)
p_val = 0.15
n_val = int(p_val * len(trainset))
idxs = np.arange(len(trainset))
np.random.shuffle(idxs)
train_idxs, valid_idxs = idxs[n_val:], idxs[:n_val]
train_sampler = torch.utils.data.sampler.SubsetRandomSampler(train_idxs)
valid_sampler = torch.utils.data.sampler.SequentialSampler(valid_idxs)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=bs,
sampler=train_sampler, num_workers=2)
validloader = torch.utils.data.DataLoader(validset, batch_size=bs,
sampler=valid_sampler, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=bs, num_workers=2)
classes = [str(i) for i in range(10)]; classes
Explanation: 1. Data
1.1 PyTorch method:
The basic method for creating a DataLoader in PyTorch. Adapted from their tutorial and an older notebook.
- NOTE the normalization values are largely arbitrary.
End of explanation
def download_mnist(path=Path('data/mnist')):
os.makedirs(path, exist_ok=True)
urls = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz',]
for url in urls:
fname = url.split('/')[-1]
if not os.path.exists(path/fname): urlretrieve(url, path/fname)
def read_IDX(fname):
    """see: https://gist.github.com/tylerneylon/ce60e8a06e7506ac45788443f7269e40"""
with gzip.open(fname) as f:
zero, data_type, dims = struct.unpack('>HBB', f.read(4))
shape = tuple(struct.unpack('>I', f.read(4))[0] for d in range(dims))
return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)
download_mnist()
fnames = [o for o in os.listdir(PATH) if 'ubyte.gz' in o] # could just use glob
fnames
# thanks to: https://stackoverflow.com/a/14849322
trn_x_idx = [i for i,s in enumerate(fnames) if 'train-imag' in s][0]
trn_y_idx = [i for i,s in enumerate(fnames) if 'train-lab' in s][0]
# test data:
tst_x_idx = [i for i,s in enumerate(fnames) if 't10k-imag' in s][0]
tst_y_idx = [i for i,s in enumerate(fnames) if 't10k-lab' in s][0]
# load entire IDX files into memory as ndarrays
train_x_array = read_IDX(PATH/fnames[trn_x_idx])
train_y_array = read_IDX(PATH/fnames[trn_y_idx])
# test data:
test_x_array = read_IDX(PATH/fnames[tst_x_idx])
test_y_array = read_IDX(PATH/fnames[tst_y_idx])
# size of numpy arrays in MBs
train_x_array.nbytes / 2**20, train_y_array.nbytes / 2**20
Explanation: 1.1.1 Aside: DataLoaders – PyTorch & fastai:
See mnist-dataloader-issue.ipynb for an in depth dive.
The FastAI DataLoader shares some similarities in construction with the PyTorch one. The logic defining pytorch's DataLoader in the PyTorch source code:
if batch_sampler is None:
if sampler is None:
if shuffle:
sampler = RandomSampler(dataset)
else:
sampler = SequentialSampler(dataset)
batch_sampler = BatchSampler(sampler, batch_size, drop_last)
is the same as that in fast.ai's
if batch_sampler is None:
if sampler is None:
sampler = RandomSampler(dataset) if shuffle else SequentialSampler(dataset)
batch_sampler = BatchSampler(sampler, batch_size, drop_last)
So now I'm not confused about not using a batch sampler when building a pytorch dataloader, although I see one in fastai's DataLoader –– that's because pytorch does it too.
1.2 Custom Method (for Fast AI Model Data)
This loads and converts the MNIST IDX files into NumPy arrays. For MNIST data this looks to be about 45 MB for the images. This way allows for easy use of FastAI's ModelData class, and thus its (extremely useful) Learner abstraction and all other capabilities that come with it. The arrays can be loaded via: ImageClassifierData.from_arrays(..)
End of explanation
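For completeness, that equivalence can be made explicit by building the batch sampler yourself. A sketch that mirrors the logic quoted above, reusing validset and bs; the explicit_ names are introduced here:
explicit_batch_sampler = torch.utils.data.sampler.BatchSampler(
    torch.utils.data.sampler.SequentialSampler(validset), batch_size=bs, drop_last=False)
explicit_loader = torch.utils.data.DataLoader(validset, batch_sampler=explicit_batch_sampler,
                                              num_workers=2)
xb, yb = next(iter(explicit_loader))
print(xb.size(), yb.size())   # same batches a plain DataLoader(validset, batch_size=bs) would yield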
tfms = tfms_from_stats(inception_stats, sz=sz)
# `inception_stats` are: ([0.5,0.5,0.5],[0.5,0.5,0.5])
# see: https://github.com/fastai/fastai/blob/master/fastai/transforms.py#L695
# using same trn/val indices as pytorch dataloader
valid_x_array, valid_y_array = train_x_array[valid_idxs], train_y_array[valid_idxs]
train_x_array, train_y_array = train_x_array[train_idxs], train_y_array[train_idxs]
# stack dims for 3 channels
train_x_array = np.stack((train_x_array, train_x_array, train_x_array), axis=-1)
valid_x_array = np.stack((valid_x_array, valid_x_array, valid_x_array), axis=-1)
test_x_array = np.stack((test_x_array, test_x_array, test_x_array), axis=-1)
# convert labels to np.int8
train_y_array = train_y_array.astype(np.int8)
valid_y_array = valid_y_array.astype(np.int8)
test_y_array = test_y_array.astype(np.int8)
model_data = ImageClassifierData.from_arrays(PATH,
(train_x_array, train_y_array), (valid_x_array, valid_y_array),
bs=bs, tfms=tfms, num_workers=2, test=(test_x_array, test_y_array))
Explanation: 1.3 Fast AI Model Data object
inception_stats have the same Normalization that the pytorch transform above uses for its dataloader. I don't do any data augmentation besides that normalization. I also use the same train/val indices from the pytorch dataloader – to ensure my pytorch model and fastai learner are working on the same data.
Additionally in order to use pretrained models I'm going to concatenate the dataset to have 3 channels instead of 1 by copying dimensions. Another option is to forego a pretrained model and use a fresh resnet set to have only 1 input channel.
End of explanation
class AdaptiveConcatPool2d(nn.Module):
    """fast.ai, see: https://github.com/fastai/fastai/tree/master/fastai/layers.py"""
def __init__(self, sz=None):
super().__init__()
sz = sz or (1,1)
self.ap = torch.nn.AdaptiveAvgPool2d(sz)
        self.mp = torch.nn.AdaptiveMaxPool2d(sz)
def forward(self, x):
return torch.cat([self.mp(x), self.ap(x)], 1)
class Flatten(nn.Module):
    """fast.ai, see: https://github.com/fastai/fastai/tree/master/fastai/layers.py"""
def __init__(self):
super().__init__()
def forward(self, x):
return x.view(x.size(0), -1)
class ConvBNLayer(nn.Module):
    """conv layer with batchnorm"""
def __init__(self, ch_in, ch_out, kernel_size=3, stride=1, padding=0):
super().__init__()
        self.conv = nn.Conv2d(ch_in, ch_out, kernel_size=kernel_size, stride=stride, padding=padding)
self.bn = nn.BatchNorm2d(ch_out, momentum=0.1) # mom at default 0.1
self.lrelu = nn.LeakyReLU(0.01, inplace=True) # neg slope at default 0.01
def forward(self, x): return self.lrelu(self.bn(self.conv(x)))
class ConvNet(nn.Module):
# see ref: https://github.com/fastai/fastai/blob/master/fastai/models/darknet.py
def __init__(self, ch_in=1):
super().__init__()
self.conv0 = ConvBNLayer(ch_in, 16, kernel_size=7, stride=1, padding=2) # large FoV Conv
self.conv1 = ConvBNLayer(16, 32)
self.conv2 = ConvBNLayer(32, 64)
self.conv3 = ConvBNLayer(64, 128)
self.neck = nn.Sequential(*[AdaptiveConcatPool2d(1), Flatten()])
        self.head = nn.Sequential(*[nn.BatchNorm1d(256),
nn.Dropout(p=0.25),
nn.Linear(256, 10)])
def forward(self, x):
x = self.conv0(x)
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = self.neck(x)
x = self.head(x)
return F.log_softmax(x, dim=-1)
convnet = ConvNet()
Explanation: 2. Architecture
I want to have a "solid" simple ConvNet to use throughout these experiments. This model will include a large field-of-view input conv layer followed by several conv layers. Each conv layer uses BatchNorm and Leaky ReLU (I don't know if this is better than ReLU, but it sounds like a good'ish idea to me). The model's head uses an AdaptiveConcat Pooling layer (Fast AI invention that concatenates two adaptive average and max pooling layers) leading to a Linear layer. This model doesn't use dropout (I'll add that if it looks like it needs it).
End of explanation
x,y = next(iter(trainloader))
x,y = Variable(x), Variable(y)
convnet(x)
Explanation: 2.0.1 Aside: Discovering AdaptiveConcatPool doubles input tensor length
End of explanation
model_data.c, model_data.is_multi, model_data.is_reg
resnet_model = ConvnetBuilder(resnet18, model_data.c, model_data.is_multi, model_data.is_reg, pretrained=False)
resnet_learner = ConvLearner(model_data, resnet_model)
custom_learner = ConvLearner.from_model_data(ConvNet(ch_in=3), model_data)
pt_res_learner = ConvLearner.pretrained(resnet18, model_data, metrics=[accuracy]) ## NOTE: metrics=[accuracy] not needed - is default
Explanation: 2.1 Fast AI Learner
I'll use two fast.ai learners: the basic convnet defined above that the pytorch model will also use, and a resnet18. I'll also use an ImageNet-pretrained resnet18 to see if that helps at all. If .pretrained is not called, you will need to either use ConvnetBuilder or define a custom head yourself. NOTE also that the standard pytorch ResNet model has a 7x7 ouput pooling layer by default, which may restrict your model's performance if it's not replaced (such as with ConvnetBuilder).
The non-pretrained learner's will need their conv layers unfrozen to train them.
End of explanation
True in [[layer.trainable for layer in layer_group] for layer_group in resnet_learner.get_layer_groups()]
Explanation: 2.1.1 Aside: Layers
Again, the learners' conv layers are initially frozen:
End of explanation
[[layer.trainable for layer in layer_group] for layer_group in resnet_learner.get_layer_groups()]
Explanation: By default only the 'head' classification layer is trainable:
End of explanation
[[layer.trainable for layer in layer_group] for layer_group in custom_learner.get_layer_groups()]
custom_learner.models
resnet_learner.models
# custom_learner
# resnet_learner
# pt_res_learner
Explanation: Construct the custom learner with ConvnetBuilder in order to make it's layers iterable:
End of explanation
criterion = torch.nn.NLLLoss() # log_softmax already in arch; nll(log_softmax) <=> CE
optimizer = torch.optim.SGD(convnet.parameters(), lr=0.01, momentum=0.9)
Explanation: 2.1.2 Recap: Models
I'll be comparing 4 models:
1. convnet a 1-input channel custom CNN trained in straight PyTorch
2. custom_learner a 3-input channel custom CNN trained with Fast AI
3. resnet_learner a 3-input channel fresh ResNet18 trained with Fast AI
4. pt_res_learner a 3-input channel pretrained (ImageNet) ResNet18 trained with Fast AI.
Perhaps it'd be a good idea to replace the fresh ResNet18's input layer with a 1-channel input to compare it directly to the custom CNN. That's for a future run if I or anyone chooses to do so.
3. Loss Function
torch.nn.CrossEntropyLoss
Do nn.functional. loss functions go in the architecture, and nn. loss functions become criterion? Huh, interesting. It calls nn.functional..
End of explanation
custom_learner.crit
resnet_learner.crit
pt_res_learner.crit
Explanation: The Fast.ai Learners:
End of explanation
len(trainloader) # ceil(51,000 / bs) batches
Explanation: 4. Training
As far as I know, training in base PyTorch is tedious, so I'll do a sanity-check of it first, then do all my training with Fast AI. See ref: §4: Training or §9.1: Train ConvNet & ConvNetMod in this notebook.
There are ways to implement learning-rate scheduling and other advanced techniques in PyTorch – but by that point unless you're doing it for practice or testing a new module: that's what Fast.AI is for.
4.1 base PyTorch
End of explanation
optimizer
Explanation: There are more improvements to doing train / valid phases – including learning rate scheduling and automatically saving best weights (see: pytorch tutorial) – but that's what fast.ai's for. I'll practice those in the future. Also since the FastAI library is pending an update to PyTorch 0.4, torch.set_grad_enabled can't be used for inference mode. Instead I follow the advice on this pytorch forum thread. For now:
End of explanation
def train(model=None, crit=None, trainloader=None, valloader=None, num_epochs=1, verbose=True):
# if verbose:
# displays = 5
# display_step = max(len(dataloader) // displays, 1)
t0 = time.time()
dataloaders = {'train':trainloader}
if valloader: dataloaders['valid'] = valloader
# model.to('cuda:0' if torch.cuda.is_available() else 'cpu') # pytorch >= 0.4
to_gpu(model)
criterion = torch.nn.NLLLoss() # log_softmax already in arch; nll(log_softmax) <=> CE
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# epoch w/ train & val phases
for epoch in range(num_epochs):
print(f'Epoch {epoch+1}/{num_epochs}\n{"-"*10}')
for phase in dataloaders:
running_loss = 0.0
running_correct = 0
for i,datum in enumerate(dataloaders[phase]):
inputs, labels = datum
inputs, labels = torch.autograd.Variable(inputs), torch.autograd.Variable(labels)
# zero param gradients
optimizer.zero_grad()
# (forward) track history if train
# with torch.set_grad_enabled(phase=='train'): # pytorch >= 0.4
if phase == 'valid': # pytorch 3.1 #
inputs.volatile=True #
labels.volatile=True #
# send data to gpu
inputs, labels = to_gpu(inputs), to_gpu(labels) # pytorch < 0.4
outputs = model(inputs) #
loss = crit(outputs, labels) #
_, preds= torch.max(outputs, 1) # for accuracy metric
#
# backward & optimize if train #
if phase == 'train': #
loss.backward() #
optimizer.step() # indent for pytorch >= 0.4
# stats
# pdb.set_trace()
running_loss += loss.data[0]
running_correct += torch.sum(preds == V(labels.data)) # wrap in V; pytorch 3.1
epoch_loss = running_loss / len(dataloaders[phase])
# if phase == 'valid': pdb.set_trace()
epoch_acc = float(running_correct.double() / len(dataloaders[phase])) # ? pytorch 3.1 reqs float conversion?
# pdb.set_trace()
print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')
time_elapsed = time.time() - t0
print(f'Training Time {num_epochs} Epochs: {time_elapsed:.3f}s')
Explanation: NOTE 1 the criterion and optimizer need to be initialized after the model is sent to the GPU if it is. See pytorch thread.
NOTE 2: Variable.volatile = True can only be set immediately after a Variable is created. See pytorch thread. (this is for using a validation set and not affecting the gradients) – I got this error when trying to set .volatile=True after sending the val data to GPU (torch.FloatTensor $\rightarrow$ torch.cuda.FloatTensor)
End of explanation
train(model=convnet, crit=criterion, trainloader=trainloader, valloader=validloader)
Explanation: Manual PyTorch train / val training phases. See: pytorch tutorial
(forward) track history only if in train:
with torch.set_grad_enabled(False):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
NOTE: I think I'm doing something wrong with the validation phase. Saving. PyTorch Docs on Saving.
End of explanation
# train(model=convnet, crit=criterion, trainloader=trainloader, valloader=validloader)
torch.save(convnet.state_dict(), 'convnet_mnist_base.pth')
convnet.load_state_dict(torch.load('convnet_mnist_base.pth'))
Explanation: Previous run on CPU:
End of explanation
model_data.trn_ds.get1item(0)[1].dtype
custom_learner.lr_find()
custom_learner.sched.plot()
custom_learner.sched.plot_lr()
# next(iter(model_data.get_dl(model_data.trn_ds, False)))
resnet_learner.lr_find()
resnet_learner.sched.plot()
pt_res_learner.lr_find()
pt_res_learner.sched.plot()
Explanation: 4.2 with Fast AI
4.2.1 Finding Learning Rates
To keep things simple, I won't be using 1-Cycle, Progressive Resizing, or much in the way of Cyclical Learning Rates. That could be a topic for later runs.
End of explanation
lrs = 1e-2
Explanation: I'll use 1e-2 as the lr for all of them.
End of explanation
# checking all conv layers are being trained:
[layer.trainable for layer in custom_learner.models.get_layer_groups()]
%time custom_learner.fit(lrs, n_cycle=1, cycle_len=1, cycle_mult=1)
plot_metrics(custom_learner)
Explanation: 4.2.2 custom_learner
End of explanation
custom_learner.sched.plot_lr()
Explanation: 4.2.2.1 Aside: Fast.ai Automatic LR scaling:
Just noticed this very useful feature. Even at very stripped-down settings, Fastai still 'revs' the learning rate up during train-start and back down before train-end:
End of explanation
[layer[0].trainable for layer in resnet_learner.models.get_layer_groups()]
resnet_learner.unfreeze()
[layer[0].trainable for layer in resnet_learner.models.get_layer_groups()]
%time resnet_learner.fit(lrs, n_cycle=1, cycle_len=1, cycle_mult=1)
plot_metrics(resnet_learner)
Explanation: 4.2.3 resnet_learner
End of explanation
# only training classifier head
%time pt_res_learner.fit(lrs, n_cycle=1, cycle_len=1, cycle_mult=1)
# min(pt_res_learner.sched.losses)
pt_res_learner.sched.losses[-1]
pt_res_learner.sched.val_losses
plot_metrics(pt_res_learner)
Explanation: 4.2.4 pt_res_learner
End of explanation
x,y = next(iter(testloader)) # shape: ([64,1,28,28]; [64])
out = convnet(V(x)) # shape: ([64, 10])
_, preds = torch.max(out.data, 1)
list(zip(preds[:9], y[:9]))
Explanation: 5. Testing
5.0.1 PyTorch convnet
End of explanation
def test_pytorch(model, dataloader):
    """Evaluation script. Returns tuple: (list of predictions, ratio correct)."""
correct = 0
total = 0
predictions = []
for batch in dataloader:
images, labels = batch ## could also go w: testloader.dataset.test_labels
images, labels = to_gpu(images), to_gpu(labels)
        outputs = model(Variable(images)) # use the model argument, not the global convnet
_, preds = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (preds == labels).sum()
predictions.extend(preds)
return predictions, correct/total
preds, test_acc = test_pytorch(convnet, testloader)
test_acc
Explanation: Cool, even with that little training it's able to get a lot right.
End of explanation
_,y = next(iter(testloader))
list(zip(preds[:9], y[:9]))
Explanation: 97-98% accuracy on test set. Just checking:
End of explanation
# get output predictions
log_preds = custom_learner.predict(is_test=True)
# compare top-scoring preds against dataset
np.equal(model_data.test_dl.dataset.y, np.argmax(log_preds, axis=1)).sum() / model_data.test_ds.n
Explanation: 5.0.2 custom_learner
End of explanation
## 2-3 ways to do the same thing
# log_preds_dl = custom_learner.predict_dl(testloader) # make sure num channels correct before trying this; havent tested
log_preds_dl = custom_learner.predict_dl(model_data.test_dl)
log_preds = custom_learner.predict(is_test=True)
Explanation: Aside: (untrained) custom_learner Sanity Checks:
End of explanation
log_preds_dl.shape, log_preds.shape # same shape
np.unique(log_preds_dl == log_preds) # same values
np.equal(testloader.dataset.test_labels, np.argmax(log_preds, axis=1)).sum() / len(testloader.dataset.test_labels)
Explanation: I had some confusion. You do take the max as the top prediction; to get the actual probabilities, since it's a log softmax output, you exponentiate.
End of explanation
set(np.argmax(log_preds, axis=1)), np.argmax(log_preds, axis=1)
Explanation: Untrained CNN gets sub-random (< 10%) accuracy. No surprise, it only ever guesses '5', and sometimes '4':
End of explanation
log_preds = resnet_learner.predict(is_test=True)
np.equal(model_data.test_dl.dataset.y, np.argmax(log_preds, axis=1)).sum() / model_data.test_ds.n
Explanation: 5.0.3 resnet_learner
End of explanation
log_preds = pt_res_learner.predict(is_test=True)
np.equal(model_data.test_dl.dataset.y, np.argmax(log_preds, axis=1)).sum() / model_data.test_ds.n
Explanation: 5.0.4 pt_res_learner
End of explanation
# prev trn/val loss & valacc: 0.088194 0.068054 0.980333
%time custom_learner.fit(lrs, n_cycle=2, cycle_len=1, cycle_mult=1)
Explanation: Further Training & Testing
Seeing how far I can go (simply) before overfitting
End of explanation
class SaveValidationLoss(Callback):
def on_train_begin(self):
self.val_losses = []
def on_batch_end(self, metrics):
print(metrics)
# pdb.set_trace()
# self.val_losses.append(metrics[0])
def on_epoch_end(self, metrics):
        # pdb.set_trace()  # debugging breakpoint; keep commented out for normal runs
self.val_losses.append(metrics[0])
def plot(self):
plt.plot(list(range(len(self.val_losses))), self.val_losses)
save_val = SaveValidationLoss()
# custom_learner.save('tempcnn')
custom_learner.load('tempcnn')
%time custom_learner.fit(lrs, n_cycle=4, cycle_len=1, cycle_mult=1, callbacks=[save_val])
custom_learner.metrics
save_val.val_losses
custom_learner.sched.val_losses
%time custom_learner.fit(lrs, n_cycle=4, cycle_len=1, cycle_mult=1)
Explanation: Aside: Validation Loss Callback
Note: fastai callbacks tutorial.
I noticed only the last training session's losses are saved in learn.sched.losses and learn.sched.val_losses only holds the validation losses at the end of each epoch for the last training session. So I'll put together a callback to save validation losses and use that from here forward.
It would be very easy to have this automatically save the model at the best validation loss:
def on_epoch_end(self, metrics):
...
val_loss = metrics[0]
if val_loss < self.best_val_loss:
            self.best_val_loss = val_loss
self.learner.save(...)
...
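A fuller sketch of that idea (assuming, as above, that metrics[0] holds the validation loss and that the callback is handed the learner when it is constructed):
class SaveBestModel(Callback):
    def __init__(self, learner, name='best_model'):
        self.learner, self.name = learner, name
    def on_train_begin(self):
        self.best_val_loss = float('inf')
    def on_epoch_end(self, metrics):
        val_loss = metrics[0]
        if val_loss < self.best_val_loss:
            self.best_val_loss = val_loss
            self.learner.save(self.name)  # learner.save()/.load() as used with 'tempcnn' above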
End of explanation |
637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature preprocessing
Step1: Model training
Step2: Submission | Python Code:
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
def preprocess_anomaly(df):
df["full_sq"] = map(lambda x: x if x > 10 else float("NaN"), df["full_sq"])
df["life_sq"] = map(lambda x: x if x > 5 else float("NaN"), df["life_sq"])
df["kitch_sq"] = map(lambda x: x if x > 2 else float("NaN"), df["kitch_sq"])
# full_sq-life_sq<0 full_sq-kitch_sq<0 life_sq-kitch_sq<0 floor-max_floor<0
return df
def preprocess_categorial(df):
df = mess_y_categorial(df, 5)
df = df.select_dtypes(exclude=['object'])
return df
def apply_categorial(test, train):
test = mess_y_categorial_fold(test, train)
test = test.select_dtypes(exclude=['object'])
return test
def smoothed_likelihood(targ_mean, nrows, globalmean, alpha=10):
try:
return (targ_mean * nrows + globalmean * alpha) / (nrows + alpha)
except Exception:
return float("NaN")
def mess_y_categorial(df, nfolds=3, alpha=10):
from sklearn.utils import shuffle
from copy import copy
folds = np.array_split(shuffle(df), nfolds)
newfolds = []
for i in range(nfolds):
fold = folds[i]
other_folds = copy(folds)
other_folds.pop(i)
other_fold = pd.concat(other_folds)
newfolds.append(mess_y_categorial_fold(fold, other_fold, alpha=10))
return pd.concat(newfolds)
def mess_y_categorial_fold(fold_raw, other_fold, cols=None, y_col="price_doc", alpha=10):
fold = fold_raw.copy()
if not cols:
cols = list(fold.select_dtypes(include=["object"]).columns)
globalmean = other_fold[y_col].mean()
for c in cols:
target_mean = other_fold[[c, y_col]].groupby(c).mean().to_dict()[y_col]
nrows = other_fold[c].value_counts().to_dict()
fold[c + "_sll"] = fold[c].apply(
lambda x: smoothed_likelihood(target_mean.get(x), nrows.get(x), globalmean, alpha) if x else float("NaN")
)
return fold
def apply_macro(df):
macro_cols = [
'timestamp', "balance_trade", "balance_trade_growth", "eurrub", "average_provision_of_build_contract",
"micex_rgbi_tr", "micex_cbi_tr", "deposits_rate", "mortgage_value", "mortgage_rate",
"income_per_cap", "rent_price_4+room_bus", "museum_visitis_per_100_cap", "apartment_build"
]
return pd.merge(df, macro, on='timestamp', how='left')
def preprocess(df):
from sklearn.preprocessing import OneHotEncoder, FunctionTransformer
# df = apply_macro(df)
# df["timestamp_year"] = df["timestamp"].apply(lambda x: x.split("-")[0])
# df["timestamp_month"] = df["timestamp"].apply(lambda x: x.split("-")[1])
# df["timestamp_year_month"] = df["timestamp"].apply(lambda x: x.split("-")[0] + "-" + x.split("-")[1])
df = df.drop(["id", "timestamp"], axis=1)
ecology = ["no data", "poor", "satisfactory", "good", "excellent"]
df["ecology_index"] = map(ecology.index, df["ecology"].values)
bool_feats = [
"thermal_power_plant_raion",
"incineration_raion",
"oil_chemistry_raion",
"radiation_raion",
"railroad_terminal_raion",
"big_market_raion",
"nuclear_reactor_raion",
"detention_facility_raion",
"water_1line",
"big_road1_1line",
"railroad_1line",
"culture_objects_top_25"
]
for bf in bool_feats:
df[bf + "_bool"] = map(lambda x: x == "yes", df[bf].values)
df = preprocess_anomaly(df)
df['rel_floor'] = df['floor'] / df['max_floor'].astype(float)
df['rel_kitch_sq'] = df['kitch_sq'] / df['full_sq'].astype(float)
df['rel_life_sq'] = df['life_sq'] / df['full_sq'].astype(float)
df["material_cat"] = df.material.fillna(0).astype(int).astype(str).replace("0", "")
df["state_cat"] = df.state.fillna(0).astype(int).astype(str).replace("0", "")
# df["age_of_building"] = df["timestamp_year"].astype(float) - df["build_year"].astype(float)
df["num_room_cat"] = df.num_room.fillna(0).astype(int).astype(str).replace("0", "")
return df
# train_raw["price_doc"] = np.log1p(train_raw["price_doc"].values)
train_pr = preprocess(train_raw)
train = preprocess_categorial(train_pr)
train = train.fillna(-1)
X = train.drop(["price_doc"], axis=1)
y = train["price_doc"].values
Explanation: Feature preprocessing
End of explanation
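As a quick illustration of the smoothed-likelihood encoding defined above, here is a toy example (the frame and its category values are made up purely for illustration):
toy = pd.DataFrame({"sub_area": ["a", "a", "b", "c"], "price_doc": [1.0, 3.0, 2.0, 10.0]})
toy_enc = mess_y_categorial_fold(toy, toy, cols=["sub_area"], alpha=10)
toy_enc[["sub_area", "sub_area_sll"]]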
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X.values, y, test_size=0.20, random_state=43)
dtrain_all = xgb.DMatrix(X.values, y, feature_names=X.columns)
dtrain = xgb.DMatrix(X_train, y_train, feature_names=X.columns)
dval = xgb.DMatrix(X_val, y_val, feature_names=X.columns)
xgb_params = {
'max_depth': 5,
'n_estimators': 200,
'learning_rate': 0.01,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
# Uncomment to tune XGB `num_boost_rounds`
model = xgb.train(xgb_params, dtrain, num_boost_round=2000, evals=[(dval, 'val')],
early_stopping_rounds=40, verbose_eval=40)
num_boost_round = model.best_iteration
cv_output = xgb.cv(dict(xgb_params, silent=0), dtrain_all, num_boost_round=num_boost_round, verbose_eval=40)
cv_output[['train-rmse-mean', 'test-rmse-mean']].plot()
model = xgb.train(dict(xgb_params, silent=0), dtrain_all, num_boost_round=num_boost_round, verbose_eval=40)
print "predict-train:", rmse(model.predict(dtrain_all), y)
model = xgb.XGBRegressor(max_depth=5, n_estimators=100, learning_rate=0.01, nthread=-1, silent=False)
model.fit(X.values, y, verbose=20)
with open("scores.tsv", "a") as sf:
sf.write("%s\n" % rmsle(model.predict(X.values), y))
!tail scores.tsv
show_weights(model, feature_names=list(X.columns), importance_type="weight")
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
def validate(clf):
cval = np.abs(cross_val_score(clf, X.values, y, cv=3,
scoring=make_scorer(rmsle, False), verbose=2))
return np.mean(cval), cval
print validate(model)
Explanation: Model training
End of explanation
test = pd.read_csv("data/test.csv")
test_pr = preprocess(test)
test_pr = apply_categorial(test_pr, train_pr)
test_pr = test_pr.fillna(-1)
dtest = xgb.DMatrix(test_pr.values, feature_names=test_pr.columns)
y_pred = model.predict(dtest)
# y_pred = model.predict(test_pr.values)
# y_pred = np.exp(y_pred) - 1
submdf = pd.DataFrame({"id": test["id"], "price_doc": y_pred})
submdf.to_csv("data/submission.csv", header=True, index=False)
!head data/submission.csv
Explanation: Submission
End of explanation |
638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Nets for Digit Classification
by Khaled Nasr as a part of a <a href="https
Step1: Creating the network
To create a neural network in shogun, we'll first create an instance of NeuralNetwork and then initialize it by telling it how many inputs it has and what type of layers it contains. To specify the layers of the network a DynamicObjectArray is used. The array contains instances of NeuralLayer-based classes that determine the type of neurons each layer consists of. Some of the supported layer types are
Step2: We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). Each neuron will be connected to all neurons in the layer that precedes it.
Step3: Training
NeuralNetwork supports two methods for training
Step4: Training without regularization
We'll start by training the first network without regularization using LBFGS optimization. Note that LBFGS is suitable because we're using a small dataset.
Step5: Training with L2 regularization
We'll train another network, but with L2 regularization. This type of regularization attempts to prevent overfitting by penalizing large weights. This is done by adding $\frac{1}{2} \lambda \Vert W \Vert_2$ to the optimization objective that the network tries to minimize, where $\lambda$ is the regularization coefficient.
Step6: Training with L1 regularization
We'll now try L1 regularization. It works by adding $\lambda \Vert W \Vert_1$ to the optimization objective. This has the effect of penalizing all non-zero weights, therefore pushing all the weights to be close to 0.
Step7: Training with dropout
The idea behind dropout is very simple
Step8: Convolutional Neural Networks
Now we'll look at a different type of network, namely convolutional neural networks. A convolutional net operates on two principles
Step9: Now we can train the network. Like in the previous section, we'll use gradient descent with dropout and max-norm regularization
Step10: Evaluation
According to the accuracy on the validation set, the convolutional network works best in our case. Now we'll measure its performance on the test set
Step11: We can also look at some of the images and the network's response to each of them | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat
from shogun import features, MulticlassLabels, Math
# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1
# 1000 examples for training
Xtrain = features(Xall[:,0:1000])
Ytrain = MulticlassLabels(Yall[0:1000])
# 4000 examples for validation
Xval = features(Xall[:,1001:5001])
Yval = MulticlassLabels(Yall[1001:5001])
# the rest for testing
Xtest = features(Xall[:,5002:-1])
Ytest = MulticlassLabels(Yall[5002:-1])
# initialize the random number generator with a fixed seed, for repeatability
Math.init_random(10)
Explanation: Neural Nets for Digit Classification
by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn
This notebook illustrates how to use the NeuralNets module to teach a neural network to recognize digits. It also explores the different optimization and regularization methods supported by the module. Convolutional neural networks are also discussed.
Introduction
An Artificial Neural Network is a machine learning model that is inspired by the way biological nervous systems, such as the brain, process information. The building block of neural networks is called a neuron. All a neuron does is take a weighted sum of its inputs and pass it through some non-linear function (activation function) to produce its output. A (feed-forward) neural network is a bunch of neurons arranged in layers, where each neuron in layer i takes its input from all the neurons in layer i-1. For more information on how neural networks work, follow this link.
In this notebook, we'll look at how a neural network can be used to recognize digits. We'll train the network on the USPS dataset of handwritten digits.
We'll start by loading the data and dividing it into a training set, a validation set, and a test set. The USPS dataset has 9298 examples of handwritten digits. We'll intentionally use just a small portion (1000 examples) of the dataset for training. This is to keep training time small and to illustrate the effects of different regularization methods.
End of explanation
from shogun import NeuralNetwork, NeuralInputLayer, NeuralLogisticLayer, NeuralSoftmaxLayer
from shogun import DynamicObjectArray
# setup the layers
layers = DynamicObjectArray()
layers.append_element(NeuralInputLayer(256)) # input layer, 256 neurons
layers.append_element(NeuralLogisticLayer(256)) # first hidden layer, 256 neurons
layers.append_element(NeuralLogisticLayer(128)) # second hidden layer, 128 neurons
layers.append_element(NeuralSoftmaxLayer(10)) # output layer, 10 neurons
# create the networks
net_no_reg = NeuralNetwork(layers)
net_no_reg.quick_connect()
net_no_reg.initialize_neural_network()
net_l2 = NeuralNetwork(layers)
net_l2.quick_connect()
net_l2.initialize_neural_network()
net_l1 = NeuralNetwork(layers)
net_l1.quick_connect()
net_l1.initialize_neural_network()
net_dropout = NeuralNetwork(layers)
net_dropout.quick_connect()
net_dropout.initialize_neural_network()
Explanation: Creating the network
To create a neural network in shogun, we'll first create an instance of NeuralNetwork and then initialize it by telling it how many inputs it has and what type of layers it contains. To specify the layers of the network a DynamicObjectArray is used. The array contains instances of NeuralLayer-based classes that determine the type of neurons each layer consists of. Some of the supported layer types are: NeuralLinearLayer, NeuralLogisticLayer and
NeuralSoftmaxLayer.
We'll create a feed-forward, fully connected (every neuron is connected to all neurons in the layer below) neural network with 2 logistic hidden layers and a softmax output layer. The network will have 256 inputs, one for each pixel (16*16 image). The first hidden layer will have 256 neurons, the second will have 128 neurons, and the output layer will have 10 neurons, one for each digit class. Note that we're using a big network, compared with the size of the training set. This is to emphasize the effects of different regularization methods. We'll try training the network with:
No regularization
L2 regularization
L1 regularization
Dropout regularization
Therefore, we'll create 4 versions of the network, train each one of them differently, and then compare the results on the validation set.
End of explanation
# import networkx, install if necessary
try:
import networkx as nx
except ImportError:
import pip
pip.main(['install', '--user', 'networkx'])
import networkx as nx
G = nx.DiGraph()
pos = {}
for i in range(8):
pos['X'+str(i)] = (i,0) # 8 neurons in the input layer
pos['H'+str(i)] = (i,1) # 8 neurons in the first hidden layer
for j in range(8): G.add_edge('X'+str(j),'H'+str(i))
if i<4:
pos['U'+str(i)] = (i+2,2) # 4 neurons in the second hidden layer
for j in range(8): G.add_edge('H'+str(j),'U'+str(i))
if i<6:
pos['Y'+str(i)] = (i+1,3) # 6 neurons in the output layer
for j in range(4): G.add_edge('U'+str(j),'Y'+str(i))
nx.draw(G, pos, node_color='y', node_size=750)
Explanation: We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). Each neuron will be connected to all neurons in the layer that precedes it.
End of explanation
from shogun import MulticlassAccuracy
def compute_accuracy(net, X, Y):
predictions = net.apply_multiclass(X)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(predictions, Y)
return accuracy*100
Explanation: Training
NeuralNetwork supports two methods for training: LBFGS (default) and mini-batch gradient descent.
LBFGS is a full-batch optimization method: it looks at the entire training set each time before it changes the network's parameters. This makes it slow with large datasets. However, it works very well with small/medium size datasets and is very easy to use as it requires no parameter tuning.
Mini-batch Gradient Descent looks at only a small portion of the training set (a mini-batch) before each step, which makes it suitable for large datasets. However, it's a bit harder to use than LBFGS because it requires some tuning of its parameters (learning rate, learning rate decay, ...)
Training in NeuralNetwork stops when:
Number of epochs (iterations over the entire training set) exceeds max_num_epochs
The (percentage) difference in error between the current and previous iterations is smaller than epsilon, i.e. the error is no longer being reduced by training
To see all the options supported for training, check the documentation
We'll first write a small function to calculate the classification accuracy on the validation set, so that we can compare different models:
End of explanation
net_no_reg.put('epsilon', 1e-6)
net_no_reg.put('max_num_epochs', 600)
# uncomment this line to allow the training progress to be printed on the console
#from shogun import MSG_INFO; net_no_reg.io.put('loglevel', MSG_INFO)
net_no_reg.put('labels', Ytrain)
net_no_reg.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("Without regularization, accuracy on the validation set =", compute_accuracy(net_no_reg, Xval, Yval), "%")
Explanation: Training without regularization
We'll start by training the first network without regularization using LBFGS optimization. Note that LBFGS is suitable because we're using a small dataset.
End of explanation
# turn on L2 regularization
net_l2.put('l2_coefficient', 3e-4)
net_l2.put('epsilon', 1e-6)
net_l2.put('max_num_epochs', 600)
net_l2.put('labels', Ytrain)
net_l2.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With L2 regularization, accuracy on the validation set =", compute_accuracy(net_l2, Xval, Yval), "%")
Explanation: Training with L2 regularization
We'll train another network, but with L2 regularization. This type of regularization attempts to prevent overfitting by penalizing large weights. This is done by adding $\frac{1}{2} \lambda \Vert W \Vert_2$ to the optimization objective that the network tries to minimize, where $\lambda$ is the regularization coefficient.
End of explanation
# turn on L1 regularization
net_l1.put('l1_coefficient', 3e-5)
net_l1.put('epsilon', 1e-6)
net_l1.put('max_num_epochs', 600)
net_l1.put('labels', Ytrain)
net_l1.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With L1 regularization, accuracy on the validation set =", compute_accuracy(net_l1, Xval, Yval), "%")
Explanation: Training with L1 regularization
We'll now try L1 regularization. It works by adding $\lambda \Vert W \Vert_1$ to the optimization objective. This has the effect of penalizing all non-zero weights, therefore pushing all the weights to be close to 0.
End of explanation
from shogun import NNOM_GRADIENT_DESCENT
# set the dropout probability for neurons in the hidden layers
net_dropout.put('dropout_hidden', 0.5)
# set the dropout probability for the inputs
net_dropout.put('dropout_input', 0.2)
# limit the maximum incoming weights vector length for neurons
net_dropout.put('max_norm', 15)
net_dropout.put('epsilon', 1e-6)
net_dropout.put('max_num_epochs', 600)
# use gradient descent for optimization
net_dropout.put('optimization_method', NNOM_GRADIENT_DESCENT)
net_dropout.put('gd_learning_rate', 0.5)
net_dropout.put('gd_mini_batch_size', 100)
net_dropout.put('labels', Ytrain)
net_dropout.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With dropout, accuracy on the validation set =", compute_accuracy(net_dropout, Xval, Yval), "%")
Explanation: Training with dropout
The idea behind dropout is very simple: randomly ignore some neurons during each training iteration. When used on neurons in the hidden layers, it has the effect of forcing each neuron to learn to extract features that are useful in any context, regardless of what the other hidden neurons in its layer decide to do. Dropout can also be used on the inputs to the network by randomly omitting a small fraction of them during each iteration.
When using dropout, it's usually useful to limit the L2 norm of a neuron's incoming weights vector to some constant value.
Due to the stochastic nature of dropout, LBFGS optimization doesn't work well with it, therefore we'll use mini-batch gradient descent instead.
End of explanation
from shogun import NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR
# prepare the layers
layers_conv = DynamicObjectArray()
# input layer, a 16x16 image single channel image
layers_conv.append_element(NeuralInputLayer(16,16,1))
# the first convolutional layer: 10 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps
layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 10, 2, 2, 2, 2))
# the second convolutional layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))
# output layer
layers_conv.append_element(NeuralSoftmaxLayer(10))
# create and initialize the network
net_conv = NeuralNetwork(layers_conv)
net_conv.quick_connect()
net_conv.initialize_neural_network()
Explanation: Convolutional Neural Networks
Now we'll look at a different type of network, namely convolutional neural networks. A convolutional net operates on two principles:
Local connectivity: Convolutional nets work with inputs that have some sort of spacial structure, where the order of the inputs features matter, i.e images. Local connectivity means that each neuron will be connected only to a small neighbourhood of pixels.
Weight sharing: Different neurons use the same set of weights. This greatly reduces the number of free parameters, and therefore makes the optimization process easier and acts as a good regularizer.
With that in mind, each layer in a convolutional network consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. The convolution operation satisfies the local connectivity and the weight sharing constraints. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. This adds some translation invariance and improves the performance.
Convolutional nets in Shogun are handled through the CNeuralNetwork class along with the CNeuralConvolutionalLayer class. A CNeuralConvolutionalLayer represents a convolutional layer with multiple feature maps, optional max-pooling, and support for different types of activation functions
Now we'll create a convolutional neural network with two convolutional layers and a softmax output layer. We'll use the rectified linear activation function for the convolutional layers:
End of explanation
# 50% dropout in the input layer
net_conv.put('dropout_input', 0.5)
# max-norm regularization
net_conv.put('max_norm', 1.0)
# set gradient descent parameters
net_conv.put('optimization_method', NNOM_GRADIENT_DESCENT)
net_conv.put('gd_learning_rate', 0.01)
net_conv.put('gd_mini_batch_size', 100)
net_conv.put('epsilon', 0.0)
net_conv.put('max_num_epochs', 100)
# start training
net_conv.put('labels', Ytrain)
net_conv.train(Xtrain)
# compute accuracy on the validation set
print("With a convolutional network, accuracy on the validation set =", compute_accuracy(net_conv, Xval, Yval), "%")
Explanation: Now we can train the network. Like in the previous section, we'll use gradient descent with dropout and max-norm regularization:
End of explanation
print("Accuracy on the test set using the convolutional network =", compute_accuracy(net_conv, Xtest, Ytest), "%")
Explanation: Evaluation
According to the accuracy on the validation set, the convolutional network works best in our case. Now we'll measure its performance on the test set:
End of explanation
predictions = net_conv.apply_multiclass(Xtest)
_=figure(figsize=(10,12))
# plot some images, with the predicted label as the title of each image
# this code is borrowed from the KNN notebook by Chiyuan Zhang and Sören Sonnenburg
for i in range(100):
ax=subplot(10,10,i+1)
title(int(predictions[i]))
ax.imshow(Xtest[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax.set_xticks([])
ax.set_yticks([])
Explanation: We can also look at some of the images and the network's response to each of them:
End of explanation |
639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generic fit for approximating with a polynomial
Step1: Define a quality-of-fit measure, $\chi^2$
Step2: We want to minimize $\chi^2$. Take the derivative wrt the coefficients ($a_k$) and set it to zero. Will get $k$ equations. | Python Code:
f = Symbol('f') # Function to approximate
f_approx = Symbol('fbar') # Approximating function
w = Symbol('w') # weighting function
chi2 = Symbol('chi^2')
f, f_approx, w, chi2
M = Symbol('M', integer=True)
k = Symbol('k', integer=True,positive=True)
a = IndexedBase('a',(M,)) # coefficient
h = IndexedBase('h',(M,)) # basis function
ak = Symbol('a_k') # Temporary symbols to make some derivatives easier
hk = Symbol('h_k') # Basis function (function of r)
hj = Symbol('h_j')
r = Symbol('r',positive=True)
j = Symbol('j',integer=True)
poly_approx = Sum(a[k]*h[k],(k,0,M))
poly_approx_j = Sum(a[j]*h[j],(j,0,M)) # replace summation variable
poly_approx
Explanation: Generic fit for approximating with a polynomial
End of explanation
eq1 = Eq(chi2, Integral(w(r)*(f(r)-f_approx(r,ak))**2,r))
eq1
Explanation: Define a quality-of-fit measure, $\chi^2$
End of explanation
eq2 = Eq(0,diff(eq1.rhs, ak))
eq2
eq3 = Eq(diff(poly_approx,ak,evaluate=False), hk)
eq3
eq4 = Eq(0, Integral(eq2.rhs.args[0].subs(diff(f_approx(r,ak),ak),hk(r)),r))
eq4
eq5 = Eq(0, Integral(eq4.rhs.args[0].subs(f_approx(r,ak), poly_approx_j),r))
eq5
eq6 = Eq(0, Integral(-eq5.rhs.args[0]/2,r))
eq6
base7 = expand(eq6.rhs.args[0])
eq7 = Eq(Integral(-base7.args[1],r),Integral(base7.args[0],r))
eq7
int7 = eq7.lhs.args[0]
eq8 = Eq(Sum(a[j]*Integral(Mul(*int7.args[1:]),r),(j,0,M)), eq7.rhs)
eq8
Explanation: We want to minimize $\chi^2$. Take the derivative wrt the coefficients ($a_k$) and set it to zero. Will get $k$ equations.
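Written out, the condition for each $k$ is the usual weighted normal equation, which is what the manipulations above are building toward:
$$\sum_{j=0}^{M} a_j \int w(r)\, h_j(r)\, h_k(r)\, dr = \int w(r)\, f(r)\, h_k(r)\, dr$$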
End of explanation |
640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab
Step1: Some helper functions
Step2: Plotting the data
Step3: 2. Build a Linear Regression Model
Create a regression model and assign the weights to the array bmi_life_model.
Fit the model to the data.
Step4: 3. Predict using the Model
Predict using a BMI of 21.07931 and assign it to the variable laos_life_exp. | Python Code:
import numpy as np
import pandas as pd
# TODO: Load the data in Pandas
bmi_life_data = None
# Print the data
bmi_life_data
Explanation: Lab: Predicting Life Expectancy from BMI in Countries using Linear Regression
In this lab, you'll be working with data on the average life expectancy at birth and the average BMI for males across the world. The data comes from Gapminder.
The data file can be found in the "bmi_and_life_expectancy.csv" file. It includes three columns, containing the following data:
* Country – The country the person was born in.
* Life expectancy – The average life expectancy at birth for a person in that country.
* BMI – The mean BMI of males in that country.
You'll need to complete each of the following steps:
1. Load the data
2. Build a linear regression model
3. Predict using the model
1. Load and plot the data
The data is in the file called "bmi_and_life_expectancy.csv".
Use pandas read_csv to load the data into a dataframe.
Assign the dataframe to the variable bmi_life_data.
End of explanation
import matplotlib.pyplot as plt
x = np.array(bmi_life_data[["BMI"]])
y = np.array(bmi_life_data["Life expectancy"])
def draw_data(x, y):
for i in range(len(x)):
plt.scatter(x[i], y[i], color='blue', edgecolor='k')
plt.xlabel('BMI')
plt.ylabel('Life expectancy')
def display(m, b, color='g'):
r = np.arange(min(x), max(x), 0.1)
plt.plot(r, m*r+b, color)
Explanation: Some helper functions:
- One to plot the data.
- One to plot any line, given the slope $m$ and the y-intercept $b$.
End of explanation
draw_data(x, y)
plt.show()
Explanation: Plotting the data
End of explanation
epochs = 1000
learning_rate = 0.001
# TODO: Finish the code for this function
def linear_regression(x, y):
# Initialize m and b
m=1
b=0
# TODO: Use the square trick to update the weights
# and run it for a number of epochs
return(m, b)
m, b = linear_regression(x, y)
linear_regression(x,y)
draw_data(x, y)
display(m[0], b[0])
plt.show()
Explanation: 2. Build a Linear Regression Model
Create a regression model and assign the weights to the array bmi_life_model.
Fit the model to the data.
End of explanation
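For reference, one possible way to fill in the loop above using the square-trick update (just a sketch; the exact rule the lab expects may differ slightly, and the helper name below is made up):
def linear_regression_sketch(x, y):
    m, b = 1.0, 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            xi = float(xi)                    # each row of x is a 1-element array
            error = yi - (m * xi + b)
            m += learning_rate * xi * error   # square trick: nudge the line toward the point
            b += learning_rate * error
    return m, b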
# TODO: Write the prediction function
def predict(m, b, bmi):
pass
Explanation: 3. Predict using the Model
Predict using a BMI of 21.07931 and assign it to the variable laos_life_exp.
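One possible completion, again only a sketch: the prediction is the fitted line evaluated at the given BMI.
# def predict(m, b, bmi):
#     return m * bmi + b
# laos_life_exp = predict(m, b, 21.07931)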
End of explanation |
641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Software Engineering for Data Scientists
Sophisticated Data Manipulation
DATA 515 A
1. Python's Data Science Ecosystem
With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.
Python's Data Science Ecosystem
In addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python.
Some of the most important ones are
Step1: Lists in native Python
Let's create a list, a native Python object that we've used earlier today.
Step2: This list is one-dimensional, let's make it multidimensional!
Step3: How do we access the element 6, in the second row, third column, of the native Python list?
Step4: Converting to numpy Arrays
Step5: How do we access the element 6, in the second row, third column, of the numpy array?
Step6: How do we retrieve a slice of the array, array([[1, 2], [4, 5]])?
Step7: How do we retrieve the second column of the array?
Step8: 4. Introduction to Pandas DataFrames
What are the elements of a table?
Step9: What operations do we perform on tables?
Step10: Operations on a Pandas DataFrame
5. Manipulating Data with DataFrames
Downloading the data
Shell commands can be run from the notebook by preceding them with an exclamation point
Step11: uncomment this to download the data
Step12: Loading Data into a DataFrame
Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern
Step13: Now we can use the read_csv command to read the comma-separated-value data
Step14: The shape attribute shows us the number of elements
Step15: The columns attribute gives us the column names
The index attribute gives us the index names
The dtypes attribute gives the data types of each column
Step16: Sophisticated Data Manipulation
Here we'll cover some key features of manipulating data with pandas
Access columns by name using square-bracket indexing
Step17: Mathematical operations on columns happen element-wise
Step18: Columns can be created (or overwritten) with the assignment operator.
Let's create a tripminutes column with the number of minutes for each trip
More complicated mathematical operations can be done with tools in the numpy package
Step19: Or to break down rides by age
Step20: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off
Step21: We can explore other things as well
Step22: Group-by Operation
One of the killer features of the Pandas dataframe is the ability to do group-by operations.
You can visualize the group-by like this (image borrowed from the Python Data Science Handbook)
Step23: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
<data object>.groupby(<grouping values>).<aggregate>()
for example, we can group by gender and find the average of all numerical columns
Step24: It's also possible to index the grouped object like it is a dataframe
Step25: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data
Step26: Adjusting the Plot Style
Matplotlib has a number of plot styles you can use. For example, if you like R you might use the ggplot style
Step27: Other plot types
Pandas supports a range of other plotting types; you can find these by using the <TAB> autocomplete on the plot method
Step28: For example, we can create a histogram of trip durations | Python Code:
import numpy as np
Explanation: Software Engineering for Data Scientists
Sophisticated Data Manipulation
DATA 515 A
1. Python's Data Science Ecosystem
With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.
Python's Data Science Ecosystem
In addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python.
Some of the most important ones are:
numpy: Numerical Python
Numpy is short for "Numerical Python", and contains tools for efficient manipulation of arrays of data.
If you have used other computational tools like IDL or MatLab, Numpy should feel very familiar.
scipy: Scientific Python
Scipy is short for "Scientific Python", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more.
We will not look closely at Scipy today, but we will use its functionality later in the course.
pandas: Labeled Data Manipulation in Python
Pandas is short for "Panel Data", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a Data Frame.
If you've used the R statistical language (and in particular the so-called "Hadley Stack"), much of the functionality in Pandas should feel very familiar.
matplotlib: Visualization in Python
Matplotlib started out as a Matlab plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly).
2. Installation
Installing Pandas & friends
Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood) it is much easier to use a package manager like conda. All it takes is to run
$ conda install numpy scipy pandas matplotlib
and (so long as your conda setup is working) the packages will be downloaded and installed on your system.
3. Arrays and slicing in Numpy
End of explanation
my_list = [2, 5, 7, 8]
my_list
type(my_list)
Explanation: Lists in native Python
Let's create a list, a native Python object that we've used earlier today.
End of explanation
multi_list = [[1, 2, 3], [4, 5, 6]]
Explanation: This list is one-dimensional, let's make it multidimensional!
End of explanation
#
Explanation: How do we access the element 6, in the second row, third column, of the native Python list?
End of explanation
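One way to do this with the multi_list defined above:
multi_list[1][2]  # second row, third column -> 6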
my_array = np.array(my_list)
type(my_array)
my_array.dtype
multi_array = np.array([[1, 2, 3], [4, 5, 6]], np.int32)  # define it before inspecting it
multi_array.shape
Explanation: Converting to numpy Arrays
End of explanation
#
Explanation: How do we access the element 6, in the second row, third column, of the numpy array?
End of explanation
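One way, using numpy's multidimensional indexing on the multi_array defined above:
multi_array[1, 2]  # row 1, column 2 -> 6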
#
Explanation: How do we retrieve a slice of the array, array([[1, 2], [4, 5]])?
End of explanation
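One way, slicing the first two columns of every row:
multi_array[:, :2]  # -> array([[1, 2], [4, 5]])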
#
Explanation: How do we retrieve the second column of the array?
End of explanation
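One way, taking all rows of column 1:
multi_array[:, 1]  # -> array([2, 5])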
# Pandas DataFrames as table elements
import pandas as pd
Explanation: 4. Introduction to Pandas DataFrames
What are the elements of a table?
End of explanation
df = pd.DataFrame({'A': [1,2,3], 'B': [2, 4, 6], 'ccc': [1.0, 33, 4]})
df
sub_df = df[['A', 'ccc']]
sub_df
df['A'] + 2*df['B']
Explanation: What operations do we perform on tables?
End of explanation
!ls
Explanation: Operations on a Pandas DataFrame
5. Manipulating Data with DataFrames
Downloading the data
Shell commands can be run from the notebook by preceding them with an exclamation point:
End of explanation
!curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD
Explanation: uncomment this to download the data:
End of explanation
import pandas as pd
df = pd.read_csv('pronto.csv')
type(df)
len(df)
Explanation: Loading Data into a DataFrame
Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern:
End of explanation
df.head()
df.columns
df.index
smaller_df = df.loc[[1,4,6,7,9,34],:]
smaller_df.index
Explanation: Now we can use the read_csv command to read the comma-separated-value data:
Note: strings in Python can be defined either with double quotes or single quotes
Viewing Pandas Dataframes
The head() and tail() methods show us the first and last rows of the data
End of explanation
df.shape
Explanation: The shape attribute shows us the number of elements:
End of explanation
df.dtypes
Explanation: The columns attribute gives us the column names
The index attribute gives us the index names
The dtypes attribute gives the data types of each column:
End of explanation
df_small = df['stoptime']
type(df_small)
df_small.tolist()
Explanation: Sophisticated Data Manipulation
Here we'll cover some key features of manipulating data with pandas
Access columns by name using square-bracket indexing:
End of explanation
trip_duration_hours = df['tripduration']/3600
trip_duration_hours[:2]
trip_duration_hours.head()
df['trip_duration_hours'] = df['tripduration']/3600
del df['trip_duration_hours']
df.head()
df.loc[[0,1],:]
df_long_trips = df[df['tripduration'] >10000]
sel = df['tripduration'] > 10000
df_long_trips = df[sel]
df_long_trips
df[sel].shape
# Make a copy of a slice
df_subset = df[['starttime', 'stoptime']].copy()
df_subset['trip_hours'] = df['tripduration']/3600
Explanation: Mathematical operations on columns happen element-wise:
End of explanation
pd.value_counts(df["gender"])
Explanation: Columns can be created (or overwritten) with the assignment operator.
Let's create a tripminutes column with the number of minutes for each trip
More complicated mathematical operations can be done with tools in the numpy package:
Working with Times
One trick to know when working with columns of times is that Pandas DateTimeIndex provides a nice interface for working with columns of times.
For a dataset of this size, using pd.to_datetime and specifying the date format can make things much faster (from the strftime reference, we see that the pronto data has format "%m/%d/%Y %I:%M:%S %p").
(Note: you can also use infer_datetime_format=True in most cases to automatically infer the correct format, though due to a bug it doesn't work when AM/PM are present)
With it, we can extract the hour of the day, the day of the week, the month, and a wide range of other views of the time:
Simple Grouping of Data
The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations.
Value Counts
Pandas includes an array of useful functionality for manipulating and analyzing tabular data.
We'll take a look at two of these here.
The pandas.value_counts returns statistics on the unique values within each column.
We can use it, for example, to break down rides by gender:
End of explanation
pd.value_counts(2019 - df["birthyear"])
Explanation: Or to break down rides by age:
End of explanation
pd.value_counts(df["birthyear"], sort=False)
Explanation: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off:
End of explanation
#
Explanation: We can explore other things as well: day of week, hour of day, etc.
End of explanation
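One possible sketch, parsing the start times with the format noted above and counting rides by day of week:
times = pd.to_datetime(df['starttime'], format="%m/%d/%Y %I:%M:%S %p")
pd.value_counts(times.dt.dayofweek, sort=False)  # 0 = Monday ... 6 = Sunday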
df.head()
df_count = df.groupby(['from_station_id']).count()
df_count.head()
df_mean = df.groupby(['from_station_id']).mean()
df_mean.head()
dfgroup = df.groupby(['from_station_id'])
dfgroup.groups
Explanation: Group-by Operation
One of the killer features of the Pandas dataframe is the ability to do group-by operations.
You can visualize the group-by like this (image borrowed from the Python Data Science Handbook)
End of explanation
df.groupby('gender').mean()
Explanation: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
<data object>.groupby(<grouping values>).<aggregate>()
for example, we can group by gender and find the average of all numerical columns:
End of explanation
%matplotlib inline
Explanation: It's also possible to index the grouped object like it is a dataframe:
You can even group by multiple values: for example we can look at the trip duration by time of day and by gender:
The unstack() operation can help make sense of this type of multiply-grouped data. What this technically does is split a multiple-valued index into an index plus columns:
Visualizing data with pandas
Of course, looking at tables of data is not very intuitive.
Fortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots.
Whenever you do plotting in the IPython notebook, you will want to first run this magic command which configures the notebook to work well with plots:
End of explanation
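A sketch of the multiple-value group-by plus unstack() pattern described above (column names as used earlier in this notebook; the 'hour' column is created here just for illustration):
df['hour'] = pd.to_datetime(df['starttime'], format="%m/%d/%Y %I:%M:%S %p").dt.hour
by_gender_hour = df.groupby(['gender', 'hour'])['tripduration'].mean()
by_gender_hour.unstack(level=0).head()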
import matplotlib.pyplot as plt
df['tripduration'].hist()
Explanation: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data:
End of explanation
plt.style.use("ggplot")
Explanation: Adjusting the Plot Style
Matplotlib has a number of plot styles you can use. For example, if you like R you might use the ggplot style:
End of explanation
# df.plot.<TAB>   # in the notebook, type "df.plot." and press TAB to list the available plot types
Explanation: Other plot types
Pandas supports a range of other plotting types; you can find these by using the <TAB> autocomplete on the plot method:
End of explanation
# A script for creating a dataframe with counts of the occurrence of a columns' values
df_count = df.groupby('from_station_id').count()
df_count1 = df_count[['trip_id']]
df_count2 = df_count1.rename(columns={'trip_id': 'count'})
df_count2.head()
def make_table_count(df_arg, groupby_column):
df_count = df_arg.groupby(groupby_column).count()
    column_name = df_arg.columns[0]  # use the argument, not the global df
df_count1 = df_count[[column_name]]
df_count2 = df_count1.rename(columns={column_name: 'count'})
return df_count2
dff = make_table_count(df, 'from_station_id')
dff.head()
Explanation: For example, we can create a histogram of trip durations:
If you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() method of the resulting object:
Breakout: Exploring the Data
Make a plot of the total number of rides as a function of month of the year (You'll need to extract the month, use a groupby, and find the appropriate aggregation to count the number in each group).
Split this plot by gender. Do you see any seasonal ridership patterns by gender?
Split this plot by user type. Do you see any seasonal ridership patterns by usertype?
Repeat the above three steps, counting the number of rides by time of day rather than by month.
Are there any other interesting insights you can discover in the data using these tools?
Using Files
Writing and running python modules
Using python modules in your Jupyter Notebook
End of explanation |
642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
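For illustration only (the value below is a hypothetical placeholder, not the actual MOHC entry), a completed property cell keeps the DOC.set_id line and passes one of the listed valid choices to DOC.set_value:
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
DOC.set_value("Prognostic")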
End of explanation |
643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accessing Databases via Web APIs
Step1: 1. Constructing API GET Request
In the first place, we know that every call will require us to provide
Step2: You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we want articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term.
Step3: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request.
Step4: Now, we have a response object called r. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens.
Step5: Click on that link to see what it returns!
Challenge 1
Step6: Challenge 2
Step7: 2. Parsing the response text
We can read the content of the server’s response using .text
Step8: What you see here is JSON text, encoded as unicode text. JSON stands for "Javascript object notation." It has a very similar structure to a python dictionary -- both are built on key/value pairs. This makes it easy to convert JSON response to a python dictionary.
Step9: That looks intimidating! But it's really just a big dictionary. Let's see what keys we got in there.
Step10: That looks like what we want! Let's put that in its own variable.
Step11: 3. Putting everything together to get all the articles.
That's great. But we only have 10 items. The original response said we had 93 hits! Which means we have to make 93 /10, or 10 requests to get them all. Sounds like a job for a loop!
But first, let's review what we've done so far.
Step12: Challenge 3
Step13: 4. Formatting
Let's take another look at one of these documents.
Step14: This is all great, but it's pretty messy. What we’d really like to to have, eventually, is a CSV, with each row representing an article, and each column representing something about that article (header, date, etc). As we saw before, the best way to do this is to make a lsit of dictionaries, with each dictionary representing an article and each dictionary representing a field of metadata from that article (e.g. headline, date, etc.) We can do this with a custom function
Step15: Challenge 4 Collect more fields
Edit the function above so that we include the lead_paragraph and word_count fields.
HINT
Step16: 5. Exporting
We can now export the data to a CSV.
Step17: Capstone Challenge
Using what you learned, tell me if Chris' claim (i.e. that Duke Ellington has gotten more popular lately) holds water. | Python Code:
# Import required libraries
import requests
import json
from __future__ import division
import math
import csv
import matplotlib.pyplot as plt
Explanation: Accessing Databases via Web APIs
End of explanation
# set key
key="be8992a420bfd16cf65e8757f77a5403:8:44644296"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
Explanation: 1. Constructing API GET Request
In the first place, we know that every call will require us to provide:
a base URL for the API,
some authorization code or key, and
a format for the response.
So let's put store those in some variables.
Use the following demonstration keys for now, but in the future, get your own!
ef9055ba947dd842effe0ecf5e338af9:15:72340235
25e91a4f7ee4a54813dca78f474e45a0:15:73273810
e15cea455f73cc47d6d971667e09c31c:19:44644296
b931c838cdb745bbab0f213cfc16b7a5:12:44644296
1dc1475b6e7d5ff5a982804cc565cd0b:6:44644296
18046cd15e21e1b9996ddfb6dafbb578:4:44644296
be8992a420bfd16cf65e8757f77a5403:8:44644296
End of explanation
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key}
Explanation: You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we want articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term.
End of explanation
# make request
r = requests.get(base_url+response_format, params=search_params)
Explanation: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request.
End of explanation
print(r.url)
Explanation: Now, we have a response object called r. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens.
End of explanation
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key,
"begin_date": "20150101", # date must be in YYYYMMDD format
"end_date": "20151231"}
# Uncomment to test
r = requests.get(base_url+response_format, params=search_params)
print(r.url)
Explanation: Click on that link to see what it returns!
Challenge 1: Adding a date range
What if we only want to search within a particular date range? The NYT Article Api allows us to specify start and end dates.
Alter the search_params code above so that the request only searches for articles in the year 2015.
You're gonna need to look at the documentation here to see how to do this.
End of explanation
search_params["page"] = 0
# Uncomment to test
r = requests.get(base_url+response_format, params=search_params)
print(r.url)
Explanation: Challenge 2: Specifying a results page
The above will return the first 10 results. To get the next ten, you need to add a "page" parameter. Change the search parameters above to get the second 10 results.
End of explanation
# Inspect the content of the response, parsing the result as text
response_text= r.text
print(response_text[:1000])
Explanation: 2. Parsing the response text
We can read the content of the server’s response using .text
End of explanation
# Convert JSON response to a dictionary
data = json.loads(response_text)
# data
Explanation: What you see here is JSON text, encoded as unicode text. JSON stands for "Javascript object notation." It has a very similar structure to a python dictionary -- both are built on key/value pairs. This makes it easy to convert JSON response to a python dictionary.
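As a minimal, self-contained illustration (the literal string below is made up, not an API response), json.loads turns JSON text into the corresponding Python dictionary:
import json
sample = json.loads('{"status": "OK", "response": {"meta": {"hits": 93}}}')
print(type(sample))                          # <class 'dict'>
print(sample['response']['meta']['hits'])    # 93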
End of explanation
print(data.keys())
# this is boring
data['status']
# so is this
data['copyright']
# this is what we want!
# data['response']
data['response'].keys()
data['response']['meta']['hits']
# data['response']['docs']
type(data['response']['docs'])
Explanation: That looks intimidating! But it's really just a big dictionary. Let's see what keys we got in there.
End of explanation
docs = data['response']['docs']
docs[0]
Explanation: That looks like what we want! Let's put that in its own variable.
End of explanation
# set key
key="ef9055ba947dd842effe0ecf5e338af9:15:72340235"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key,
"begin_date":"20150101", # date must be in YYYYMMDD format
"end_date":"20151231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
print("number of hits: ", str(hits))
# get number of pages
pages = int(math.ceil(hits/10))
# make an empty list where we'll hold all of our docs for every page
all_docs = []
# now we're ready to loop through the pages
for i in range(pages):
print("collecting page", str(i))
# set the page parameter
search_params['page'] = i
# make request
r = requests.get(base_url+response_format, params=search_params)
# get text and convert to a dictionary
data=json.loads(r.text)
# get just the docs
docs = data['response']['docs']
# add those docs to the big list
all_docs = all_docs + docs
len(all_docs)
Explanation: 3. Putting everything together to get all the articles.
That's great. But we only have 10 items. The original response said we had 93 hits! Which means we have to make 93 /10, or 10 requests to get them all. Sounds like a job for a loop!
But first, let's review what we've done so far.
End of explanation
# DEFINE YOUR FUNCTION HERE
def get_api_data(term, year):
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":term,
"api-key":key,
"begin_date": str(year) + "0101", # date must be in YYYYMMDD format
"end_date":str(year) + "1231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
print("number of hits:", str(hits))
# get number of pages
pages = int(math.ceil(hits/10))
# make an empty list where we'll hold all of our docs for every page
all_docs = []
# now we're ready to loop through the pages
for i in range(pages):
print("collecting page", str(i))
# set the page parameter
search_params['page'] = i
# make request
r = requests.get(base_url+response_format, params=search_params)
# get text and convert to a dictionary
data=json.loads(r.text)
# get just the docs
docs = data['response']['docs']
# add those docs to the big list
all_docs = all_docs + docs
return(all_docs)
# uncomment to test
# get_api_data("Duke Ellington", 2014)
Explanation: Challenge 3: Make a function
Turn the code above into a function that inputs a search term and a year, and returns all the documents containing that search term in that year.
End of explanation
all_docs[0]
Explanation: 4. Formatting
Let's take another look at one of these documents.
End of explanation
def format_articles(unformatted_docs):
'''
This function takes in a list of documents returned by the NYT api
and parses the documents into a list of dictionaries,
with 'id', 'header', and 'date' keys
'''
formatted = []
for i in unformatted_docs:
dic = {}
dic['id'] = i['_id']
dic['headline'] = i['headline']['main']
dic['date'] = i['pub_date'][0:10] # cutting time of day.
formatted.append(dic)
return(formatted)
all_formatted = format_articles(all_docs)
all_formatted[:5]
Explanation: This is all great, but it's pretty messy. What we'd really like to have, eventually, is a CSV, with each row representing an article, and each column representing something about that article (header, date, etc). As we saw before, the best way to do this is to make a list of dictionaries, with each dictionary representing an article and each key/value pair representing a field of metadata from that article (e.g. headline, date, etc.) We can do this with a custom function:
End of explanation
def format_articles(unformatted_docs):
'''
This function takes in a list of documents returned by the NYT api
and parses the documents into a list of formated dictionaries,
with 'id', 'header', and 'date' keys
'''
formatted = []
for i in unformatted_docs:
dic = {}
dic['id'] = i['_id']
dic['headline'] = i['headline']['main']
dic['date'] = i['pub_date'][0:10] # cutting time of day.
if i['lead_paragraph']:
dic['lead_paragraph'] = i['lead_paragraph']
dic['word_count'] = i['word_count']
dic['keywords'] = [keyword['value'] for keyword in i['keywords']]
formatted.append(dic)
return(formatted)
# uncomment to test
all_formatted = format_articles(all_docs)
# all_formatted[:5]
Explanation: Challenge 4 Collect more fields
Edit the function above so that we include the lead_paragraph and word_count fields.
HINT: Some articles may not contain a lead_paragraph, in which case it'll throw an error if you try to address this value (which doesn't exist). You need to add a conditional statement that takes this into consideration, as the finished function above does with its check on lead_paragraph.
Advanced: Add another key that returns a list of keywords associated with the article.
End of explanation
keys = all_formatted[1]
# writing the rest
with open('all-formated.csv', 'w') as output_file:
dict_writer = csv.DictWriter(output_file, keys)
dict_writer.writeheader()
dict_writer.writerows(all_formatted)
Explanation: 5. Exporting
We can now export the data to a CSV.
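As an alternative sketch (pandas is not imported in this notebook, so this is an extra assumption), the same list of dictionaries can be written in one call, with any missing fields left blank:
import pandas as pd
pd.DataFrame(all_formatted).to_csv('all-formatted-pandas.csv', index=False)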
End of explanation
# for this challenge, we just need the number of hits.
def get_api_hits(term, year):
'''
returns an integer, the number of hits (or articles) mentioning the given term
in the given year
'''
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":term,
"api-key":key,
"begin_date": str(year) + "0101", # date must be in YYYYMMDD format
"end_date":str(year) + "1231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
return(hits)
get_api_hits("Duke Ellington", 2014)
# collect data
years = range(2005, 2016)
years
all_duke = []
for i in years:
all_duke.append(get_api_hits("Duke Ellington", i))
all_duke
%matplotlib inline
plt.plot(years, all_duke)
plt.axis([2005, 2015, 0, 200])
Explanation: Capstone Challenge
Using what you learned, tell me if Chris' claim (i.e. that Duke Ellington has gotten more popular lately) holds water.
End of explanation |
644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grove Temperature Sensor 1.2
This example shows how to use the Grove Temperature Sensor v1.2. You will also see how to plot a graph using matplotlib. The Grove Temperature sensor produces an analog signal, and requires an ADC.
A Grove Temperature sensor and Pynq Grove Adapter, or Pynq Shield is required. The Grove Temperature Sensor, Pynq Grove Adapter, and Grove I2C ADC are used for this example.
You can read a single value of temperature or read multiple values at regular intervals for a desired duration.
At the end of this notebook, a Python only solution with single-sample read functionality is provided.
1. Load overlay
Step1: 2. Read single temperature
This example shows on how to get a single temperature sample from the Grove TMP sensor.
The Grove ADC is assumed to be attached to the GR4 connector of the StickIt. The StickIt module is assumed to be plugged in the 1st PMOD labeled JB. The Grove TMP sensor is connected to the other connector of the Grove ADC.
Grove ADC provides a raw sample which is converted into resistance first and then converted into temperature.
Step2: 3. Start logging once every 100ms for 10 seconds
Executing the next cell will start logging the temperature sensor values every 100ms, and will run for 10s. You can try touch/hold the temperature sensor to vary the measured temperature.
You can vary the logging interval and the duration by changing the values 100 and 10 in the cell below. The raw samples are stored in the internal memory, and converted into temperature values.
Step6: 4. A Pure Python class to exercise the AXI IIC Controller inheriting from PMOD_IIC
This class is ported from http | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
Explanation: Grove Temperature Sensor 1.2
This example shows how to use the Grove Temperature Sensor v1.2. You will also see how to plot a graph using matplotlib. The Grove Temperature sensor produces an analog signal, and requires an ADC.
A Grove Temperature sensor and Pynq Grove Adapter, or Pynq Shield is required. The Grove Temperature Sensor, Pynq Grove Adapter, and Grove I2C ADC are used for this example.
You can read a single value of temperature or read multiple values at regular intervals for a desired duration.
At the end of this notebook, a Python only solution with single-sample read functionality is provided.
1. Load overlay
End of explanation
import math
from pynq.lib.pmod import Grove_TMP
from pynq.lib.pmod import PMOD_GROVE_G4
tmp = Grove_TMP(base.PMODB,PMOD_GROVE_G4)
temperature = tmp.read()
print(float("{0:.2f}".format(temperature)),'degree Celsius')
Explanation: 2. Read single temperature
This example shows how to get a single temperature sample from the Grove TMP sensor.
The Grove ADC is assumed to be attached to the GR4 connector of the StickIt. The StickIt module is assumed to be plugged in the 1st PMOD labeled JB. The Grove TMP sensor is connected to the other connector of the Grove ADC.
Grove ADC provides a raw sample which is converted into resistance first and then converted into temperature.
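As a sketch of that conversion, mirroring the arithmetic used by the pure-Python class at the end of this notebook (B value 4250 for the v1.2 sensor):
from math import log
def adc_to_celsius(val, b_value=4250):
    # 12-bit Grove ADC reading -> thermistor resistance ratio -> degrees Celsius
    R = 4095.0 / val - 1.0
    return 1.0 / (log(R) / b_value + 1.0 / 298.15) - 273.15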
End of explanation
import time
%matplotlib inline
import matplotlib.pyplot as plt
tmp.set_log_interval_ms(100)
tmp.start_log()
# Change input during this time
time.sleep(10)
tmp_log = tmp.get_log()
plt.plot(range(len(tmp_log)), tmp_log, 'ro')
plt.title('Grove Temperature Plot')
min_tmp_log = min(tmp_log)
max_tmp_log = max(tmp_log)
plt.axis([0, len(tmp_log), min_tmp_log, max_tmp_log])
plt.show()
Explanation: 3. Start logging once every 100ms for 10 seconds
Executing the next cell will start logging the temperature sensor values every 100ms, and will run for 10s. You can try touch/hold the temperature sensor to vary the measured temperature.
You can vary the logging interval and the duration by changing the values 100 and 10 in the cell below. The raw samples are stored in the internal memory, and converted into temperature values.
End of explanation
from time import sleep
from math import log
from pynq.lib.pmod import PMOD_GROVE_G3
from pynq.lib.pmod import PMOD_GROVE_G4
from pynq.lib import Pmod_IIC
class Python_Grove_TMP(Pmod_IIC):
This class controls the grove temperature sensor.
This class inherits from the PMODIIC class.
Attributes
----------
iop : _IOP
The _IOP object returned from the DevMode.
scl_pin : int
The SCL pin number.
sda_pin : int
The SDA pin number.
iic_addr : int
The IIC device address.
def __init__(self, pmod_id, gr_pins, model = 'v1.2'):
Return a new instance of a grove OLED object.
Parameters
----------
pmod_id : int
The PMOD ID (1, 2) corresponding to (PMODA, PMODB).
gr_pins: list
The group pins on Grove Adapter. G3 or G4 is valid.
model : string
Temperature sensor model (can be found on the device).
if gr_pins in [PMOD_GROVE_G3, PMOD_GROVE_G4]:
[scl_pin,sda_pin] = gr_pins
else:
raise ValueError("Valid group numbers are G3 and G4.")
# Each revision has its own B value
if model == 'v1.2':
# v1.2 uses thermistor NCP18WF104F03RC
self.bValue = 4250
elif model == 'v1.1':
# v1.1 uses thermistor NCP18WF104F03RC
self.bValue = 4250
else:
# v1.0 uses thermistor TTC3A103*39H
self.bValue = 3975
super().__init__(pmod_id, scl_pin, sda_pin, 0x50)
# Initialize the Grove ADC
self.send([0x2,0x20]);
def read(self):
Read temperature in Celsius from grove temperature sensor.
Parameters
----------
None
Returns
-------
float
Temperature reading in Celsius.
val = self._read_grove_adc()
R = 4095.0/val - 1.0
temp = 1.0/(log(R)/self.bValue + 1/298.15)-273.15
return temp
def _read_grove_adc(self):
self.send([0])
bytes = self.receive(2)
return 2*(((bytes[0] & 0x0f) << 8) | bytes[1])
from pynq import PL
# Flush IOP state
PL.reset()
py_tmp = Python_Grove_TMP(base.PMODB, PMOD_GROVE_G4)
temperature = py_tmp.read()
print(float("{0:.2f}".format(temperature)),'degree Celsius')
Explanation: 4. A Pure Python class to exercise the AXI IIC Controller inheriting from PMOD_IIC
This class is ported from http://wiki.seeedstudio.com/Grove-Temperature_Sensor/
End of explanation |
645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: repeat(1,10000)重复一次,每次10000遍
Step2: 这个就没有重复多少次,就一次,一次10000遍
注意这里timeit.timeit的函数声明,第三个参数是timer,为了避过该参数,到了number这里,显式的声明该参数 | Python Code:
setup_sum='sum=0'
run_sum=
for i in range(1,1000):
if i % 3 ==0:
sum = sum + i
print(timeit.Timer(run_sum, setup="sum=0").repeat(1,10000))
Explanation: repeat(1,10000)重复一次,每次10000遍
End of explanation
t=timeit.timeit(run_sum,setup_sum,number=10000)
print("Time for built-in sum(): {}".format(t))
start=time.time()
sum=0
for i in range(1,10000):
if i % 3==0:
sum+=i
end=time.time()
print("Time for trading way to count the time is %f"%(end-start))
Explanation: This one does not use many repetitions: just a single pass of 10000 runs of the statement.
Note the signature of timeit.timeit here: its third positional parameter is timer, so the number argument is passed explicitly as a keyword to skip over it.
End of explanation |
646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contexts
The Vcsn platform relies on a central concept
Step1: If instead of a simple accepter that returns "yes" or "no", you want to compute an integer, work in $\mathbb{Z}$
Step2: To use words on the usual alphabet as labels
Step3: $k$-tape Automata
To create a "classical" two-tape automaton
Step4: Multiple Weights
To compute a Boolean and an integer
Step5: The following automaton is almost able to recognize $a^nb^n$
Step6: Boss
The interpretation of the following monster is left to the reader as an exercise | Python Code:
import vcsn
vcsn.context('lal<char(abc)>, b')
Explanation: Contexts
The Vcsn platform relies on a central concept: "contexts". They denote typing information about automata, rational expressions, etc. This information is alike a function type: an input type (the label), and an output type (the weight).
Contexts are created by the vcsn.context function which takes a string as input. This string follows the following syntax:
<context> ::= <labelset> , <weightset>
i.e., a context name is composed of a labelset name, then a comma, then a weightset name.
Labelsets
Different LabelSets model multiple variations on labels,
members of a monoid:
letterset< genset ><br>
Fully defined by an alphabet $A$, its labels being just
letters. It is simply denoted by $A$. It corresponds to the usual
definition of an NFA.
nullableset< labelset ><br>
Denoted by $A^?$, also defined by an alphabet $A$, its
labels being either letters or the empty word. This corresponds to what
is often called $\varepsilon$-NFAs.
wordset< genset > <br>
Denoted by $A^*$, also defined by an alphabet $A$, its labels
being (possibly empty) words on this alphabet.
oneset<br>
Denoted by ${1}$, containing a single label: 1, the empty word.
tupleset< labelset1 , labelset2 , ..., labelsetn > <br>
Cartesian product of LabelSets, $L_1 \times \cdots \times
L_n$. This type implements the concept of transducers with an arbitrary
number of "tapes". The concept is developed more in-depth here: Transducers.
Gensets
The gensets define the types of the letters, and sets of the valid letters. There is currently a single genset type.
char_letters<br>
Specify that the letters are implemented as char. Any char will be accepted. The genset is said to be "open".
char_letters(abc...)<br>
Specify that the letters are implemented as char, and the genset is closed to {a, b, c}. Any other char will be rejected.
Abbreviations for Labelsets
There are a few abbreviations that are accepted.
lal_char: letterset<char_letters>
lal_char(abc): letterset<char_letters(abc)>
lan_char: nullableset<letterset<char_letters>>
law_char: wordset<letterset<char_letters>>
Weightsets
The WeightSets define the semiring of the weights. Builtin weights include:
b <br/>
The classical Booleans: $\langle \mathbb{B}, \vee, \wedge, \bot, \top \rangle$
z <br/>
The integers coded as ints: $\langle \mathbb{Z}, +, \times, 0, 1 \rangle$
q<br/>
The rationals, coded as pairs of ints: $\langle \mathbb{Q}, +, \times, 0, 1 \rangle$
qmp<br/>
The rationals, with support for multiprecision: $\langle \mathbb{Q}_\text{mp}, +, \times, 0, 1 \rangle$
r <br/>
The reals, coded as doubles: $\langle \mathbb{R}, +, \times, 0, 1 \rangle$
nmin <br/>
The tropical semiring, coded as unsigned ints: $\langle \mathbb{N} \cup {\infty}, \min, +, \infty, 0 \rangle$
zmin <br/>
The tropical semiring, coded as ints: $\langle \mathbb{Z} \cup {\infty}, \min, +, \infty, 0 \rangle$
rmin <br/>
The tropical semiring, coded as floats: $\langle \mathbb{R} \cup {\infty}, \min, +, \infty, 0 \rangle$
log <br/>
The log semiring, coded as doubles: $\langle \mathbb{R} \cup {-\infty, +\infty}, \oplus_\mathrm{log}, +, +\infty, 0 \rangle$ (where $\oplus_\mathrm{log}$ denotes $x, y \rightarrow - \mathrm{log}(\exp(-x) + \exp(-y))$.
f2<br/>
The field: $\langle \mathbb{F}_2, \oplus, \wedge, 0, 1 \rangle$ (where $\oplus$ denotes the "exclusive or").
tupleset<br/>
Cartesian product of WeightSets, $W_1 \times \cdots \times W_n$.
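As a small sketch of the labelset abbreviations listed earlier (assuming the same vcsn module is available), the short and long spellings should build the same context:
import vcsn
vcsn.context('lal_char(abc), b')   # abbreviated labelset name
vcsn.context('lal<char(abc)>, b')  # same context, spelled as in the first example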
Examples
The usual framework for automaton is to use letters as labels, and Booleans as weights:
End of explanation
vcsn.context('lal<char(abc)>, z')
Explanation: If instead of a simple accepter that returns "yes" or "no", you want to compute an integer, work in $\mathbb{Z}$:
End of explanation
vcsn.context('law<char(a-z)>, z')
Explanation: To use words on the usual alphabet as labels:
End of explanation
vcsn.context('lat<lal<char(a-f)>, lal<char(A-F)>>, b')
Explanation: $k$-tape Automata
To create a "classical" two-tape automaton:
End of explanation
vcsn.context('lal<char(ab)>, lat<b, z>')
Explanation: Multiple Weights
To compute a Boolean and an integer:
End of explanation
zmin2 = vcsn.context('lal<char(ab)>, lat<zmin, zmin>')
zmin2
ab = zmin2.expression('(<1,0>a)*(<0,0>b)* & (<0,0>a)*(<0,1>b)*')
ab
a = ab.automaton()
a
print(a.shortest(len = 4).format('list'))
Explanation: The following automaton is almost able to recognize $a^nb^n$: it accepts only words of $a^nb^m$ (aka $a^b^$) and return $(n, m)$. One still has to check that $n = m$.
End of explanation
vcsn.context('''nullableset< lat< lal<char(ba)>,
lat< lan<char(vu)>, law<char(x-z)> >
>
>
,
lat<expressionset<nullableset<lat<lan<char(fe)>, lan<char(hg)>>>,
lat<r, q>>,
lat<b, z>
>
''')
Explanation: Boss
The interpretation of the following monster is left to the reader as an exercise:
End of explanation |
647 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
plot line charts between two arrays k and l
| Python Code::
import matplotlib.pyplot as plt
plt.plot(k,l)
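# A runnable sketch with assumed sample data (k and l are not defined in the snippet above):
import matplotlib.pyplot as plt
k = [1, 2, 3, 4]
l = [10, 20, 25, 40]
plt.plot(k, l)
plt.show()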
|
648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Properties of Rectangular Waveguide
Introduction
This example demonstrates how to use scikit-rf to calculate some properties of rectangular waveguide. For more information regarding the theoretical basis for these calculations, see the References.
Object Creation
This first section imports necessary modules and creates several RectangularWaveguide objects for some standard waveguide bands.
Step1: Conductor Loss
Step2: Phase Velocity
Step3: Propagation Constant | Python Code:
%matplotlib inline
import skrf as rf
rf.stylely()
# imports
from scipy.constants import mil,c
from skrf.media import RectangularWaveguide, Freespace
from skrf.frequency import Frequency
import matplotlib as mpl
# plot formatting
mpl.rcParams['lines.linewidth'] = 2
# create frequency objects for standard bands
f_wr5p1 = Frequency(140,220,1001, 'ghz')
f_wr3p4 = Frequency(220,330,1001, 'ghz')
f_wr2p2 = Frequency(330,500,1001, 'ghz')
f_wr1p5 = Frequency(500,750,1001, 'ghz')
f_wr1 = Frequency(750,1100,1001, 'ghz')
# create rectangular waveguide objects
wr5p1 = RectangularWaveguide(f_wr5p1.copy(), a=51*mil, b=25.5*mil, rho = 'au')
wr3p4 = RectangularWaveguide(f_wr3p4.copy(), a=34*mil, b=17*mil, rho = 'au')
wr2p2 = RectangularWaveguide(f_wr2p2.copy(), a=22*mil, b=11*mil, rho = 'au')
wr1p5 = RectangularWaveguide(f_wr1p5.copy(), a=15*mil, b=7.5*mil, rho = 'au')
wr1 = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil, rho = 'au')
# add names to waveguide objects for use in plot legends
wr5p1.name = 'WR-5.1'
wr3p4.name = 'WR-3.4'
wr2p2.name = 'WR-2.2'
wr1p5.name = 'WR-1.5'
wr1.name = 'WR-1.0'
# create a list to iterate through
wg_list = [wr5p1, wr3p4,wr2p2,wr1p5,wr1]
# create a freespace object too
freespace = Freespace(Frequency(125,1100, 1001))
freespace.name = 'Free Space'
Explanation: Properties of Rectangular Waveguide
Introduction
This example demonstrates how to use scikit-rf to calculate some properties of rectangular waveguide. For more information regarding the theoretical basis for these calculations, see the References.
Object Creation
This first section imports necessary modules and creates several RectangularWaveguide objects for some standard waveguide bands.
End of explanation
from pylab import *
for wg in wg_list:
wg.frequency.plot(rf.np_2_db(wg.alpha), label=wg.name )
legend()
xlabel('Frequency(GHz)')
ylabel('Loss (dB/m)')
title('Loss in Rectangular Waveguide (Au)');
xlim(100,1300)
resistivity_list = linspace(1,10,5)*1e-8 # ohm meter
for rho in resistivity_list:
wg = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil,
rho = rho)
wg.frequency.plot(rf.np_2_db(wg.alpha),label=r'$ \rho $=%.e$ \Omega m$'%rho )
legend()
#ylim(.0,20)
xlabel('Frequency(GHz)')
ylabel('Loss (dB/m)')
title('Loss vs. Resistivity in\nWR-1.0 Rectangular Waveguide');
Explanation: Conductor Loss
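A note on units (inferred from the code and axis labels rather than stated in the original): wg.alpha is the attenuation constant in nepers per metre, and rf.np_2_db rescales it by $20/\ln 10 \approx 8.686$, which is why the y-axis reads dB/m.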
End of explanation
for wg in wg_list:
wg.frequency.plot(100*wg.v_p.real/c, label=wg.name )
legend()
ylim(50,200)
xlabel('Frequency(GHz)')
ylabel('Phase Velocity (\%c)')
title('Phase Velocity in Rectangular Waveguide');
for wg in wg_list:
plt.plot(wg.frequency.f_scaled[1:],
100/c*diff(wg.frequency.w)/diff(wg.beta),
label=wg.name )
legend()
ylim(50,100)
xlabel('Frequency(GHz)')
ylabel('Group Velocity (\%c)')
title('Group Velocity in Rectangular Waveguide');
Explanation: Phase Velocity
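For reference (these relations are implicit in the code above rather than stated there): the first plot shows the phase velocity $v_p = \omega/\beta$ via wg.v_p, while the second approximates the group velocity $v_g = d\omega/d\beta$ with the finite difference diff(w)/diff(beta).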
End of explanation
for wg in wg_list+[freespace]:
wg.frequency.plot(wg.beta, label=wg.name )
legend()
xlabel('Frequency(GHz)')
ylabel('Propagation Constant (rad/m)')
title('Propagation Constant \nin Rectangular Waveguide');
semilogy();
Explanation: Propagation Constant
End of explanation |
649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Module 2
Step1: As the tide moves through the Strait, it creates a change in the elevation of the water surface. Below we'll cycle through a tidal cycle and look at how the tide moves through the Strait. Use the slider to move through the time series and look how the measured tide at a station relates to the other stations, and its effect on the water elevation.
Step2: Take a look at the time series for each station. It looks like a wave. In fact, the tide is a wave. That wave propogates through the Strait, starting at Neah Bay and travelling to Port Townsend. This is reflected in the elevation, as the peak elevation moves from one station to the following station.
# Module 2 Quiz | Python Code:
import tydal.module2_utils as tide
import tydal.quiz2
stationmap = tide.add_station_maps()
stationmap
Explanation: Module 2: Tides in the Puget Sound
Learning Objectives
I. Tidal Movement
II. Tidal Cycle and Connection to Sea Surface Elevation
Let's take a closer look at the movement of tides through the Strait of Juan de Fuca. We'll be using the tidal stations at Neah Bay, Port Angeles, and Port Townsend. Their tidal data and locations can be found at NOAA Tides and Currents webpage.
Below, we plotted the locations of the three tidal stations in the Strait of Juan de Fuca.
From west to east: Neah Bay, Port Angeles, and Port Townsend.
End of explanation
NeahBay = tide.load_Neah_Bay('Data/')
PortAngeles = tide.load_Port_Angeles('Data/')
PortTownsend = tide.load_Port_Townsend('Data/')
Tides = tide.create_tide_dataset(NeahBay,PortAngeles,PortTownsend)
%matplotlib inline
tide.plot_tide_data(Tides,'2016-10-01','2016-10-02')
Explanation: As the tide moves through the Strait, it creates a change in the elevation of the water surface. Below we'll cycle through a tidal cycle and look at how the tide moves through the Strait. Use the slider to move through the time series and look at how the measured tide at a station relates to the other stations, and its effect on the water elevation.
End of explanation
quiz2.quiz()
Explanation: Take a look at the time series for each station. It looks like a wave. In fact, the tide is a wave. That wave propagates through the Strait, starting at Neah Bay and travelling to Port Townsend. This is reflected in the elevation, as the peak elevation moves from one station to the following station.
# Module 2 Quiz
End of explanation |
650 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a table of measured values for a quantity that depends on two parameters. So say I have a function fuelConsumption(speed, temperature), for which data on a mesh are known. | Problem:
import numpy as np
import scipy.interpolate
s = np.linspace(-1, 1, 50)                            # query values for the first parameter (e.g. speed)
t = np.linspace(-2, 0, 50)                            # query values for the second parameter (e.g. temperature)
x, y = np.ogrid[-1:1:10j,-2:0:10j]                    # 10 x 10 mesh on which the quantity was measured
z = (x + y)*np.exp(-6.0 * (x * x + y * y))            # example measured values on that mesh
spl = scipy.interpolate.RectBivariateSpline(x, y, z)  # fit a smooth bivariate spline to the gridded data
result = spl(s, t, grid=False) |
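Once the spline is fitted it can also be queried at a single (speed, temperature) style point with ev, or on a regular grid by calling it directly; the query points below are arbitrary examples within the fitted range.
point_value = spl.ev(0.25, -1.0)                                   # value at one (x, y) location
grid_values = spl(np.linspace(-1, 1, 5), np.linspace(-2, 0, 5))    # 5 x 5 grid of interpolated values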
651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data were munged here.
Step1: <h5>First
Step2: <h3>When did River Grove open, when did the last owners take over, and how many companies have owned the facility?</h3>
Step3: <h3>How many visible complaints have there been under the current ownership?</h3>
Step4: <h3>How many online complaints have there been under previous ownership?</h3>
Step5: <h3>How many complaints occurred in the two years before the current owners took over?</h3>
Step6: <h3>What are the names River Grove is listed under on the public-facing website, and in what historical order?</h3> | Python Code:
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
df = pd.read_csv('../../data/processed/complaints-3-29-scrape.csv')
owners = pd.read_csv('../../data/raw/APD_HistOwner.csv')
Explanation: Data were munged here.
End of explanation
owners.rename(columns={'HOW_IdNumber':'owner_id','HOW_CcmuNumber': 'fac_ccmunumber', 'HOW_DateActive':'license_date'}, inplace=True)
owners['license_date'] = pd.to_datetime(owners['license_date'])
owners = owners[['fac_ccmunumber','license_date','owner_id']]
Explanation: <h5>First: Prep ownership history table</h5>
End of explanation
#Last ownership change
owners[owners['fac_ccmunumber']=='50M132']
Explanation: <h3>When did River Grove open, when did the last owners take over, and how many companies have owned the facility?</h3>
End of explanation
#Slice of public River Grove complaints
rg = df[(df['facility_id']=='50M132') & (df['public']=='online')]
rg[rg['incident_date']>='2015-04-01'].count()[0]
Explanation: <h3>How many visible complaints have there been under the current ownership?</h3>
End of explanation
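The counts above hard-code the April 2015 ownership change. As a more general sketch, each complaint can be attributed to whichever owner held the license at the time with pandas.merge_asof; this assumes the column names already used in this notebook (facility_id and incident_date in the complaints table, fac_ccmunumber and license_date in the owners table) and that incident_date parses as a date.
complaints = df[df['facility_id'] == '50M132'].copy()
complaints['incident_date'] = pd.to_datetime(complaints['incident_date'])
attributed = pd.merge_asof(
    complaints.sort_values('incident_date'),
    owners.sort_values('license_date'),
    left_on='incident_date', right_on='license_date',
    left_by='facility_id', right_by='fac_ccmunumber')
attributed.groupby('owner_id').size()   # complaint counts per owner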
rg[rg['incident_date']<'2015-04-01'].count()[0]
Explanation: <h3>How many online complaints have there been under previous ownership?</h3>
End of explanation
rg[(rg['incident_date']<'2015-04-01') & (rg['incident_date']>'2013-04-01')].count()[0]
Explanation: <h3>How many complaints occurred in the two years before the current owners took over?</h3>
End of explanation
rg[['incident_date','online_fac_name']].drop_duplicates(subset='online_fac_name', keep='first').sort_values('incident_date', ascending=False)
Explanation: <h3>What are the names River Grove is listed under on the public-facing website, and in what historical order?</h3>
End of explanation |
652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Network Tour of Data Science
Pierre Vandergheynst, Full Professor, and Michaël Defferrard, PhD student, EPFL LTS2.
Exercise 5
Step1: 1 Graph
Goal
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: 2 Fourier Basis
Compute the eigendecomposition $L=U \Lambda U^t$ of the Laplacian, where $\Lambda$ is the diagonal matrix of eigenvalues $\Lambda_{\ell\ell} = \lambda_\ell$ and $U = [u_1, \ldots, u_n]^t$ is the graph Fourier basis.
Hint
Step8: Visualize the eigenvectors $u_\ell$ corresponding to the first eight non-zero eigenvalues $\lambda_\ell$.
Can you explain what you observe and relate it to the structure of the graph ?
Hint
Step9: 3 Graph Signals
Let $f(u)$ be a positive and non-increasing function of $u$.
Compute the graph signal $x$ whose graph Fourier transform satisfies $\hat{x}(\ell) = f(\lambda_\ell)$.
Visualize the result.
Can you interpret it ? How does the choice of $f$ influence the result ? | Python Code:
import numpy as np
import scipy.spatial
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: A Network Tour of Data Science
Pierre Vandergheynst, Full Professor, and Michaël Defferrard, PhD student, EPFL LTS2.
Exercise 5: Graph Signals and Fourier Transform
The goal of this exercise is to experiment with the notions of graph signals, graph Fourier transform and smoothness and illustrate these concepts in the light of clustering.
End of explanation
d = 2 # Dimensionality.
n = 100 # Number of samples.
c = 2 # Number of communities.
# Data matrix, structured in communities.
X = np.random.normal(0, 1, (n, d))
X += np.linspace(0, 2, c).repeat(n//c)[:, np.newaxis]
fig, ax = plt.subplots(1, 1, squeeze=True)
ax.scatter(X[:n//c, 0], X[:n//c, 1], c='b', s=40, linewidths=0, label='class 0');
ax.scatter(X[n//c:, 0], X[n//c:, 1], c='r', s=40, linewidths=0, label='class 1');
lim1 = X.min() - 0.5
lim2 = X.max() + 0.5
ax.set_xlim(lim1, lim2)
ax.set_ylim(lim1, lim2)
ax.set_aspect('equal')
ax.legend(loc='upper left');
Explanation: 1 Graph
Goal: compute the combinatorial Laplacian $L$ of a graph formed with $c=2$ clusters.
Step 1: construct and visualize a fabricated data matrix $X = [x_1, \ldots, x_n]^t \in \mathbb{R}^{n \times d}$ whose lines are $n$ samples embedded in a $d$-dimensional Euclidean space.
End of explanation
# Pairwise distances.
dist = YOUR CODE
plt.figure(figsize=(15, 5))
plt.hist(dist.flatten(), bins=40);
Explanation: Step 2: compute all $n^2$ pairwise euclidean distances $\operatorname{dist}(i, j) = \|x_i - x_j\|_2$.
Hint: you may use the function scipy.spatial.distance.pdist() and scipy.spatial.distance.squareform().
End of explanation
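One possible way to fill in the placeholder above, following the hint (an illustrative solution, not the official one):
dist = scipy.spatial.distance.squareform(scipy.spatial.distance.pdist(X, 'euclidean'))
assert dist.shape == (n, n)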
k = 10 # Minimum number of edges per node.
dist = YOUR CODE
assert dist.shape == (n, k)
Explanation: Step 3: order the distances and, for each sample, solely keep the $k=10$ closest samples to form a $k$ nearest neighbor ($k$-NN) graph.
Hint: you may sort a numpy array with np.sort() or np.argsort().
End of explanation
# Scaling factor.
sigma2 = np.mean(dist[:, -1])**2
# Weights with Gaussian kernel.
dist = YOUR CODE
plt.figure(figsize=(15, 5))
plt.hist(dist.flatten(), bins=40);
Explanation: Step 4: compute the weights using a Gaussian kernel, i.e. $$\operatorname{weight}(i, j) = \exp\left(-\frac{\operatorname{dist}(i,j)^2}{\sigma^2}\right) = \exp\left(-\frac{\|x_i - x_j\|_2^2}{\sigma^2}\right).$$
Hint: you may use the below definition of $\sigma^2$.
End of explanation
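A possible completion for this step, assuming dist still holds the n x k nearest-neighbour distances kept in Step 3:
dist = np.exp(-dist**2 / sigma2)   # weights in (0, 1], larger for closer samples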
# Weight matrix.
I = YOUR CODE
J = YOUR CODE
V = YOUR CODE
W = scipy.sparse.coo_matrix((V, (I, J)), shape=(n, n))
# No self-connections.
W.setdiag(0)
# Non-directed graph.
bigger = W.T > W
W = W - W.multiply(bigger) + W.T.multiply(bigger)
assert type(W) == scipy.sparse.csr_matrix
print('n = |V| = {}, k|V| < |E| = {}'.format(n, W.nnz))
plt.spy(W, markersize=2, color='black');
Explanation: Step 5: construct and visualize the sparse weight matrix $W_{ij} = \operatorname{weight}(i, j)$.
Hint: you may use the function scipy.sparse.coo_matrix() to create a sparse matrix.
End of explanation
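One possible way to build the COO triplets; the sketch recomputes the neighbour indices from X because the earlier step only kept the sorted distances (again illustrative, not the official solution):
full = scipy.spatial.distance.squareform(scipy.spatial.distance.pdist(X))
idx = np.argsort(full, axis=1)[:, 1:k+1]      # indices of the k nearest neighbours (self excluded)
knn = np.sort(full, axis=1)[:, 1:k+1]         # the matching distances
I = np.repeat(np.arange(n), k)                # source node of each edge
J = idx.reshape(-1)                           # target node of each edge
V = np.exp(-knn.reshape(-1)**2 / sigma2)      # Gaussian edge weights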
# Degree matrix.
D = YOUR CODE
# Laplacian matrix.
L = D - W
fig, axes = plt.subplots(1, 2, squeeze=True, figsize=(15, 5))
axes[0].spy(L, markersize=2, color='black');
axes[1].plot(D.diagonal(), '.');
Explanation: Step 6: compute the combinatorial graph Laplacian $L = D - W$ where $D$ is the diagonal degree matrix $D_{ii} = \sum_j W_{ij}$.
End of explanation
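A possible completion for the degree matrix, kept sparse so that L = D - W stays sparse:
import scipy.sparse
degrees = np.asarray(W.sum(axis=1)).squeeze()   # d_i = sum_j W_ij
D = scipy.sparse.diags(degrees)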
lamb, U = YOUR CODE
#print(lamb)
plt.figure(figsize=(15, 5))
plt.plot(lamb, '.-');
Explanation: 2 Fourier Basis
Compute the eigendecomposition $L=U \Lambda U^t$ of the Laplacian, where $\Lambda$ is the diagonal matrix of eigenvalues $\Lambda_{\ell\ell} = \lambda_\ell$ and $U = [u_1, \ldots, u_n]^t$ is the graph Fourier basis.
Hint: you may use the function np.linalg.eigh().
End of explanation
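One possible way to fill in the eigendecomposition placeholder, following the hint and assuming L is the sparse Laplacian built above (dense is fine at this graph size):
lamb, U = np.linalg.eigh(L.toarray())   # eigenvalues in ascending order, eigenvectors as columns of U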
YOUR CODE
Explanation: Visualize the eigenvectors $u_\ell$ corresponding to the first eight non-zero eigenvalues $\lambda_\ell$.
Can you explain what you observe and relate it to the structure of the graph ?
Hint: you may use the function plt.scatter().
End of explanation
def f1(u):
YOUR CODE
return y
xhat = f1(lamb)
x = YOUR CODE
Explanation: 3 Graph Signals
Let $f(u)$ be a positive and non-increasing function of $u$.
Compute the graph signal $x$ whose graph Fourier transform satisfies $\hat{x}(\ell) = f(\lambda_\ell)$.
Visualize the result.
Can you interpret it ? How does the choice of $f$ influence the result ?
End of explanation |
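One possible choice of f and the corresponding inverse graph Fourier transform (the decay rate 5 is arbitrary):
def f1(u):
    y = np.exp(-5 * u / u.max())   # positive and non-increasing in u
    return y

xhat = f1(lamb)
x = U.dot(xhat)                    # inverse graph Fourier transform x = U xhat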
653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1><span style="color:gray">ipyrad-analysis toolkit:</span> distance</h1>
Step1: Species tree model
Step2: Coalescent simulations
The SNPs output is saved to an HDF5 database file.
Step3: [optional] Build an IMAP dictionary
A dictionary mapping of population names to sample names.
Step4: calculate distances with missing values filtered and/or imputed, and corrected
The correction applies a model of sequence substitution where more complex models can apply a greater penalty for unobserved changes (e.g., HKY or GTR). This allows you to use either SNPs or SEQUENCES as input. Here we are using SNPs. More on this later... (TODO).
Step5: Infer a tree from distance matrix
Step6: Draw tree and distance matrix
Step7: save results
Step8: Draw the matrix
Step9: Draw matrix reordered to match groups in imap | Python Code:
# conda install ipyrad -c conda-forge -c bioconda
# conda install ipcoal -c conda-forge
import ipyrad.analysis as ipa
import ipcoal
import toyplot
import toytree
Explanation: <h1><span style="color:gray">ipyrad-analysis toolkit:</span> distance</h1>
Genetic distance matrices are used in many contexts to study the evolutionary divergence of samples or populations. The ipa.distance module provides a simple and convenient framework to implement several distance based metrics.
Key features:
Filter SNPs to reduce missing data.
Impute missing data using population allele frequencies.
Calculate pairwise genetic distances between samples (e.g., p-dist, JC, HKY, Fst)
(coming soon) sliding window measurements along chromosomes.
required software
End of explanation
# generate and draw an imbalanced 5 tip tree
tree = toytree.rtree.imbtree(ntips=5, treeheight=500000)
tree.draw(ts='p');
Explanation: Species tree model
End of explanation
# setup a model to simulate 8 haploid samples per species
model = ipcoal.Model(tree=tree, Ne=1e4, nsamples=8)
model.sim_loci(1000, 50)
model.write_snps_to_hdf5(name="test-dist", outdir="/tmp", diploid=True)
# the path to the HDF5 formatted snps file
SNPS = "/tmp/test-dist.snps.hdf5"
Explanation: Coalescent simulations
The SNPs output is saved to an HDF5 database file.
End of explanation
from itertools import groupby
# load sample names from SNPs file
tool = ipa.snps_extracter(SNPS)
# group names by prefix before '-'
groups = groupby(tool.names, key=lambda x: x.split("-")[0])
# arrange into a dictionary
IMAP = {i[0]: list(i[1]) for i in groups}
# show the dict
IMAP
Explanation: [optional] Build an IMAP dictionary
A dictionary mapping of population names to sample names.
End of explanation
dist = ipa.distance(
data=SNPS,
imap=IMAP,
minmap={i: 1 for i in IMAP},
mincov=0.5,
impute_method=None,
)
# infer the distance matrix from sequence data
dist.run()
# show the first few data cells
dist.dists.iloc[:5, :12]
Explanation: calculate distances with missing values filtered and/or imputed, and corrected
The correction applies a model of sequence substitution where more complex models can apply a greater penalty for unobserved changes (e.g., HKY or GTR). This allows you to use either SNPs or SEQUENCES as input. Here we are using SNPs. More on this later... (TODO).
End of explanation
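For intuition about what a substitution-model correction does, here is a standalone sketch of the simplest such model (Jukes-Cantor), which maps an observed proportion p of differing sites to a corrected distance; it is illustrative only and is not necessarily the exact model applied by ipa.distance.
import numpy as np

def jukes_cantor(p):
    # corrected distance for an observed proportion p of differing sites
    return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

for p in (0.05, 0.10, 0.20):
    print(p, float(jukes_cantor(p)))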
tool = ipa.neighbor_joining(matrix=dist.dists)
Explanation: Infer a tree from distance matrix
End of explanation
# create a canvas
canvas = toyplot.Canvas(width=500, height=450);
# add tree
axes = canvas.cartesian(bounds=("10%", "35%", "10%", "90%"))
gtree.draw(axes=axes, tip_labels=True, tip_labels_align=True)
# add matrix
table = canvas.table(
rows=matrix.shape[0],
columns=matrix.shape[1],
margin=0,
bounds=("40%", "95%", "9%", "91%"),
)
colormap = toyplot.color.brewer.map("BlueRed")
# apply a color to each cell in the table
for ridx in range(matrix.shape[0]):
for cidx in range(matrix.shape[1]):
cell = table.cells.cell[ridx, cidx]
cell.style = {
"fill": colormap.colors(matrix.iloc[ridx, cidx], 0, 1),
}
dist.dists
# style the gaps between cells
table.body.gaps.columns[:] = 3
table.body.gaps.rows[:] = 3
# hide axes coordinates
axes.show = False
# load the snp data into distance tool with arguments
dist = Distance(
data=data,
imap=imap,
minmap=minmap,
mincov=0.5,
impute_method="sample",
subsample_snps=False,
)
dist.run()
Explanation: Draw tree and distance matrix
End of explanation
# save to a CSV file
dist.dists.to_csv("distances.csv")
# show the upper corner
dist.dists.head()
Explanation: save results
End of explanation
toyplot.matrix(
dist.dists,
bshow=False,
tshow=False,
rlocator=toyplot.locator.Explicit(
range(len(dist.names)),
sorted(dist.names),
));
Explanation: Draw the matrix
End of explanation
# get list of concatenated names from each group
ordered_names = []
for group in dist.imap.values():
ordered_names += group
# reorder matrix to match name order
ordered_matrix = dist.dists[ordered_names].T[ordered_names]
toyplot.matrix(
ordered_matrix,
bshow=False,
tshow=False,
rlocator=toyplot.locator.Explicit(
range(len(ordered_names)),
ordered_names,
));
Explanation: Draw matrix reordered to match groups in imap
End of explanation |
654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Remote Sensing Systems (RSS, http
Step2: Weight functions
Step3: Netcdf data
Step4: We need to calculate the element area (on a unit sphere) as follows
Step5: Let's create averaging weights that are normalized to 1 as follows
Step6: The temperature oscillates each year. To calculate the "anomaly", we subtract from each month its average temperature
Step7: We calculate linear fit
Step8: And compare against official graph + trend. As can be seen, the agreement is perfect | Python Code:
#!wget http://www.remss.com/data/msu/data/netcdf/uat4_tb_v03r03_avrg_chTLT_197812_201308.nc3.nc
#!mv uat4_tb_v03r03_avrg_chTLT_197812_201308.nc3.nc data/
#!wget http://www.remss.com/data/msu/data/netcdf/uat4_tb_v03r03_anom_chTLT_197812_201308.nc3.nc
#!mv uat4_tb_v03r03_anom_chTLT_197812_201308.nc3.nc data/
Explanation: Remote Sensing Systems (RSS, http://www.ssmi.com/) provide machine readable curated datasets of satellite measurements, and the website also explains how they were obtained, processed etc.
The temperature data is called MSU (Microwave Sounding Units), that operated between 1978-2005, and AMSU (Advanced Microwave Sounding Units) from 1998. They provide 4 main datasets:
TLT (Temperature Lower Troposphere): MSU channel 2 by subtracting measurements made at different angles from each other
TMT (Temperature Middle Troposphere): MSU channel 2
TTS (Temperature Troposphere Stratosphere): MSU channel 3
TLS (Temperature Lower Stratosphere): MSU channel 4
The AMSU also provides channels 10-14 (datasets available from RSS), which measure temperatures higher in the stratosphere than the highest MSU channel (4).
End of explanation
%pylab inline
import urllib2
import os
from IPython.display import Image
def download(url, dir):
Saves file 'url' into 'dir', unless it already exists.
filename = os.path.basename(url)
fullpath = os.path.join(dir, filename)
if os.path.exists(fullpath):
print "Already downloaded:", filename
else:
print "Downloading:", filename
open(fullpath, "w").write(urllib2.urlopen(url).read())
download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_TTS.txt", "data")
download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_TLS.txt", "data")
download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tlt_land.txt", "data")
download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tlt_ocean.txt", "data")
download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tmt_land.txt", "data")
download("http://www.remss.com/data/msu/weighting_functions/std_atmosphere_wt_function_chan_tmt_ocean.txt", "data")
D = loadtxt("data/std_atmosphere_wt_function_chan_TTS.txt", skiprows=6)
h = D[:, 1]
wTTS = D[:, 5]
D = loadtxt("data/std_atmosphere_wt_function_chan_TLS.txt", skiprows=6)
assert max(abs(h-D[:, 1])) < 1e-12
wTLS = D[:, 5]
D = loadtxt("data/std_atmosphere_wt_function_chan_tlt_land.txt", skiprows=7)
assert max(abs(h-D[:, 1])) < 1e-12
wTLT_land = D[:, 5]
D = loadtxt("data/std_atmosphere_wt_function_chan_tlt_ocean.txt", skiprows=7)
assert max(abs(h-D[:, 1])) < 1e-12
wTLT_ocean = D[:, 5]
D = loadtxt("data/std_atmosphere_wt_function_chan_tmt_land.txt", skiprows=7)
assert max(abs(h-D[:, 1])) < 1e-12
wTMT_land = D[:, 5]
D = loadtxt("data/std_atmosphere_wt_function_chan_tmt_ocean.txt", skiprows=7)
assert max(abs(h-D[:, 1])) < 1e-12
wTMT_ocean = D[:, 5]
figure(figsize=(3, 8))
plot(wTLS, h/1000, label="TLS")
plot(wTTS, h/1000, label="TTS")
plot(wTMT_ocean, h/1000, label="TMT ocean")
plot(wTMT_land, h/1000, label="TMT land")
plot(wTLT_ocean, h/1000, label="TLT ocean")
plot(wTLT_land, h/1000, label="TLT land")
xlim([0, 0.2])
ylim([0, 50])
legend()
ylabel("Height [km]")
show()
Image(url="http://www.ssmi.com/msu/img/wt_func_plot_for_web_2012.all_channels2.png", embed=True)
Explanation: Weight functions
End of explanation
from netCDF4 import Dataset
from numpy.ma import average
rootgrp = Dataset('data/uat4_tb_v03r03_avrg_chtlt_197812_201504.nc3.nc')
list(rootgrp.variables)
# 144 values, interval [-180, 180]
longitude = rootgrp.variables["longitude"][:]
# 72 values, interval [-90, 90]
latitude = rootgrp.variables["latitude"][:]
# 144 rows of [min, max]
longitude_bounds = rootgrp.variables["longitude_bounds"][:]
# 72 rows of [min, max]
latitude_bounds = rootgrp.variables["latitude_bounds"][:]
# time in days, 1978 - today
time = rootgrp.variables["time"][:]
# time in years
years = time / 365.242 + 1978
# 12 values: time in days for 12 months in a year
time_climatology = rootgrp.variables["climatology_time"][:]
# (time, latitude, longitude)
brightness_temperature = rootgrp.variables["brightness_temperature"][:]
# (time_climatology, latitude, longitude)
brightness_temperature_climatology = rootgrp.variables["brightness_temperature_climatology"][:]
Explanation: Netcdf data
End of explanation
S_theta = pi / 36 * sin(pi/144) * cos(latitude*pi/180)
sum(144 * S_theta)-4*pi
Explanation: We need to calculate the element area (on a unit sphere) as follows:
$$
S_{\theta\phi} = \int_{\theta_{min}}^{\theta_{max}} \int_{\phi_{min}}^{\phi_{max}} \sin\theta\, d \theta d \phi
= (\cos\theta_{min} - \cos\theta_{max})(\phi_{max} - \phi_{min})
$$
Note that $-180 \le \phi \le 180$ is the longitude (in degrees) and $0 \le \theta \le 180$ is the colatitude, i.e. the polar angle measured from the pole (also in degrees).
Introducing $\Delta\theta = \theta_{max} - \theta_{min}$, $\Delta\phi = \phi_{max} - \phi_{min}$ and $\theta = {\theta_{max} + \theta_{min} \over 2}$ we can write:
$$
S_{\theta\phi} = (\cos(\theta-{\Delta\theta\over2}) - \cos(\theta+{\Delta\theta\over 2})) \Delta \phi
= 2 \Delta\phi \sin\theta\, \sin{\Delta\theta\over 2}
$$
For $\Delta\theta = \Delta\phi = 2.5 {\pi\over 180} = {\pi\over 72}$ we finally obtain:
$$
S_\theta = 2 {\pi\over 72} \sin {\pi\over 2\cdot72} \, \sin\theta = {\pi\over 36} \sin {\pi\over 144} \, \sin\theta
$$
Finally, we would like to use $\theta$ for latitude, so we need to substitute $\theta \to \theta + {\pi\over 2}$:
$$
S_\theta = {\pi\over 36} \sin {\pi\over 144} \, \sin(\theta+{\pi\over 2})
= {\pi\over 36} \sin {\pi\over 144} \, \cos\theta
$$
As a check, we calculate the surface of the unit sphere (equal to $4\pi$):
$$
\sum_{\theta}144S_\theta = 4\pi
$$
End of explanation
w_theta = sin(pi/144) * cos(latitude*pi/180)
sum(w_theta)
Tavg = average(brightness_temperature, axis=2)
Tavg = average(Tavg, axis=1, weights=w_theta)
plot(years, Tavg-273.15)
xlabel("Year")
ylabel("T [C]")
title("TLT (Temperature Lower Troposphere)")
show()
Explanation: Let's create averaging weights that are normalized to 1 as follows:
$$
w_\theta = S_\theta {144\over 4\pi} = \sin{\pi\over144}\cos\theta
$$
$$
\sum_\theta w_\theta = 1
$$
End of explanation
Tanom = empty(Tavg.shape)
for i in range(12):
Tanom[i::12] = Tavg[i::12] - average(Tavg[i::12])
Explanation: The temperature oscillates each year. To calculate the "anomaly", we subtract from each month its average temperature:
End of explanation
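The same climatology removal can be cross-checked with an explicit month index (a redundancy check only; it assumes Tavg carries no masked values):
months = np.arange(len(Tavg)) % 12
climatology = np.array([Tavg[months == m].mean() for m in range(12)])
print(abs(Tanom - (Tavg - climatology[months])).max())   # should be ~0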
from scipy.stats import linregress
# Skip the first year, start from 1979, that's why you see the "12" here and below:
n0 = 12 # use 276 for the year 2001
Y0 = years[n0]
a, b, _, _, adev = linregress(years[n0:]-Y0, Tanom[n0:])
print "par dev"
print a, adev
print b
Explanation: We calculate linear fit
End of explanation
from matplotlib.ticker import MultipleLocator
figure(figsize=(6.6, 3.5))
plot(years, Tanom, "b-", lw=0.7)
plot(years, a*(years-Y0)+b, "b-", lw=0.7, label="Trend = $%.3f \pm %.3f$ K/decade" % (a*10, adev*10))
xlim([1979, 2016])
ylim([-1.2, 1.2])
gca().xaxis.set_minor_locator(MultipleLocator(1))
legend()
xlabel("Year")
ylabel("Temperature Anomaly [K]")
title("TLT (Temperature Lower Troposphere)")
show()
Image(url="http://www.remss.com/data/msu/graphics/TLT/plots/RSS_TS_channel_TLT_Global_Land_And_Sea_v03_3.png", embed=True)
Explanation: And compare against official graph + trend. As can be seen, the agreement is perfect:
End of explanation |
655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Retrieving Tweets
To use any Twitter API we have to import the modules and define the access keys and tokens.
Step1: With the keys and access tokens, we will create the authentication handler and set the access token.
Step2: With the authorization created, we pass the access credentials to the Tweepy API. This way, we have access to the methods available in the API.
Step3: Using home_timeline()
This method retrieves the last 20 updates (including retweets) from the authenticated user's timeline.
The return value is a list-like object that stores the retrieved results.
http
Step4: In addition, we can use the count parameter to limit the search.
Step5: Using user_timeline()
This method retrieves the last 20 updates of the authenticated user, or of the user specified via the id parameter.
The return value is a list-like object that stores the retrieved results.
http
Step6: Using retweets_of_me()
This method retrieves the last 20 tweets of the authenticated user that have been retweeted by others.
The return value is a list-like object that stores the retrieved results.
http | Python Code:
import tweepy
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''
Explanation: Retrieving Tweets
To use any Twitter API we have to import the modules and define the access keys and tokens.
End of explanation
autorizar = tweepy.OAuthHandler(consumer_key, consumer_secret)
autorizar.set_access_token(access_token, access_token_secret)
Explanation: With the keys and access tokens, we will create the authentication handler and set the access token.
End of explanation
api = tweepy.API(autorizar)
print(api)
Explanation: With the authorization created, we pass the access credentials to the Tweepy API. This way, we have access to the methods available in the API.
End of explanation
tweets_publicos = api.home_timeline()
print(type(tweets_publicos))
for i, tweet in enumerate(tweets_publicos, start=1):
print("{} ---> {}".format(i, tweet.text))
Explanation: Using home_timeline()
This method retrieves the last 20 updates (including retweets) from the authenticated user's timeline.
The return value is a list-like object that stores the retrieved results.
http://docs.tweepy.org/en/v3.5.0/api.html?highlight=home_timeline#API.home_timeline
End of explanation
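Because each call returns at most 20 tweets, longer histories require paging. A small sketch using tweepy's Cursor helper with the same authenticated api object (the limit of 50 items is arbitrary):
for i, tweet in enumerate(tweepy.Cursor(api.home_timeline).items(50), start=1):
    print("{} ---> {}".format(i, tweet.text))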
tweets_publicos = api.home_timeline(count=5)
for i, tweet in enumerate(tweets_publicos, start=1):
print("Tweet número: {}".format(i))
print("----------------")
print("Usuário @{} disse:".format(tweet.user.screen_name))
print(tweet.text)
print("id do usuário: {}".format(tweet.user.id))
print('\n')
Explanation: In addition, we can use the count parameter to limit the search.
End of explanation
tweets_publicos_usuario = api.user_timeline(id='267283568', count=5)
for tweet in tweets_publicos_usuario:
print('----')
print(tweet.text)
print(tweet.id)
print(tweet.lang)
print(tweet.place)
print(tweet.retweet_count)
print(tweet.coordinates)
print(tweet.user.id)
Explanation: Using user_timeline()
This method retrieves the last 20 updates of the authenticated user, or of the user specified via the id parameter.
The return value is a list-like object that stores the retrieved results.
http://docs.tweepy.org/en/v3.5.0/api.html?highlight=user_timeline#API.user_timeline
End of explanation
retweets = api.retweets_of_me(count=10)
for i, tweet in enumerate(retweets, start=1):
print("{} - {}".format(i, tweet.text))
Explanation: Using retweets_of_me()
This method retrieves the last 20 tweets of the authenticated user that have been retweeted by others.
The return value is a list-like object that stores the retrieved results.
http://docs.tweepy.org/en/v3.5.0/api.html?highlight=retweets_of_me#API.retweets_of_me
End of explanation |
656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
User comparison tests
Table of Contents
Preparation
User data vectors
User lists
Sessions' checkpoints
Assembly
Time
Preparation
<a id=preparation />
Step1: Data vectors of users
<a id=userdatavectors />
Step2: getAllUserVectorData
Step3: Correlation Matrix
Step4: List of users and their sessions
<a id=userlists />
Step5: List of sessions with their checkpoints achievements
<a id=sessionscheckpoints />
Step6: Assembly of both
<a id=assembly />
Step7: Time analysis
<a id=time />
Step8: TODO
userTimes.loc[
Step9: user progress classification
tinkering | Python Code:
%run "../Functions/1. Google form analysis.ipynb"
%run "../Functions/4. User comparison.ipynb"
Explanation: User comparison tests
Table of Contents
Preparation
User data vectors
User lists
Sessions' checkpoints
Assembly
Time
Preparation
<a id=preparation />
End of explanation
#getAllResponders()
setAnswerTemporalities(gform)
Explanation: Data vectors of users
<a id=userdatavectors />
End of explanation
# small sample
#allData = getAllUserVectorData( getAllUsers( rmdf1522 )[:10] )
# complete set
#allData = getAllUserVectorData( getAllUsers( rmdf1522 ) )
# subjects who answered the gform
allData = getAllUserVectorData( getAllResponders() )
# 10 subjects who answered the gform
#allData = getAllUserVectorData( getAllResponders()[:10] )
efficiencies = allData.loc['efficiency'].sort_values()
efficiencies.index = range(0, len(allData.columns))
efficiencies.plot(title = 'efficiency')
efficiencies2 = allData.loc['efficiency'].sort_values()
efficiencies2 = efficiencies2[efficiencies2 != 0]
efficiencies2.index = range(0, len(efficiencies2))
efficiencies2 = np.log(efficiencies2)
efficiencies2.plot(title = 'efficiency log')
maxChapter = allData.loc['maxChapter'].sort_values()
maxChapter.index = range(0, len(allData.columns))
maxChapter.plot(title = 'maxChapter')
len(allData.columns)
userIds = getAllResponders()
_source = correctAnswers
# _source is used as correction source, if we want to include answers to these questions
#def getAllUserVectorData( userIds, _source = [] ):
# result
isInitialized = False
allData = []
f = FloatProgress(min=0, max=len(userIds))
display(f)
for userId in userIds:
#print(str(userId))
f.value += 1
if not isInitialized:
isInitialized = True
allData = getUserDataVector(userId, _source = _source)
else:
allData = pd.concat([allData, getUserDataVector(userId, _source = _source)], axis=1)
#print('done')
allData
userId
Explanation: getAllUserVectorData
End of explanation
methods = ['pearson', 'kendall', 'spearman']
_allUserVectorData = allData.T
_method = methods[0]
_title='RedMetrics Correlations'
_abs=True
_clustered=False
_figsize = (20,20)
#def plotAllUserVectorDataCorrelationMatrix(
# _allUserVectorData,
# _method = methods[0],
# _title='RedMetrics Correlations',
# _abs=False,
# _clustered=False,
# _figsize = (20,20)
#):
_progress = FloatProgress(min=0, max=3)
display(_progress)
# computation of correlation matrix
_m = _method
if(not (_method in methods)):
_m = methods[0]
_correlation = _allUserVectorData.astype(float).corr(_m)
_progress.value += 1
if(_abs):
_correlation = _correlation.abs()
_progress.value += 1
# plot
if(_clustered):
sns.clustermap(_correlation,cmap=plt.cm.jet,square=True,figsize=_figsize)
else:
_fig = plt.figure(figsize=_figsize)
_ax = plt.subplot(111)
_ax.set_title(_title)
sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True)
_progress.value += 1
gform[QTemporality].unique()
allData.loc['scoreundefined'].dropna()
getAllUsers(rmdf1522)[:10]
len(getAllUsers(rmdf1522))
Explanation: Correlation Matrix
End of explanation
userSessionsRelevantColumns = ['customData.localplayerguid', 'sessionId']
userSessions = rmdf1522[rmdf1522['type']=='start'].loc[:,userSessionsRelevantColumns]
userSessions = userSessions.rename(index=str, columns={'customData.localplayerguid': 'userId'})
userSessions.head()
#groupedUserSessions = userSessions.groupby('customData.localplayerguid')
#groupedUserSessions.head()
#groupedUserSessions.describe().head()
Explanation: List of users and their sessions
<a id=userlists />
End of explanation
checkpointsRelevantColumns = ['sessionId', 'customData.localplayerguid', 'type', 'section', 'userTime']
checkpoints = rmdf1522.loc[:, checkpointsRelevantColumns]
checkpoints = checkpoints[checkpoints['type']=='reach'].loc[:,['section','sessionId','userTime']]
checkpoints = checkpoints[checkpoints['section'].str.startswith('tutorial', na=False)]
#checkpoints = checkpoints.groupby("sessionId")
#checkpoints = checkpoints.max()
checkpoints.head()
Explanation: List of sessions with their checkpoints achievements
<a id=sessionscheckpoints />
End of explanation
#assembled = userSessions.combine_first(checkpoints)
assembled = pd.merge(userSessions, checkpoints, on='sessionId', how='outer')
assembled.head()
userSections = assembled.drop('sessionId', 1)
userSections.head()
userSections = userSections.dropna()
userSections.head()
checkpoints = userSections.groupby("userId")
checkpoints = checkpoints.max()
checkpoints.head()
Explanation: Assembly of both
<a id=assembly />
End of explanation
#userTimedSections = userSections.groupby("userId").agg({ "userTime": np.min })
#userTimedSections = userSections.groupby("userId")
userTimes = userSections.groupby("userId").agg({ "userTime": [np.min, np.max] })
userTimes["duration"] = pd.to_datetime(userTimes["userTime"]["amax"]) - pd.to_datetime(userTimes["userTime"]["amin"])
userTimes["duration"] = userTimes["duration"].map(lambda x: np.timedelta64(x, 's'))
userTimes = userTimes.sort_values(by=['duration'], ascending=[False])
userTimes.head()
Explanation: Time analysis
<a id=time />
End of explanation
sessionCount = 1
_rmDF = rmdf1522
sample = gform
before = False
after = True
gfMode = False
rmMode = True
#def getAllUserVectorDataCustom(before, after, gfMode = False, rmMode = True, sessionCount = 1, _rmDF = rmdf1522)
userIds = []
if (before and after):
userIds = getSurveysOfUsersWhoAnsweredBoth(sample, gfMode = gfMode, rmMode = rmMode)
elif before:
if rmMode:
userIds = getRMBefores(sample)
else:
userIds = getGFBefores(sample)
elif after:
if rmMode:
userIds = getRMAfters(sample)
else:
userIds = getGFormAfters(sample)
if(len(userIds) > 0):
userIds = userIds[localplayerguidkey]
allUserVectorData = getAllUserVectorData(userIds, _rmDF = _rmDF)
allUserVectorData = allUserVectorData.T
result = allUserVectorData[allUserVectorData['sessionsCount'] == sessionCount].T
else:
print("no matching user")
result = []
result
getAllUserVectorDataCustom(False, True)
userIdsBoth = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = True)[localplayerguidkey]
allUserVectorData = getAllUserVectorData(userIdsBoth)
allUserVectorData = allUserVectorData.T
allUserVectorData[allUserVectorData['sessionsCount'] == 1]
Explanation: TODO
userTimes.loc[:,'duration']
userTimes = userTimes[4:]
userTimes["duration_seconds"] = userTimes["duration"].map(lambda x: pd.Timedelta(x).seconds)
maxDuration = np.max(userTimes["duration_seconds"])
userTimes["duration_rank"] = userTimes["duration_seconds"].rank(ascending=False)
userTimes.plot(x="duration_rank", y="duration_seconds")
plt.xlabel("game session")
plt.ylabel("time played (s)")
plt.legend('')
plt.xlim(0, 139)
plt.ylim(0, maxDuration)
userTimedSections = userSections.groupby("section").agg({ "userTime": np.min })
userTimedSections
userTimedSections["firstReached"] = pd.to_datetime(userTimedSections["userTime"])
userTimedSections.head()
userTimedSections.drop('userTime', 1)
userTimedSections.head()
userTimedSections["firstCompletionDuration"] = userTimedSections["firstReached"].diff()
userTimedSections.head()
End of explanation
testUser = "3685a015-fa97-4457-ad73-da1c50210fe1"
def getScoreFromBinarized(binarizedAnswers):
gformIndices = binarizedAnswers.index.map(lambda s: int(s.split(correctionsColumnNameStem)[1]))
return pd.Series(np.dot(binarizedAnswers, np.ones(binarizedAnswers.shape[1])), index=gform.loc[gformIndices, localplayerguidkey])
#allResponders = getAllResponders()
#gf_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = False)
rm_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = False, rmMode = True)
#gfrm_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = True)
sciBinarizedBefore = getAllBinarized(_form = getRMBefores(rm_both))
sciBinarizedAfter = getAllBinarized(_form = getRMAfters(rm_both))
scoresBefore = getScoreFromBinarized(sciBinarizedBefore)
scoresAfter = getScoreFromBinarized(sciBinarizedAfter)
medianBefore = np.median(scoresBefore)
medianAfter = np.median(scoresAfter)
maxScore = sciBinarizedBefore.shape[1]
indicators = pd.DataFrame()
indicators[answerTemporalities[0]] = scoresBefore
indicators[answerTemporalities[1]] = scoresAfter
indicators['delta'] = scoresAfter - scoresBefore
indicators['maxPotentialDelta'] = maxScore - scoresBefore
for index in indicators['maxPotentialDelta'].index:
if (indicators.loc[index, 'maxPotentialDelta'] == 0):
indicators.loc[index, 'maxPotentialDelta'] = 1
indicators['relativeBefore'] = scoresBefore / medianBefore
indicators['relativeAfter'] = scoresAfter / medianBefore
indicators['relativeDelta'] = indicators['delta'] / medianBefore
indicators['realizedPotential'] = indicators['delta'] / indicators['maxPotentialDelta']
indicators['increaseRatio'] = indicators[answerTemporalities[0]]
for index in indicators['increaseRatio'].index:
if (indicators.loc[index, 'increaseRatio'] == 0):
indicators.loc[index, 'increaseRatio'] = 1
indicators['increaseRatio'] = indicators['delta'] / indicators['increaseRatio']
indicators
(min(indicators['relativeBefore']), max(indicators['relativeBefore'])),\
(min(indicators['relativeDelta']), max(indicators['relativeDelta'])),\
medianBefore,\
np.median(indicators['relativeBefore']),\
np.median(indicators['relativeDelta'])\
indicatorX = 'relativeBefore'
indicatorY = 'relativeDelta'
def scatterPlotIndicators(indicatorX, indicatorY):
print(indicatorX + ' range: ' + str((min(indicators[indicatorX]), max(indicators[indicatorX]))))
print(indicatorY + ' range: ' + str((min(indicators[indicatorY]), max(indicators[indicatorY]))))
print(indicatorX + ' median: ' + str(np.median(indicators[indicatorX])))
print(indicatorY + ' median: ' + str(np.median(indicators[indicatorY])))
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(indicators[indicatorX], indicators[indicatorY])
plt.xlabel(indicatorX)
plt.ylabel(indicatorY)
# vertical line
plt.plot( [np.median(indicators[indicatorX]), np.median(indicators[indicatorX])],\
[min(indicators[indicatorY]), max(indicators[indicatorY])],\
'k-', lw=2)
# horizontal line
plt.plot( [min(indicators[indicatorX]), max(indicators[indicatorX])],\
[np.median(indicators[indicatorY]), np.median(indicators[indicatorY])],\
'k-', lw=2)
indicators.columns
scatterPlotIndicators('relativeBefore', 'relativeDelta')
scatterPlotIndicators('relativeBefore', 'realizedPotential')
scatterPlotIndicators('relativeBefore', 'increaseRatio')
scatterPlotIndicators('relativeBefore', 'relativeAfter')
scatterPlotIndicators('maxPotentialDelta', 'realizedPotential')
Explanation: user progress classification
tinkering
End of explanation |
657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Exploration of a publicly available dataset.
<img align="right" src="http://www.sharielf.com/gifs/zz032411pony.jpg" width="220px">
Step1: Two columns that are a mistaken copy of each other?...
We also suspect that the 'inactive' column and the 'country' column are exactly the same, also why is there one row in the inactive column with a value of '2'?
<pre>
"Ahhh, what an awful dream. Ones and zeroes everywhere... and I thought I saw a two [shudder]."
-- Bender
"It was just a dream, Bender. There's no such thing as two".
-- Fry
</pre>
Step2: Okay, well let's try to get something out of this pile. We'd like to run some simple statistics to see what correlations the data might contain.
G-test is for goodness of fit to a distribution and for independence in contingency tables. It's related to chi-squared, multinomial and Fisher's exact test, please see http://en.wikipedia.org/wiki/G_test.
Step3: So switching gears, perhaps we'll look at date range, volume over time, etc.
Pandas also has reasonably good functionality for date/range processing and plotting.
Step4: That doesn't look good...
The plot above shows the total volume of ALL newly submitted domains. We see from the plot that the taper is a general overall effect due to a drop in new domain submissions into the MDL database. Given the recent anemic volume there might be another data source that has more active submissions.
Well, the anemic volume issue aside, we're going to carry on by looking at the correlations in volume over time. In other words, is the volume of reported exploits closely related to the volume of other exploits...
Correlations of Volume Over Time
<ul>
<li>**Prof. Farnsworth
Step5: Discussion of Correlation Matrix
The two sets of 3x3 red blocks on the lower right make intuitive sense, Zeus config file, drop zone and trojan show almost perfect volume over time correlation. | Python Code:
# This exercise is mostly for us to understand what kind of data we have and then
# run some simple stats on the fields/values in the data. Pandas will be great for that
import pandas as pd
pd.__version__
# Set default figure sizes
pylab.rcParams['figure.figsize'] = (14.0, 5.0)
# This data url can be a web location http://foo.bar.com/mydata.csv or it can be a
# a path to your disk where the data resides /full/path/to/data/mydata.csv
# Note: Be a good web citizen, download the data once and then specify a path to your local file :)
# For instance: > wget http://www.malwaredomainlist.com/mdlcsv.php -O mdl_data.csv
# data_url = 'http://www.malwaredomainlist.com/mdlcsv.php'
data_url = 'data/mdl_data.csv'
# Note: when the data was pulled it didn't have column names, so poking around
# on the website we found the column headers referenced so we're explicitly
# specifying them to the CSV reader:
# date,domain,ip,reverse,description,registrant,asn,inactive,country
dataframe = pd.read_csv(data_url, names=['date','domain','ip','reverse','description',
'registrant','asn','inactive','country'], header=None, error_bad_lines=False, low_memory=False)
dataframe.head(5)
dataframe.tail(5)
# We can see there's a blank row at the end that got filled with NaNs
# Thankfully Pandas is great about handling missing data.
print dataframe.shape
dataframe = dataframe.dropna()
dataframe.shape
# For this use case we're going to remove any rows that have a '-' in the data
# by replacing '-' with NaN and then running dropna() again
dataframe = dataframe.replace('-', np.nan)
dataframe = dataframe.dropna()
dataframe.shape
# Drilling down into one of the columns
dataframe['description']
# Pandas has a describe method
# For numerical data it give a nice set of summary statistics
# For categorical data it simply gives count, unique values
# and the most common value
dataframe['description'].describe()
# We can get a count of all the unique values by running value_counts()
dataframe['description'].value_counts()
# We noticed that the description values just differ by whitespace or captilization
dataframe['description'] = dataframe['description'].map(lambda x: x.strip().lower())
dataframe['description']
# First thing we noticed was that many of the 'submissions' had the exact same
# date, which we're guessing means some batch jobs just threw a bunch of
# domains in and stamped them all with the same date.
# We also noticed that many values just differ by capitalization (this is common)
dataframe = dataframe.applymap(lambda x: x.strip().lower() if not isinstance(x,float64) else x)
dataframe.head()
# The domain column looks to be a full URI instead of just the domain
from urlparse import urlparse
dataframe['domain'] = dataframe['domain'].astype(str)
dataframe['domain'] = dataframe['domain'].apply(lambda x: "http://" + x)
dataframe['domain'] = dataframe['domain'].apply(lambda x: urlparse(x).netloc)
Explanation: Data Exploration of a publicly available dataset.
<img align="right" src="http://www.sharielf.com/gifs/zz032411pony.jpg" width="220px">
Data processing, cleaning and normalization is often 95% of the battle. Never underestimate this part of the process, if you're not careful about it your derrière will be sore later. Another good reason to spend a bit of time on understanding your data is that you may realize that the data isn't going to be useful for your task at hand. Quick pruning of fruitless branches is good.
Data as an analogy: Data is almost always a big pile of shit, the only real question is, "Is there a Pony inside?" and that's what data exploration and understanding is about.
For this exploration we're going to pull some data from the Malware Domain List website http://www.malwaredomainlist.com. We'd like to thank them for providing a great resourse and making their data available to the public. In general data is messy so even though we're going to be nit-picking quite a bit, we recognized that many datasets will have similar issues which is why we feel like this is a good 'real world' example of data.
Full database: http://www.malwaredomainlist.com/mdlcsv.php
End of explanation
# Using numpy.corrcoef to compute the correlation coefficient matrix
np.corrcoef(dataframe["inactive"], dataframe["country"])
# Pandas also has a correlation method on it's dataframe which has nicer output
dataframe.corr()
# Yeah perfectly correlated, so looks like 'country'
# is just the 'inactive' column duplicated.
# So what happened here? Seems bizarre to have a replicated column.
Explanation: Two columns that are a mistaken copy of each other?...
We also suspect that the 'inactive' column and the 'country' column are exactly the same, also why is there one row in the inactive column with a value of '2'?
<pre>
"Ahhh, what an awful dream. Ones and zeroes everywhere... and I thought I saw a two [shudder]."
-- Bender
"It was just a dream, Bender. There's no such thing as two".
-- Fry
</pre>
End of explanation
# The data hacking repository has a simple stats module we're going to use
import data_hacking.simple_stats as ss
# Spin up our g_test class
g_test = ss.GTest()
# Here we'd like to see how various exploits (description) are related to
# the ASN (Autonomous System Number) associated with the ip/domain.
(exploits, matches, cont_table) = g_test.highest_gtest_scores(
dataframe['description'], dataframe['asn'], N=5, matches=5)
ax = exploits.T.plot(kind='bar', stacked=True)
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('ASN (Autonomous System Number)')
patches, labels = ax.get_legend_handles_labels()
ax.legend(patches, labels, loc='upper right')
# The plot below is showing the number of times a particular exploit was associated with an ASN.
# Interesting to see whether exploits are highly correlated to particular ASNs.
# Now we use g_test with the 'reverse=True' argument to display those exploits
# that do not have a high correlation with a particular ASN.
exploits, matches, cont_table = g_test.highest_gtest_scores(dataframe['description'],
dataframe['asn'], N=7, reverse=True, min_volume=500, matches=15)
ax = exploits.T.plot(kind='bar', stacked=True)
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('ASN (Autonomous System Number)')
patches, labels = ax.get_legend_handles_labels()
ax.legend(patches, labels, loc='best')
# The plot below is showing exploits who aren't associated with any particular ASN.
# Interesting to see exploits that are spanning many ASNs.
exploits, matches, cont_table = g_test.highest_gtest_scores(dataframe['description'],
dataframe['domain'], N=5)
ax = exploits.T.plot(kind='bar', stacked=True) #, log=True)
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Domain')
patches, labels = ax.get_legend_handles_labels()
ax.legend(patches, labels, loc='best')
# The Contingency Table below is just showing the counts of the number of times
# a particular exploit was associated with a TLD.
# Drilling down on one particular exploit
banker = dataframe[dataframe['description']=='trojan banker'] # Subset dataframe
exploits, matches, cont_table = g_test.highest_gtest_scores(banker['description'], banker['domain'], N=5)
import pprint
pprint.pprint(["Domain: %s Count: %d" % (domain,count) for domain,count in exploits.iloc[0].iteritems()])
Explanation: Okay, well let's try to get something out of this pile. We'd like to run some simple statistics to see what correlations the data might contain.
G-test is for goodness of fit to a distribution and for independence in contingency tables. It's related to chi-squared, multinomial and Fisher's exact test, please see http://en.wikipedia.org/wiki/G_test.
End of explanation
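For reference, a G-test can also be computed on a contingency table with SciPy alone, since chi2_contingency's lambda_ option selects the log-likelihood-ratio statistic; the 2x2 table below is made up purely for illustration.
from scipy.stats import chi2_contingency
toy_table = [[30, 10],
             [20, 40]]
g_stat, p_value, dof, expected = chi2_contingency(toy_table, lambda_="log-likelihood")
print g_stat, p_value, dof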
# Add the proper timestamps to the dataframe replacing the old ones
dataframe['date'] = dataframe['date'].apply(lambda x: str(x).replace('_','T'))
dataframe['date'] = pd.to_datetime(dataframe['date'])
# Now prepare the data for plotting by pivoting on the
# description to create a new column (series) for each value
# We're going to add a new column called value (needed for pivot). This
# is a bit dorky, but needed as the new columns that get created should
# really have a value in them, also we can use this as our value to sum over.
subset = dataframe[['date','description']]
subset['count'] = 1
pivot = pd.pivot_table(subset, values='count', rows=['date'], cols=['description'], fill_value=0)
by = lambda x: lambda y: getattr(y, x)
grouped = pivot.groupby([by('year'),by('month')]).sum()
# Only pull out the top 7 descriptions (exploit types)
topN = subset['description'].value_counts()[:7].index
grouped[topN].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
# The plot below shows the volume of particular exploits impacting new domains.
# Tracking the ebb and flow of exploits over time might be useful
# depending on the type of analysis you're doing.
# The rise and fall of the different exploits is intriguing but
# the taper at the end is concerning, let look at total volume of
# new malicious domains coming into the MDL database.
total_mdl = dataframe['description']
total_mdl.index=dataframe['date']
total_agg = total_mdl.groupby([by('year'),by('month')]).count()
matplotlib.pyplot.figure()
total_agg.plot(label='New Domains in MDL Database')
pylab.ylabel('Total Exploits')
pylab.xlabel('Date Submitted')
matplotlib.pyplot.legend()
Explanation: So switching gears, perhaps we'll look at date range, volume over time, etc.
Pandas also has reasonably good functionality for date/range processing and plotting.
End of explanation
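With a more recent pandas the year/month groupby above can also be written as a resample on the pivot table's DatetimeIndex (a sketch only; older pandas versions used resample('M', how='sum') instead):
monthly = pivot.resample('M').sum()   # one row per calendar month
monthly[topN].plot()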
# Only pull out the top 20 descriptions (exploit types)
topN = subset['description'].value_counts()[:20].index
corr_df = grouped[topN].corr()
# Statsmodels has a correlation plot, we expect the diagonal to have perfect
# correlation (1.0), but any high score off the diagonal means that
# the volumes of different exploits are temporally correlated.
import statsmodels.api as sm
corr_df.sort(axis=0, inplace=True) # Just sorting so exploit names are easy to find
corr_df.sort(axis=1, inplace=True)
corr_matrix = corr_df.as_matrix()
pylab.rcParams['figure.figsize'] = (8.0, 8.0)
sm.graphics.plot_corr(corr_matrix, xnames=corr_df.index.tolist())
plt.show()
Explanation: That doesn't look good...
The plot above shows the total volume of ALL newly submitted domains. We see from the plot that the taper is a general overall effect due to a drop in new domain submissions into the MDL database. Given the recent anemic volume there might be another data source that has more active submissions.
Well, the anemic volume issue aside, we're going to carry on by looking at the correlations in volume over time. In other words, is the volume of reported exploits closely related to the volume of other exploits...
Correlations of Volume Over Time
<ul>
<li>**Prof. Farnsworth:** Behold! The Deathclock!
<li>**Leela:** Does it really work?
<li>**Prof. Farnsworth:** Well, it's occasionally off by a few seconds, what with "free will" and all.
</ul>
End of explanation
pylab.rcParams['figure.figsize'] = (14.0, 3.0)
print grouped[['zeus v1 trojan','zeus v1 config file','zeus v1 drop zone']].corr()
grouped[['zeus v1 trojan','zeus v1 config file','zeus v1 drop zone']].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
grouped[['zeus v2 trojan','zeus v2 config file','zeus v2 drop zone']].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
# Drilling down on the correlation between 'trojan' and 'phoenix exploit kit'
print grouped[['trojan','phoenix exploit kit']].corr()
grouped[['trojan','phoenix exploit kit']].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
Explanation: Discussion of Correlation Matrix
The two sets of 3x3 red blocks on the lower right make intuitive sense, Zeus config file, drop zone and trojan show almost perfect volume over time correlation.
End of explanation |
658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2l', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2L
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
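# Example (placeholder name/email only - replace with the actual document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")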
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
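# Example (placeholder name/email only - replace with the actual contributors):
# DOC.set_contributor("John Smith", "john.smith@example.org")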
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
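# When the document is ready, switch to: DOC.set_publication_status(1)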
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
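# Example (illustrative placeholder only, not actual MIROC-ES2L documentation):
# DOC.set_value("Free-text overview of the atmosphere component goes here.")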
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
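# Example (illustrative only - pick the choice that applies to this model):
# DOC.set_value("AGCM")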
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
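# Example (illustrative only; for 1.N properties one set_value call per selected choice is assumed):
# DOC.set_value("primitive equations")
# DOC.set_value("hydrostatic")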
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
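# Example (placeholder value only):
# DOC.set_value(40)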
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
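# Example (placeholder value only):
# DOC.set_value(True)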
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
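# Example (placeholder value only):
# DOC.set_value("20 minutes")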
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
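# Example (illustrative only; one set_value call per prognostic variable is assumed):
# DOC.set_value("surface pressure")
# DOC.set_value("wind components")
# DOC.set_value("temperature")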
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
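# Example (placeholder value only):
# DOC.set_value(15)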
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
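# Example (placeholder value only, e.g. a 2nd order closure):
# DOC.set_value(2)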
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Whether the boundary layer turbulence scheme uses a counter-gradient term
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
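For boolean properties such as this one, the instructions above show DOC.set_value(value) without quotes, so a filled-in cell would look like the sketch below (True is an assumed illustrative value, not a statement about any particular model):
# Hypothetical filled-in version of the cell above (illustrative value only)
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
DOC.set_value(True)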
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Evaluation, Scoring Metrics, and Dealing with Imbalanced Classes
In the previous notebook, we already went into some detail on how to evaluate a model and how to pick the best model. So far, we assumed that we were given a performance measure, a measure of the quality of the model. What measure one should use is not always obvious, though.
The default scores in scikit-learn are accuracy for classification, which is the fraction of correctly classified samples, and r2 for regression, which is the coefficient of determination.
These are reasonable default choices in many scenarios; however, depending on our task, these are not always the definitive or recommended choices.
Let's take a look at classification in more detail, going back to the application of classifying handwritten digits.
So, how about training a classifier and walking through the different ways we can evaluate it? Scikit-learn has many helpful methods in the sklearn.metrics module that can help us with this task
Step1: Here, we predicted 95.3% of samples correctly. For multi-class problems, it is often interesting to know which of the classes are hard to predict, and which are easy, or which classes get confused. One way to get more information about misclassifications is the confusion_matrix, which shows for each true class, how frequent a given predicted outcome is.
Step2: A plot is sometimes more readable
Step3: We can see that most entries are on the diagonal, which means that we predicted nearly all samples correctly. The off-diagonal entries show us that many eights were classified as ones, and that nines are likely to be confused with many other classes.
Another useful function is the classification_report which provides precision, recall, fscore and support for all classes.
Precision is how many of the predictions for a class are actually that class. With TP, FP, TN, FN standing for "true positive", "false positive", "true negative" and "false negative" respectively
Step4: These metrics are helpful in two particular cases that come up often in practice
Step5: As a toy example, let's say we want to classify the digits three against all other digits
Step6: Now we run cross-validation on a classifier to see how well it does
Step7: Our classifier is 90% accurate. Is that good? Or bad? Keep in mind that 90% of the data is "not three". So let's see how well a dummy classifier does, that always predicts the most frequent class
Step8: Also 90% (as expected)! So one might think that this means our classifier is not very good, as it doesn't do better than a simple strategy that doesn't even look at the data.
That would be judging too quickly, though. Accuracy is simply not a good way to evaluate classifiers for imbalanced datasets!
Step9: ROC Curves
A much better measure is using the so-called ROC (Receiver operating characteristics) curve. A roc-curve works with uncertainty outputs of a classifier, say the "decision_function" of the SVC we trained above. Instead of making a cut-off at zero and looking at classification outcomes, it looks at every possible cut-off and records how many true positive predictions there are, and how many false positive predictions there are.
The following plot compares the roc curve of three parameter settings of our classifier on the "three vs rest" task.
Step10: With a very high decision threshold, there will be few false positives, but also few true positives, while with a very low threshold, both the true positive rate and the false positive rate will be high. So in general, the curve will go from the lower left to the upper right. A diagonal line reflects chance performance, while the goal is to be as much in the top left corner as possible. This means giving a higher decision_function value to all positive samples than to any negative sample.
In this sense, this curve only considers the ranking of the positive and negative samples, not the actual value.
As you can see from the curves and the accuracy values in the legend, even though all classifiers have the same accuracy, 89%, which is even lower than the dummy classifier, one of them has a perfect roc curve, while one of them performs on chance level.
For doing grid-search and cross-validation, we usually want to condense our model evaluation into a single number. A good way to do this with the roc curve is to use the area under the curve (AUC).
We can simply use this in cross_val_score by specifying scoring="roc_auc"
Step11: Built-In and custom scoring functions
There are many more scoring methods available, which are useful for different kinds of tasks. You can find them in the "SCORERS" dictionary. The online documentation explains all of them.
Step12: It is also possible to define your own scoring metric. Instead of a string, you can provide a callable as the scoring parameter, that is, an object with a __call__ method or a function.
It needs to take a model, a test-set features X_test and test-set labels y_test, and return a float. Higher floats are taken to mean better models.
Let's reimplement the standard accuracy score
Step13: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=2)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=1,
stratify=y,
test_size=0.25)
classifier = LinearSVC(random_state=1).fit(X_train, y_train)
y_test_pred = classifier.predict(X_test)
print("Accuracy: {}".format(classifier.score(X_test, y_test)))
Explanation: Model Evaluation, Scoring Metrics, and Dealing with Imbalanced Classes
In the previous notebook, we already went into some detail on how to evaluate a model and how to pick the best model. So far, we assumed that we were given a performance measure, a measure of the quality of the model. What measure one should use is not always obvious, though.
The default scores in scikit-learn are accuracy for classification, which is the fraction of correctly classified samples, and r2 for regression, which is the coefficient of determination.
These are reasonable default choices in many scenarios; however, depending on our task, these are not always the definitive or recommended choices.
Let's take a look at classification in more detail, going back to the application of classifying handwritten digits.
So, how about training a classifier and walking through the different ways we can evaluate it? Scikit-learn has many helpful methods in the sklearn.metrics module that can help us with this task:
End of explanation
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_test_pred)
Explanation: Here, we predicted 95.3% of samples correctly. For multi-class problems, it is often interesting to know which of the classes are hard to predict, and which are easy, or which classes get confused. One way to get more information about misclassifications is the confusion_matrix, which shows for each true class, how frequent a given predicted outcome is.
End of explanation
plt.matshow(confusion_matrix(y_test, y_test_pred), cmap="Blues")
plt.colorbar(shrink=0.8)
plt.xticks(range(10))
plt.yticks(range(10))
plt.xlabel("Predicted label")
plt.ylabel("True label");
Explanation: A plot is sometimes more readable:
End of explanation
from sklearn.metrics import classification_report
print(classification_report(y_test, y_test_pred))
Explanation: We can see that most entries are on the diagonal, which means that we predicted nearly all samples correctly. The off-diagonal entries show us that many eights were classified as ones, and that nines are likely to be confused with many other classes.
Another useful function is the classification_report which provides precision, recall, fscore and support for all classes.
Precision is how many of the predictions for a class are actually that class. With TP, FP, TN, FN standing for "true positive", "false positive", "true negative" and "false negative" respectively:
Precision = TP / (TP + FP)
Recall is how many of the true positives were recovered:
Recall = TP / (TP + FN)
F1-score is the harmonic mean of precision and recall:
F1 = 2 x (precision x recall) / (precision + recall)
All of these values lie in the closed interval [0, 1], where 1 means a perfect score.
End of explanation
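To make these formulas concrete, here is a minimal sketch on a small set of invented binary labels (the arrays below are made up purely for illustration); the hand-computed values should agree with the helpers in sklearn.metrics:
from sklearn.metrics import precision_score, recall_score, f1_score
y_true_demo = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred_demo = np.array([1, 0, 1, 0, 1, 0, 1, 0])
# true positive, false positive and false negative counts for the "1" class
TP = np.sum((y_true_demo == 1) & (y_pred_demo == 1))
FP = np.sum((y_true_demo == 0) & (y_pred_demo == 1))
FN = np.sum((y_true_demo == 1) & (y_pred_demo == 0))
print(TP / (TP + FP), precision_score(y_true_demo, y_pred_demo))
print(TP / (TP + FN), recall_score(y_true_demo, y_pred_demo))
# F1 written as 2*TP / (2*TP + FP + FN), algebraically the same as 2*(precision*recall)/(precision+recall)
print(2 * TP / (2 * TP + FP + FN), f1_score(y_true_demo, y_pred_demo))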
np.bincount(y) / y.shape[0]
Explanation: These metrics are helpful in two particular cases that come up often in practice:
1. Imbalanced classes, that is one class might be much more frequent than the other.
2. Asymmetric costs, that is one kind of error is much more "costly" than the other.
Let's have a look at 1. first. Say we have a class imbalance of 1:9, which is rather mild (think about ad-click-prediction where maybe 0.001% of ads might be clicked):
End of explanation
X, y = digits.data, digits.target == 3
Explanation: As a toy example, let's say we want to classify the digits three against all other digits:
End of explanation
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
cross_val_score(SVC(), X, y)
Explanation: Now we run cross-validation on a classifier to see how well it does:
End of explanation
from sklearn.dummy import DummyClassifier
cross_val_score(DummyClassifier(strategy="most_frequent"), X, y)
Explanation: Our classifier is 90% accurate. Is that good? Or bad? Keep in mind that 90% of the data is "not three". So let's see how well a dummy classifier does, that always predicts the most frequent class:
End of explanation
np.bincount(y) / y.shape[0]
Explanation: Also 90% (as expected)! So one might think that this means our classifier is not very good, as it doesn't do better than a simple strategy that doesn't even look at the data.
That would be judging too quickly, though. Accuracy is simply not a good way to evaluate classifiers for imbalanced datasets!
End of explanation
from sklearn.metrics import roc_curve, roc_auc_score
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
for gamma in [.01, .05, 1]:
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate (recall)")
svm = SVC(gamma=gamma).fit(X_train, y_train)
decision_function = svm.decision_function(X_test)
fpr, tpr, _ = roc_curve(y_test, decision_function)
acc = svm.score(X_test, y_test)
auc = roc_auc_score(y_test, svm.decision_function(X_test))
plt.plot(fpr, tpr, label="acc:%.2f auc:%.2f" % (acc, auc), linewidth=3)
plt.legend(loc="best");
Explanation: ROC Curves
A much better measure is using the so-called ROC (Receiver operating characteristics) curve. A roc-curve works with uncertainty outputs of a classifier, say the "decision_function" of the SVC we trained above. Instead of making a cut-off at zero and looking at classification outcomes, it looks at every possible cut-off and records how many true positive predictions there are, and how many false positive predictions there are.
The following plot compares the roc curve of three parameter settings of our classifier on the "three vs rest" task.
End of explanation
from sklearn.model_selection import cross_val_score
cross_val_score(SVC(), X, y, scoring="roc_auc")
Explanation: With a very high decision threshold, there will be few false positives, but also few true positives, while with a very low threshold, both the true positive rate and the false positive rate will be high. So in general, the curve will go from the lower left to the upper right. A diagonal line reflects chance performance, while the goal is to be as much in the top left corner as possible. This means giving a higher decision_function value to all positive samples than to any negative sample.
In this sense, this curve only considers the ranking of the positive and negative samples, not the actual value.
As you can see from the curves and the accuracy values in the legend, even though all classifiers have the same accuracy, 89%, which is even lower than the dummy classifier, one of them has a perfect roc curve, while one of them performs on chance level.
For doing grid-search and cross-validation, we usually want to condense our model evaluation into a single number. A good way to do this with the roc curve is to use the area under the curve (AUC).
We can simply use this in cross_val_score by specifying scoring="roc_auc":
End of explanation
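As a rough sanity check on the "area under the curve" reading of the AUC, the value reported by roc_auc_score should match the trapezoidal area under the (fpr, tpr) points returned by roc_curve. The sketch below reuses the svm, X_test and y_test variables left over from the loop above, so it reflects whichever gamma was fitted last:
fpr, tpr, thresholds = roc_curve(y_test, svm.decision_function(X_test))
# numerical trapezoid-rule area under the ROC curve vs. the library's AUC
print(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))
print(roc_auc_score(y_test, svm.decision_function(X_test)))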
from sklearn.metrics import SCORERS
print(SCORERS.keys())
Explanation: Built-In and custom scoring functions
There are many more scoring methods available, which are useful for different kinds of tasks. You can find them in the "SCORERS" dictionary. The online documentation explains all of them.
End of explanation
def my_accuracy_scoring(est, X, y):
return np.mean(est.predict(X) == y)
cross_val_score(SVC(), X, y, scoring=my_accuracy_scoring)
Explanation: It is also possible to define your own scoring metric. Instead of a string, you can provide a callable as the scoring parameter, that is, an object with a __call__ method or a function.
It needs to take a model, a test-set features X_test and test-set labels y_test, and return a float. Higher floats are taken to mean better models.
Let's reimplement the standard accuracy score:
End of explanation
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 0, 1, 1, 2, 2, 2, 2])
confusion_matrix(y_true, y_pred)
# %load solutions/16A_avg_per_class_acc.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
In previous sections, we typically used the accuracy measure to evaluate the performance of our classifiers. A related measure that we haven't talked about, yet, is the average-per-class accuracy (APCA). As we remember, the accuracy is defined as
$$ACC = \frac{TP+TN}{n},$$
where *n* is the total number of samples. This can be generalized to
$$ACC = \frac{T}{n},$$
where *T* is the number of all correct predictions in multi-class settings.
</li>
</ul>
![](figures/average-per-class.png)
<li>
Given the following arrays of "true" class labels and predicted class labels, can you implement a function that uses the accuracy measure to compute the average-per-class accuracy as shown below?
</li>
</div>
End of explanation |
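One way to approach the exercise, sketched here as a possible solution rather than the contents of the solutions/16A_avg_per_class_acc.py file: treat each class in turn as the positive class, compute the ordinary accuracy of that binarized problem, and average the results.
def average_per_class_accuracy(y_true, y_pred):
    # binary "class c vs. rest" accuracy, averaged over all classes present in y_true
    classes = np.unique(y_true)
    return np.mean([np.mean((y_true == c) == (y_pred == c)) for c in classes])
print(average_per_class_accuracy(y_true, y_pred))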
660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Summary" data-toc-modified-id="Summary-1"><span class="toc-item-num">1 </span>Summary</a></div><div class="lev1 toc-item"><a href="#Version-Control" data-toc-modified-id="Version-Control-2"><span class="toc-item-num">2 </span>Version Control</a></div><div class="lev1 toc-item"><a href="#Change-Log" data-toc-modified-id="Change-Log-3"><span class="toc-item-num">3 </span>Change Log</a></div><div class="lev1 toc-item"><a href="#Setup" data-toc-modified-id="Setup-4"><span class="toc-item-num">4 </span>Setup</a></div><div class="lev1 toc-item"><a href="#ExchangeList()" data-toc-modified-id="ExchangeList()-5"><span class="toc-item-num">5 </span>ExchangeList()</a></div><div class="lev2 toc-item"><a href="#Web-service-call" data-toc-modified-id="Web-service-call-51"><span class="toc-item-num">5.1 </span>Web service call</a></div><div class="lev3 toc-item"><a href="#Gather-elements" data-toc-modified-id="Gather-elements-511"><span class="toc-item-num">5.1.1 </span>Gather elements</a></div><div class="lev3 toc-item"><a href="#Get-data" data-toc-modified-id="Get-data-512"><span class="toc-item-num">5.1.2 </span>Get data</a></div><div class="lev3 toc-item"><a href="#Save-to-file" data-toc-modified-id="Save-to-file-513"><span class="toc-item-num">5.1.3 </span>Save to file</a></div><div class="lev3 toc-item"><a href="#Data-inspection" data-toc-modified-id="Data-inspection-514"><span class="toc-item-num">5.1.4 </span>Data inspection</a></div><div class="lev2 toc-item"><a href="#Helper-function" data-toc-modified-id="Helper-function-52"><span class="toc-item-num">5.2 </span>Helper function</a></div><div class="lev3 toc-item"><a href="#Usage" data-toc-modified-id="Usage-521"><span class="toc-item-num">5.2.1 </span>Usage</a></div><div class="lev2 toc-item"><a href="#Client-function" data-toc-modified-id="Client-function-53"><span class="toc-item-num">5.3 </span>Client function</a></div>
# Summary
Part of the blog series related to making web service calls to Eoddata.com. Overview of the web service can be found [here](http
Step1: Change Log
Date Created
Step2: ExchangeList()
Web service call
Step3: Gather elements
Step4: Get data
Step5: Save to file
Step6: Data inspection
Step7: Helper function
Step8: Usage
Step9: Client function | Python Code:
%run ../../code/version_check.py
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Summary" data-toc-modified-id="Summary-1"><span class="toc-item-num">1 </span>Summary</a></div><div class="lev1 toc-item"><a href="#Version-Control" data-toc-modified-id="Version-Control-2"><span class="toc-item-num">2 </span>Version Control</a></div><div class="lev1 toc-item"><a href="#Change-Log" data-toc-modified-id="Change-Log-3"><span class="toc-item-num">3 </span>Change Log</a></div><div class="lev1 toc-item"><a href="#Setup" data-toc-modified-id="Setup-4"><span class="toc-item-num">4 </span>Setup</a></div><div class="lev1 toc-item"><a href="#ExchangeList()" data-toc-modified-id="ExchangeList()-5"><span class="toc-item-num">5 </span>ExchangeList()</a></div><div class="lev2 toc-item"><a href="#Web-service-call" data-toc-modified-id="Web-service-call-51"><span class="toc-item-num">5.1 </span>Web service call</a></div><div class="lev3 toc-item"><a href="#Gather-elements" data-toc-modified-id="Gather-elements-511"><span class="toc-item-num">5.1.1 </span>Gather elements</a></div><div class="lev3 toc-item"><a href="#Get-data" data-toc-modified-id="Get-data-512"><span class="toc-item-num">5.1.2 </span>Get data</a></div><div class="lev3 toc-item"><a href="#Save-to-file" data-toc-modified-id="Save-to-file-513"><span class="toc-item-num">5.1.3 </span>Save to file</a></div><div class="lev3 toc-item"><a href="#Data-inspection" data-toc-modified-id="Data-inspection-514"><span class="toc-item-num">5.1.4 </span>Data inspection</a></div><div class="lev2 toc-item"><a href="#Helper-function" data-toc-modified-id="Helper-function-52"><span class="toc-item-num">5.2 </span>Helper function</a></div><div class="lev3 toc-item"><a href="#Usage" data-toc-modified-id="Usage-521"><span class="toc-item-num">5.2.1 </span>Usage</a></div><div class="lev2 toc-item"><a href="#Client-function" data-toc-modified-id="Client-function-53"><span class="toc-item-num">5.3 </span>Client function</a></div>
# Summary
Part of the blog series related to making web service calls to Eoddata.com. Overview of the web service can be found [here](http://ws.eoddata.com/data.asmx).
* **View the master post of this series to build a secure credentials file.** It is used in all posts related to this series.
* Download this blog post as a [jupyter notebook](https://adriantorrie.github.io/downloads/notebooks/eoddata/eoddata_web_service_calls_exchange_list.ipynb)
* Download the [class definition file](https://adriantorrie.github.io/downloads/code/eoddata.py) for an easy to use client, which is demonstrated below
* This post covers the `ExchangeList` call: http://ws.eoddata.com/data.asmx?op=ExchangeList
# Version Control
End of explanation
%run ../../code/eoddata.py
import pandas as pd
import requests as r
ws = 'http://ws.eoddata.com/data.asmx'
ns='http://ws.eoddata.com/Data'
with (Client()) as eoddata:
token = eoddata.get_token()
Explanation: Change Log
Date Created: 2017-03-25
Date of Change Change Notes
-------------- ----------------------------------------------------------------
2017-03-25 Initial draft
2017-04-02 - Changed any references for `get_exchange_list()` to `exchange_list()`
- Client class function returns data in fixed order now
Setup
End of explanation
session = r.Session()
call = 'ExchangeList'
kwargs = {'Token': token,}
pattern = ".//{%s}EXCHANGE"
url = '/'.join((ws, call))
response = session.get(url, params=kwargs, stream=True)
if response.status_code == 200:
root = etree.parse(response.raw).getroot()
session.close()
Explanation: ExchangeList()
Web service call
End of explanation
elements = root.findall(pattern %(ns))
Explanation: Gather elements
End of explanation
exchanges = sorted(element.get('Code') for element in elements)
exchanges
Explanation: Get data
End of explanation
with open('../../data/exchanges.csv', 'w') as f:
for element in elements:
f.write('"%s"\n' % '","'.join(element.attrib.values()))
Explanation: Save to file
End of explanation
for item in root.items():
print (item)
for element in root.iter():
print(element.attrib)
Explanation: Data inspection
End of explanation
def ExchangeList(session, token):
call = 'ExchangeList'
kwargs = {'Token': token,}
pattern = ".//{%s}EXCHANGE"
url = '/'.join((ws, call))
response = session.get(url, params=kwargs, stream=True)
if response.status_code == 200:
root = etree.parse(response.raw).getroot()
# extract the EXCHANGE elements from this response instead of reusing the module-level `elements` list
elements = root.findall(pattern % ns)
return sorted(element.get('Code') for element in elements)
Explanation: Helper function
End of explanation
session = r.session()
exchanges = ExchangeList(session, token)
exchanges
session.close()
Explanation: Usage
End of explanation
# pandas dataframe is returned
df = eoddata.exchange_list()
df.head()
Explanation: Client function
End of explanation |
661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: How old are the programmers that answered this survey?
Step2: What industries are these individuals working in?
Step3: What text editor do these individuals prefer?
Step4: What were the occupations of the people who answered this survey? What is the most popular occupation?
Step5: What is the least popular occupation?
Step6: Job Distribution by Gender -- woah too much data
Step7: How many males answered this survey? How many females answered this survey?
Step8: Is there a difference in years of experience based on one's gender?
Step9: How many full-stack web developers are male versus female?
Step10: What is the age distribution by gender? | Python Code:
# workon dataanalysis - my virtual environment
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# df = pd.read_table('34933-0001-Data.tsv')
odf = pd.read_csv('accreditation_2016_03.csv')
odf.head()
odf.columns
odf['Campus_City'].value_counts().head(10)
top_cities = odf['Campus_City'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
top_cities.set_title('Top 10 College Cities (By Number of Colleges in State)')
top_cities.set_xlabel('City')
top_cities.set_ylabel('# of Colleges')
plt.savefig('topcollegecities.png')
top_cities = odf['Campus_State'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
top_cities.set_title('Top 10 College States (By Number of Campuses in State)')
top_cities.set_xlabel('City')
top_cities.set_ylabel('# of Colleges')
plt.savefig('topcollegecities.png')
odf['Accreditation_Status'].value_counts()
df = pd.read_csv('Full Results - Stack Overflow Developer Survey - 2015 2.csv', encoding ='mac_roman')
df.head()
df.columns
df.info()
Explanation: Note: you can find my iPython Notebook for Dataset 1 here -> https://github.com/M0nica/2016-new-coder-survey
End of explanation
df['Age'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
Explanation: How old are the programmers that answered this survey?
End of explanation
df['Industry'].value_counts().head(10).plot(kind="barh", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
Explanation: What industries are these individuals working in?
End of explanation
df['Preferred text editor'].value_counts().head(10).plot(kind="barh", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df['Preferred text editor'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
# df['Training & Education: BS in CS'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
Explanation: What text editor do these individuals prefer?
End of explanation
df['Occupation'].value_counts()
Explanation: What were the occupations of the people who answered this survey? What is the most popular occupation?
End of explanation
df['Occupation'].value_counts(ascending=True)  # ascending order puts the least popular occupations first
Explanation: What is the least popular occupation?
End of explanation
df.groupby('Gender')['Occupation'].value_counts().plot(kind="bar", color = ['#599ad3', '#f9a65a']) # too mmuch data to appropriately display
Explanation: Job Distribution by Gender -- woah too much data
End of explanation
gender_df = df[(df['Gender'] == 'Male') | (df['Gender'] == 'Female')]
print(gender_df['Gender'].value_counts())
Explanation: How many males answered this survey? How many females answered this survey?
End of explanation
gender_df.groupby('Gender')['Years IT / Programming Experience'].value_counts().sort_values().plot(kind="bar", color = ['#599ad3', '#f9a65a'])
Explanation: Is there a difference in years of experience based on one's gender?
End of explanation
gender_df.groupby('Gender')['Occupation'].value_counts()
gender_df = gender_df[gender_df['Occupation'] == "Full-stack web developer"]
gender_df.groupby('Gender')['Occupation'].value_counts().plot(kind="bar", color = ['#599ad3', '#f9a65a'])
#gender_df.groupby('Gender')['Years IT / Programming Experience'].value_counts().plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df['Age'].value_counts()
Explanation: How many full-stack web developers are male versus female?
End of explanation
gender_df.groupby('Gender')['Age'].value_counts().sort_values().plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df["AgeScale"] = df["Age"].apply(str).replace("< 20", "0").apply(str).replace("20-24", "1").apply(str).replace("25-29", "2").apply(str).replace("30-34", "3").apply(str).replace("30-34", "3").apply(str).replace("35-39", "4").apply(str).replace("40-50", "5").apply(str).replace("51-60", "6").apply(str).replace("> 60", "7")
print(df["AgeScale"].head(10))
years_df =df[df['AgeScale'] != "Prefer not to disclose"]
years_df['AgeScale'] = years_df['AgeScale'].astype(float)
print(years_df.head())
years_df['Years IT / Programming Experience'].value_counts()
years_df['ExperienceRank'] = years_df['Years IT / Programming Experience'].apply(str).replace("Less than 1 year", "0").apply(str).replace("1 - 2 years", "1").apply(str).replace("2 - 5 years", "2").apply(str).replace("6 - 10 years", "3").apply(str).replace("11+ years", "4").astype(float)
# years_df.head()
years_df['ExperienceRank'].value_counts()
years_df['AgeScale'].value_counts()
#years_df['ExperienceRank'] = float(years_df['ExperienceRank'])
# years_df['AgeScale'] = float(years_df['AgeScale'])
# years_df['AgeScale'] = years_df['AgeScale'].apply(int)
#years_df['ExperienceRank'] = parseInt(years_df['ExperienceRank'])
#years_df['ExperienceRank'] = pd.Series(years_df['ExperienceRank'])
#years_df['AgeScale'] = pd.Series(years_df['AgeScale'])
moneyScatter = years_df.plot(kind='scatter', x='ExperienceRank', y='AgeScale', alpha=0.2) # categorical data does not display well on scatter plots
#moneyScatter.set_title('Distribution of Money Spent Amongst Respondents to the Survey by Age')
#moneyScatter.set_xlabel('Months Programming')
#moneyScatter.set_ylabel('Hours Spent Learning Each Week')
#plt.savefig('studyingovertime.png')
years_df['ExperienceRank'].describe()
years_df[['ExperienceRank','AgeScale']] = years_df[['ExperienceRank','AgeScale']].apply(pd.to_numeric)
# years_df.apply(lambda x: pd.to_numeric(x, errors='ignore'))
years_df['ExperienceRank'].describe()
years_df['ExperienceRank'].head()
years_df['AgeScale'].head()
Explanation: What is the age distribution by gender?
End of explanation |
662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NOTES
Based on the fitted curve model, compute the derivatives and find the age at which income peaks.
DOWNLOAD DATA
Step1: NEW VARIABLES FOR MODEL
Exploratory plots
Step2: PLOTS FOR LnINCOME ~ EDUC AND AGE
Step3: Models
I take the best-performing one to evaluate on the test set. Basically there are two possibilities, INDEC or ALTERNATIVO (in which we proposed not to bin ages and years of schooling into categories, but to use the variables directly together with their squares). I try each one with labor income (with and without a constant) and with the log of labor income.
1 - CEPAL with labor income
Step4: 2 - CEPAL with log labor income
Step5: 3 - CEPAL with total income
Step6: 4 - CEPAL with log total income
Step7: 5 - ALTERNATIVO with log total income
Step8: 6 - ALTERNATIVO with log labor income | Python Code:
#get data
getEPHdbf('t310')
data1 = pd.read_csv('data/cleanDatat310.csv')
data2 = categorize.categorize(data1)
data3 = schoolYears.schoolYears(data2)
data = make_dummy.make_dummy(data3)
dataModel = functionsForModels.prepareDataForModel(data)
dataModel.head()
Explanation: NOTES
Compute, based on the fitted curve model, the derivatives and the age at which income peaks
DOWNLOAD DATA
End of explanation
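As a quick illustration of the note above about computing the derivative and the age at which income peaks: for a specification of the form lnIncome ~ education + education2 + age + age2 (+ controls), the peak follows from setting the age derivative to zero. The sketch below assumes runModel returns a statsmodels-style fitted result with a .params attribute keyed by the variable names used later in this notebook; adapt it if the return type differs.

```python
# Sketch only: assumes a fitted result with .params indexed by 'age' and 'age2'.
# d(lnIncome)/d(age) = b_age + 2 * b_age2 * age  =>  peak at age* = -b_age / (2 * b_age2)
def peak_age(fitted_result):
    b_age = fitted_result.params['age']
    b_age2 = fitted_result.params['age2']
    return -b_age / (2.0 * b_age2)
```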
fig = plt.figure(figsize=(16,12))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
ax4 = fig.add_subplot(2,2,4)
ax1.plot(dataModel.education,dataModel.P47T,'ro')
ax1.set_ylabel('Ingreso total')
ax1.set_xlabel('Educacion')
ax2.plot(dataModel.age,dataModel.P47T,'ro')
ax2.set_xlabel('Edad')
ax3.plot(dataModel.education,dataModel.P21,'bo')
ax3.set_ylabel('Ingreso Laboral')
ax3.set_xlabel('Educacion')
ax4.plot(dataModel.age,dataModel.P21,'bo')
ax4.set_xlabel('Edad')
fig = plt.figure(figsize=(16,12))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
ax4 = fig.add_subplot(2,2,4)
sns.kdeplot(dataModel.P47T,ax=ax1,color = 'red')
sns.kdeplot(dataModel.lnIncomeT,ax=ax2,color = 'red')
sns.kdeplot(dataModel.P21,ax=ax3)
sns.kdeplot(dataModel.lnIncome,ax=ax4)
print 'mean:', dataModel.lnIncome.mean(), 'std:', dataModel.lnIncome.std()
print 'mean:', dataModel.P21.mean(), 'std:', dataModel.P21.std()
plt.boxplot(list(dataModel.P21), 0, 'gD')
Explanation: NEW VARIABLES FOR MODEL
Exploratory plots
End of explanation
g = sns.JointGrid(x="education", y="lnIncome", data=dataModel)
g.plot_joint(sns.regplot, order=2)
g.plot_marginals(sns.distplot)
g2 = sns.JointGrid(x="age", y="lnIncome", data=dataModel)
g2.plot_joint(sns.regplot, order=2)
g2.plot_marginals(sns.distplot)
Explanation: PLOTS FOR LnINCOME ~ EDUC AND AGE
End of explanation
dataModel1 = functionsForModels.runModel(dataModel, income = 'P21')
Explanation: Models
I take the best-performing specification to evaluate on the test set. There are basically two possibilities, INDEC or ALTERNATIVE (the latter is the one we proposed: do not bin ages and years of schooling, but use the variables directly together with their squares). I test each with labor income (with and without a constant) and with the log of labor income.
1 - CEPAL with labor income
End of explanation
dataModel2 = functionsForModels.runModel(dataModel, income = 'lnIncome', variables= [
'primary','secondary','university',
'male_14to24','male_25to34',
'female_14to24', 'female_25to34', 'female_35more'])
Explanation: 2 - CEPAL with log labor income
End of explanation
dataModel3 = functionsForModels.runModel(dataModel, income = 'P47T')
Explanation: 3 - CEPAL with total income
End of explanation
dataModel4 = functionsForModels.runModel(dataModel, income = 'lnIncomeT')
Explanation: 4 - CEPAL with log total income
End of explanation
dataModel5 = functionsForModels.runModel(dataModel, income = 'lnIncomeT', variables=['education','education2',
'age','age2','female'])
Explanation: 5 - ALTERNATIVE with log total income
End of explanation
dataModel6 = functionsForModels.runModel(dataModel, income = 'lnIncome', variables=['education','education2',
'age','age2','female'])
Explanation: 6 - ALTERNATIVO con log Income laboral
End of explanation |
663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tensor Transformations
Step1: NOTE on notation
* _x, _y, _z, ...
Step2: Q2. Let X be a tensor [[1, 2], [3, 4]] of int32. Convert the data type of X to float64.
Step3: Q3. Let X be a tensor [[1, 2], [3, 4]] of int32. Convert the data type of X to float32.
Step4: Q4. Let X be a tensor [[1, 2], [3, 4]] of float32. Convert the data type of X to int32.
Step5: Q5. Let X be a tensor [[1, 2], [3, 4]] of float32. Convert the data type of X to int64.
Step6: Shapes and Shaping
Q6. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]. Create a tensor representing the shape of X.
Step7: Q7. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]) and y be a tensor [10, 20]. Create a list of tensors representing the shape of X and y.
Step8: Q8. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]. Create a tensor representing the size (=total number of elements) of X.
Step9: Q9. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]. Create a tensor representing the rank (=number of dimensions) of X.
Step10: Q10. Let X be tf.ones([10, 10, 3]). Reshape X so that the size of the second dimension equals 150.
Step11: Q11. Let X be tf.ones([10, 10, 1, 1]). Remove all the dimensions of size 1 in X.
Step12: Q12. Let X be tf.ones([10, 10, 1, 1]). Remove only the third dimension in X.
Step13: Q13. Let X be tf.ones([10, 10]). Add a dimension of 1 at the end of X.
Step14: Slicing and Joining
Q14. Let X be a tensor<br/>
[[[1, 1, 1], [2, 2, 2]],<br/>
[[3, 3, 3], [4, 4, 4]],<br/>
[[5, 5, 5], [6, 6, 6]]].<br/>
Extract [[[3, 3, 3]], [[5, 5, 5]]] from X.
Step15: Q15. Let X be a tensor of<br/>
[[ 1 2]<br />
[ 3 4]<br />
[ 5 6]<br />
[ 7 8]<br />
[ 9 10]].<br />
Extract [[1, 2], [5, 6], [9, 10]] from X.
Step16: Q16. Let X be a tensor of<br/>
[[ 1 2 3 4 5]<br />
[ 6 7 8 9 10]].<br />
Split X into 5 same-sized tensors along the second dimension.
Step17: Q17. Lex X be a tensor<br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/>
Create a tensor looking like <br/>
[[ 1 2 3 1 2 3 1 2 3 ]<br/>
[ 4 5 6 4 5 6 4 5 6 ]].
Step18: Q18. Lex X be a tensor <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/>
Pad 2 * 0's before the first dimension, 3 * 0's after the second dimension.
Step19: Q19. Lex X be a tensor <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/>
and Y be a tensor<br/>
[[ 7 8 9]<br/>
[10 11 12]].<br/>
Concatenate X and Y so that the new tensor looks like [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]].
Step20: Q20. Let x, y, and z be tensors [1, 4], [2, 5], and [3, 6], respectively. <br/>Create a single tensor from these such that it looks [[1, 2, 3], [4, 5, 6]].
Step21: Q21. Let X be a tensor [[1, 2, 3], [4, 5, 6]]. Convert X into Y such that Y looks like [[1, 4], [2, 5], [3, 6]].
Step22: Q22. Given X below, reverse the sequence along the second axis except the zero-paddings.
Step23: Q23. Given X below, reverse the last dimension.
Step24: Q24. Given X below, permute its dimensions such that the new tensor has shape (3, 1, 2).
Step25: Q25. Given X, below, get the first, and third rows.
Step26: Q26. Given X below, get the elements 5 and 7.
Step27: Q27. Let x be a tensor [2, 2, 1, 5, 4, 5, 1, 2, 3]. Get the tensors of unique elements and their counts.
Step28: Q28. Let x be a tensor [1, 2, 3, 4, 5]. Divide the elements of x into a list of tensors that looks like [[3, 5], [1], [2, 4]].
Step29: Q29. Let X be a tensor [[7, 8], [5, 6]] and Y be a tensor [[1, 2], [3, 4]]. Create a single tensor looking like [[1, 2], [3, 4], [5, 6], [7, 8]].
Step30: Q30. Let x be a tensor [0, 1, 2, 3] and y be a tensor [True, False, False, True].<br/>
Apply mask y to x.
Step31: Q31. Let X be a tensor [[0, 5, 3], [4, 2, 1]]. Convert X into one-hot. | Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
Explanation: Tensor Transformations
End of explanation
_X = np.array([["1.1", "2.2"], ["3.3", "4.4"]])
X = tf.constant(_X)
out = tf.string_to_number(X)
print(out.eval())
assert np.allclose(out.eval(), _X.astype(np.float32))
Explanation: NOTE on notation
* _x, _y, _z, ...: NumPy 0-d or 1-d arrays
* _X, _Y, _Z, ...: NumPy 2-d or higer dimensional arrays
* x, y, z, ...: 0-d or 1-d tensors
* X, Y, Z, ...: 2-d or higher dimensional tensors
Casting
Q1. Let X be a tensor of [["1.1", "2.2"], ["3.3", "4.4"]]. Convert the datatype of X to float32.
End of explanation
_X = np.array([[1, 2], [3, 4]], dtype=np.int32)
X = tf.constant(_X)
out1 = tf.to_double(X)
out2 = tf.cast(X, tf.float64)
assert np.allclose(out1.eval(), out2.eval())
print(out1.eval())
assert np.allclose(out1.eval(), _X.astype(np.float64))
Explanation: Q2. Let X be a tensor [[1, 2], [3, 4]] of int32. Convert the data type of X to float64.
End of explanation
_X = np.array([[1, 2], [3, 4]], dtype=np.int32)
X = tf.constant(_X)
out1 = tf.to_float(X)
out2 = tf.cast(X, tf.float32)
assert np.allclose(out1.eval(), out2.eval())
print(out1.eval())
assert np.allclose(out1.eval(), _X.astype(np.float32))
Explanation: Q3. Let X be a tensor [[1, 2], [3, 4]] of int32. Convert the data type of X to float32.
End of explanation
_X = np.array([[1, 2], [3, 4]], dtype=np.float32)
X = tf.constant(_X)
out1 = tf.to_int32(X)
out2 = tf.cast(X, tf.int32)
assert np.allclose(out1.eval(), out2.eval())
print(out1.eval())
assert np.allclose(out1.eval(), _X.astype(np.int32))
Explanation: Q4. Let X be a tensor [[1, 2], [3, 4]] of float32. Convert the data type of X to int32.
End of explanation
_X = np.array([[1, 2], [3, 4]], dtype=np.float32)
X = tf.constant(_X)
out1 = tf.to_int64(X)
out2 = tf.cast(X, tf.int64)
assert np.allclose(out1.eval(), out2.eval())
print(out1.eval())
assert np.allclose(out1.eval(), _X.astype(np.int64))
Explanation: Q5. Let X be a tensor [[1, 2], [3, 4]] of float32. Convert the data type of X to int64.
End of explanation
_X = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]])
X = tf.constant(_X)
out = tf.shape(X)
print(out.eval())
assert np.allclose(out.eval(), _X.shape) # tf.shape() == np.ndarray.shape
Explanation: Shapes and Shaping
Q6. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]. Create a tensor representing the shape of X.
End of explanation
X = tf.constant([[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]])
y = tf.constant([10, 20])
out_X, out_y = tf.shape_n([X, y])
print(out_X.eval(), out_y.eval())
Explanation: Q7. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]) and y be a tensor [10, 20]. Create a list of tensors representing the shape of X and y.
End of explanation
_X = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]])
X = tf.constant(_X)
out = tf.size(X)
print(out.eval())
assert out.eval() == _X.size # tf.size() == np.ndarray.size
Explanation: Q8. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]. Create a tensor representing the size (=total number of elements) of X.
End of explanation
_X = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]])
X = tf.constant(_X)
out = tf.rank(X)
print(out.eval())
assert out.eval() == _X.ndim # tf.rank() == np.ndarray.ndim
Explanation: Q9. Let X be a tensor of [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]. Create a tensor representing the rank (=number of dimensions) of X.
End of explanation
X = tf.ones([10, 10, 3])
out = tf.reshape(X, [-1, 150])
print(out.eval())
assert np.allclose(out.eval(), np.reshape(np.ones([10, 10, 3]), [-1, 150]))
# tf.reshape(tensor, shape) == np.reshape(array, shape)
Explanation: Q10. Let X be tf.ones([10, 10, 3]). Reshape X so that the size of the second dimension equals 150.
End of explanation
X = tf.ones([10, 10, 1, 1])
out = tf.squeeze(X)
print(out.eval().shape)
assert np.allclose(out.eval(), np.squeeze(np.ones([10, 10, 1, 1])))
# tf.squeeze(tensor) == np.squeeze(array)
Explanation: Q11. Let X be tf.ones([10, 10, 1, 1]). Remove all the dimensions of size 1 in X.
End of explanation
X = tf.ones([10, 10, 1, 1])
out = tf.squeeze(X, [2])
print(out.eval().shape)
assert np.allclose(out.eval(), np.squeeze(np.ones([10, 10, 1, 1]), 2))
# tf.squeeze(tensor, axis) == np.squeeze(array, axis)
Explanation: Q12. Let X be tf.ones([10, 10, 1, 1]). Remove only the third dimension in X.
End of explanation
X = tf.ones([10, 10])
out = tf.expand_dims(X, -1)
print(out.eval().shape)
assert np.allclose(out.eval(), np.expand_dims(np.ones([10, 10]), -1))
# tf.expand_dims(tensor, axis) == np.expand_dims(array, axis)
Explanation: Q13. Let X be tf.ones([10, 10]). Add a dimension of 1 at the end of X.
End of explanation
_X = np.array([[[1, 1, 1],
[2, 2, 2]],
[[3, 3, 3],
[4, 4, 4]],
[[5, 5, 5],
[6, 6, 6]]])
X = tf.constant(_X)
out = tf.slice(X, [1, 0, 0], [2, 1, 3])
print(out.eval())
Explanation: Slicing and Joining
Q14. Let X be a tensor<br/>
[[[1, 1, 1], [2, 2, 2]],<br/>
[[3, 3, 3], [4, 4, 4]],<br/>
[[5, 5, 5], [6, 6, 6]]].<br/>
Extract [[[3, 3, 3]], [[5, 5, 5]]] from X.
End of explanation
_X = np.arange(1, 11).reshape([5, 2])
X = tf.convert_to_tensor(_X)
out = tf.strided_slice(X, begin=[0], end=[5], strides=[2])
print(out.eval())
assert np.allclose(out.eval(), _X[[0, 2, 4]])
Explanation: Q15. Let X be a tensor of<br/>
[[ 1 2]<br />
[ 3 4]<br />
[ 5 6]<br />
[ 7 8]<br />
[ 9 10]].<br />
Extract [[1, 2], [5, 6], [9, 10]] from X.
End of explanation
_X = np.arange(1, 11).reshape([2, 5])
X = tf.convert_to_tensor(_X)
out = tf.split(X, 5, axis=1) # Note that the order of arguments has changed in TensorFlow 1.0
print([each.eval() for each in out])
comp = np.array_split(_X, 5, 1)
# tf.split(tensor, num_or_size_splits, axis) == np.array_split(array, indices_or_sections, axis=0)
assert np.allclose([each.eval() for each in out], comp)
Explanation: Q16. Let X be a tensor of<br/>
[[ 1 2 3 4 5]<br />
[ 6 7 8 9 10]].<br />
Split X into 5 same-sized tensors along the second dimension.
End of explanation
_X = np.arange(1, 7).reshape((2, 3))
X = tf.convert_to_tensor(_X)
out = tf.tile(X, [1, 3])
print(out.eval())
assert np.allclose(out.eval(), np.tile(_X, [1, 3]))
# tf.tile(tensor, multiples) == np.tile(array, reps)
Explanation: Q17. Let X be a tensor<br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/>
Create a tensor looking like <br/>
[[ 1 2 3 1 2 3 1 2 3 ]<br/>
[ 4 5 6 4 5 6 4 5 6 ]].
End of explanation
_X = np.arange(1, 7).reshape((2, 3))
X = tf.convert_to_tensor(_X)
out = tf.pad(X, [[2, 0], [0, 3]])
print(out.eval())
assert np.allclose(out.eval(), np.pad(_X, [[2, 0], [0, 3]], 'constant', constant_values=[0, 0]))
Explanation: Q18. Let X be a tensor <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/>
Pad 2 * 0's before the first dimension, 3 * 0's after the second dimension.
End of explanation
_X = np.array([[1, 2, 3], [4, 5, 6]])
_Y = np.array([[7, 8, 9], [10, 11, 12]])
X = tf.constant(_X)
Y = tf.constant(_Y)
out = tf.concat([X, Y], 1) # Note that the order of arguments has changed in TF 1.0!
print(out.eval())
assert np.allclose(out.eval(), np.concatenate((_X, _Y), 1))
# tf.concat == np.concatenate
Explanation: Q19. Let X be a tensor <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/>
and Y be a tensor<br/>
[[ 7 8 9]<br/>
[10 11 12]].<br/>
Concatenate X and Y so that the new tensor looks like [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]].
End of explanation
x = tf.constant([1, 4])
y = tf.constant([2, 5])
z = tf.constant([3, 6])
out = tf.stack([x, y, z], 1)
print(out.eval())
Explanation: Q20. Let x, y, and z be tensors [1, 4], [2, 5], and [3, 6], respectively. <br/>Create a single tensor from these such that it looks [[1, 2, 3], [4, 5, 6]].
End of explanation
X = tf.constant([[1, 2, 3], [4, 5, 6]])
Y = tf.unstack(X, axis=1)
print([each.eval() for each in Y])
Explanation: Q21. Let X be a tensor [[1, 2, 3], [4, 5, 6]]. Convert X into Y such that Y looks like [[1, 4], [2, 5], [3, 6]].
End of explanation
X = tf.constant(
[[[0, 0, 1],
[0, 1, 0],
[0, 0, 0]],
[[0, 0, 1],
[0, 1, 0],
[1, 0, 0]]])
out = tf.reverse_sequence(X, [2, 3], seq_axis=1, batch_axis=0)
out.eval()
Explanation: Q22. Given X below, reverse the sequence along the second axis except the zero-paddings.
End of explanation
_X = np.arange(1, 1*2*3*4 + 1).reshape((1, 2, 3, 4))
X = tf.convert_to_tensor(_X)
out = tf.reverse(X, [-1]) #Note that tf.reverse has changed its behavior in TF 1.0.
print(out.eval())
assert np.allclose(out.eval(), _X[:, :, :, ::-1])
Explanation: Q23. Given X below, reverse the last dimension.
End of explanation
_X = np.ones((1, 2, 3))
X = tf.convert_to_tensor(_X)
out = tf.transpose(X, [2, 0, 1])
print(out.eval().shape)
assert np.allclose(out.eval(), np.transpose(_X))
Explanation: Q24. Given X below, permute its dimensions such that the new tensor has shape (3, 1, 2).
End of explanation
_X = np.arange(1, 10).reshape((3, 3))
X = tf.convert_to_tensor(_X)
out1 = tf.gather(X, [0, 2])
out2 = tf.gather_nd(X, [[0], [2]])
assert np.allclose(out1.eval(), out2.eval())
print(out1.eval())
assert np.allclose(out1.eval(), _X[[0, 2]])
Explanation: Q25. Given X, below, get the first, and third rows.
End of explanation
_X = np.arange(1, 10).reshape((3, 3))
X = tf.convert_to_tensor(_X)
out = tf.gather_nd(X, [[1, 1], [2, 0]])
print(out.eval())
assert np.allclose(out.eval(), _X[[1, 2], [1, 0]])
Explanation: Q26. Given X below, get the elements 5 and 7.
End of explanation
x = tf.constant([2, 2, 1, 5, 4, 5, 1, 2, 3])
out1, _, out2 = tf.unique_with_counts(x)
print(out1.eval(), out2.eval())
Explanation: Q27. Let x be a tensor [2, 2, 1, 5, 4, 5, 1, 2, 3]. Get the tensors of unique elements and their counts.
End of explanation
x = tf.constant([1, 2, 3, 4, 5])
out = tf.dynamic_partition(x, [1, 2, 0, 2, 0], 3)
print([each.eval() for each in out])
Explanation: Q28. Let x be a tensor [1, 2, 3, 4, 5]. Divide the elements of x into a list of tensors that looks like [[3, 5], [1], [2, 4]].
End of explanation
X = tf.constant([[7, 8], [5, 6]])
Y = tf.constant([[1, 2], [3, 4]])
out = tf.dynamic_stitch([[3, 2], [0, 1]], [X, Y])
print(out.eval())
Explanation: Q29. Let X be a tensor [[7, 8], [5, 6]] and Y be a tensor [[1, 2], [3, 4]]. Create a single tensor looking like [[1, 2], [3, 4], [5, 6], [7, 8]].
End of explanation
_x = np.array([0, 1, 2, 3])
_y = np.array([True, False, False, True])
x = tf.convert_to_tensor(_x)
y = tf.convert_to_tensor(_y)
out = tf.boolean_mask(x, y)
print(out.eval())
assert np.allclose(out.eval(), _x[_y])
Explanation: Q30. Let x be a tensor [0, 1, 2, 3] and y be a tensor [True, False, False, True].<br/>
Apply mask y to x.
End of explanation
X = tf.constant([[0, 5, 3], [4, 2, 1]])
out = tf.one_hot(X, 6)
print(out.eval())
Explanation: Q31. Let X be a tensor [[0, 5, 3], [4, 2, 1]]. Convert X into one-hot.
End of explanation |
664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Assignment 2 - Building CNNs
ASSIGNMENT DEADLINE
Step4: Convolution
Step5: FOR SUBMISSION
Step7: Aside
Step8: Convolution
Step9: ReLU layer
Step10: FOR SUBMISSION
Step11: Max pooling
Step12: Convolutional "sandwich" layers
Here we introduce the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file code_base/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks. With a modular design, it is very convenient to combine layers according to your network architecture.
The following code test the sandwich layers of conv_relu_pool_forward, conv_relu_pool_backward, conv_relu_forward and conv_relu_backward.
Step13: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file code_base/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug
Step14: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer. Note
Step15: Solver
Following a modular design, for this assignment we have split the logic for training models into a separate class. Open the file code_base/solver.py and read through it to familiarize yourself with the API. We have provided the functions for the various optimization techniques such as sgd and Adam.
Overfit small data
A nice trick is to train your model with just a few training samples to check that your code is working. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
Step16: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step17: Train the net on full CIFAR2 data
By training the three-layer convolutional network for one epoch, you should achieve about 80% on the validation set. You may have to wait about 2 minutes for training to be completed.
Step18: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following
Step19: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
Dropout forward pass
In the file code_base/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes. Refer to slide 19 of lecture 5 for the implementation details. p refers to the probability of setting a neuron to zero. We will follow the Caffe convention where we multiply the outputs by 1/(1-p) during training.
FOR SUBMISSION
Step20: Dropout backward pass
In the file code_base/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
FOR SUBMISSION | Python Code:
# A bit of setup
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from code_base.classifiers.cnn import *
from code_base.data_utils import get_CIFAR2_data
from code_base.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from code_base.layers import *
from code_base.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR2 (airplane and bird) data.
data = get_CIFAR2_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
Explanation: Assignment 2 - Building CNNs
ASSIGNMENT DEADLINE: 19 OCT 2017 (THU) 11.59PM
In this assignment we will be coding the building blocks for the convolutional neural network and putting them together to train a CNN on the CIFAR2 dataset (taking just 2 classes (airplane and bird) from the original 10 classes).
Please note that we have changed to using just 2 classes (airplane and bird) from the original CIFAR10 dataset. get_cifar2_data code in data_utils.py will load the 2-class data accordingly.
We would like to credit the Stanford CS231n team as much of our code backbone is from their Assignment 2. The teaching team at Stanford has kindly agreed for us to adapt their assignment and code. You will find that we adopt a modular design of the code. You will implement different layer types in isolation and then combine them together into models with different architectures.
For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
  """ Receive inputs x and weights w """
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
  """ Receive derivative of loss with respect to outputs and cache,
  and compute derivative with respect to inputs. """
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
Submission details
Since we have not restricted the usage of other programming languages, our submission format will need to be in output text form (similar to the previous assignment). For each question, we will provide the input arguments and you have to provide a text file containing the corresponding output, to a certain precision.
This iPython notebook serves to:
- explain the questions
- explain the function APIs
- provide helper functions to piece functions together and check your code
- provide helper functions to load and save arrays as csv files for submission
Hence, we strongly encourage you to use Python for this assignment as you will only need to code the relevant parts and it will reduce your workload significantly. For non-Python users, some of the cells here are for illustration purpose, you do not have to replicate the demos.
The input files will be in the input_files folder, and your output files should go into output_files folder. Similar to assignment 1, use np.float32 if you are using Python and use at least 16 significant figures for your outputs. For Python users, if you use the accompanying printing functions when using np.float32 variables, you should be ok.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 2}
out, _ = conv_forward(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around 2e-8
print('Testing conv_forward')
print('difference: ', rel_error(out, correct_out))
Explanation: Convolution: Forward pass
In the file code_base/layers.py, implement the forward pass for a convolutional layer in the function conv_forward.
The input consists of N data points, each with C channels, height H and width W. We convolve each input with F different filters, where each filter spans all C channels and has height HH and width WW.
Input:
- x: Input data of shape (N, C, H, W)
w: Filter weights of shape (F, C, HH, WW)
b: Biases, of shape (F,)
conv_param contains the stride and padding width:
'stride': The number of pixels between adjacent receptive fields in the horizontal and vertical directions.
'pad': The number of pixels that will be used to zero-pad the input in each x-y direction. We will use the same definition in lecture notes 3b, slide 13 (ie. same padding on both sides). Hence p=2 means a 1-pixel border of padding with zeros.
WARNING: Please implement the matrix product method of convolution as shown in Lecture notes 4, slide 38. The naive version of implementing a sliding window will be too slow when you try to train the whole CNN in later sections.
You can test your implementation by running the following:
End of explanation
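To illustrate the matrix-product (im2col) idea referred to in the warning above, here is a rough sketch: each receptive field is unrolled into one row of a matrix so that the whole convolution becomes a single matrix multiplication. This is not the graded conv_forward implementation; the function name and the even-split interpretation of pad are assumptions made only for this example.

```python
import numpy as np

def conv_forward_im2col_sketch(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    # 'pad' is taken as total padding split evenly (p = pad // 2 per side),
    # mirroring the p=2 -> 1-pixel border convention described above.
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    p = pad // 2
    x_pad = np.pad(x, ((0, 0), (0, 0), (p, p), (p, p)), mode='constant')
    H_out = (H + 2 * p - HH) // stride + 1
    W_out = (W + 2 * p - WW) // stride + 1
    # im2col: one row per output location, one column per filter weight
    cols = np.zeros((N * H_out * W_out, C * HH * WW))
    row = 0
    for n in range(N):
        for i in range(H_out):
            for j in range(W_out):
                patch = x_pad[n, :, i * stride:i * stride + HH, j * stride:j * stride + WW]
                cols[row] = patch.reshape(-1)
                row += 1
    # single matrix product instead of a sliding-window loop over filters
    out = cols.dot(w.reshape(F, -1).T) + b            # (N*H_out*W_out, F)
    out = out.reshape(N, H_out, W_out, F).transpose(0, 3, 1, 2)
    return out
```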
x_shape = (2, 3, 6, 6)
w_shape = (3, 3, 4, 4)
x = np.loadtxt('./input_files/conv_forward_in_x.csv', delimiter=',')
x = x.reshape(x_shape)
w = np.loadtxt('./input_files/conv_forward_in_w.csv', delimiter=',')
w = w.reshape(w_shape)
b = np.loadtxt('./input_files/conv_forward_in_b.csv', delimiter=',')
conv_param = {'stride': 2, 'pad': 2}
out, _ = conv_forward(x, w, b, conv_param)
np.savetxt('./output_files/conv_forward_out.csv', out.ravel(), delimiter=',')
Explanation: FOR SUBMISSION: Submit the corresponding output from your foward convolution for the given input arguments. Load the files conv_forward_in_x.csv, conv_forward_in_w.csv and conv_forward_in_b.csv, they contain the input arguments for the x, w and b respectively and are flattened to a 1D array in C-style, row-major order (see numpy.ravel for details: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html).
For Python users, you can use the code below to load and reshape the arrays to feed into your conv_forward function. Code is also provided to flatten the array and save your output to a csv file. For users of other programming languages, you have to submit the output file conv_forward_out.csv which contains the flattened output of conv_forward. The array must be flattened in row-major order or else our automated scripts will mark your outputs as incorrect.
End of explanation
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward(x, w, b, {'stride': 1, 'pad': 2})
def imshow_noax(img, normalize=True):
    """ Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
In slide 32 of lecture 4, we mentioned that convolutions are able to perform low-level image processing such as edge detection. Here, we manually set up filters that perform common image processing operations (grayscale conversion and edge detection) and test them on two images. If your forward convolution pass works correctly, the visualization should make sense.
End of explanation
x_shape = (4, 3, 5, 5)
w_shape = (2, 3, 3, 3)
dout_shape = (4, 2, 5, 5)
x = np.loadtxt('./input_files/conv_backward_in_x.csv')
x = x.reshape(x_shape)
w = np.loadtxt('./input_files/conv_backward_in_w.csv')
w = w.reshape(w_shape)
b = np.loadtxt('./input_files/conv_backward_in_b.csv')
dout = np.loadtxt('./input_files/conv_backward_in_dout.csv')
dout = dout.reshape(dout_shape)
conv_param = {'stride': 1, 'pad': 2}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward(x, w, b, conv_param)
dx, dw, db = conv_backward(dout, cache)
np.savetxt('./output_files/conv_backward_out_dx.csv', dx.ravel())
np.savetxt('./output_files/conv_backward_out_dw.csv', dw.ravel())
np.savetxt('./output_files/conv_backward_out_db.csv', db.ravel())
# Your errors should be less than 1e-8
print('Testing conv_backward function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
Explanation: Convolution: Backward pass
Implement the backward pass for the convolution operation in the function conv_backward in the file code_base/layers.py.
When you are done, run the following to check your backward pass with a numeric gradient check.
In gradient checking, to get an approximate gradient for a parameter, we vary that parameter by a small amount (while keeping rest of parameters constant) and note the difference in the network loss. Dividing the difference in network loss by the amount we varied the parameter gives us an approximation for the gradient. We repeat this process for all the other parameters to obtain our numerical gradient. Note that gradient checking is a slow process (2 forward propagations per parameter) and should only be used to check your backpropagation!
More links on gradient checking:
http://ufldl.stanford.edu/tutorial/supervised/DebuggingGradientChecking/
https://www.coursera.org/learn/machine-learning/lecture/Y3s6r/gradient-checking
FOR SUBMISSION: Submit the corresponding output from your backward convolution for the given input arguments. Load the files conv_backward_in_x.csv, conv_backward_in_w.csv, conv_backward_in_b.csv and conv_backward_in_dout.csv, they contain the input arguments for the dx, dw, db and dout respectively and are flattened to a 1D array in C-style, row-major order.
The input arguments have the following dimensions:
- x: Input data of shape (N, C, H, W)
- w: Filter weights of shape (F, C, HH, WW)
- b: Biases, of shape (F,)
- dout: Upstream derivatives.
conv_param contains the stride and padding width:
'stride': The number of pixels between adjacent receptive fields in the horizontal and vertical directions.
'pad': The number of pixels that will be used to zero-pad the input in each x-y direction. We will use the same definition in lecture notes 3b, slide 13 (ie. same padding on both sides).
For Python users, you can use the code below to load and reshape the arrays. Note that the code runs conv_forward first and saves the relevant arrays in cache for conv_backward. Code is also provided flatten and save your output to a csv file. For users of other programming languages, you have to submit the output files conv_backward_out_dx.csv, conv_backward_out_dw.csv, conv_backward_out_db.csv which contains the flattened outputs of conv_backward. The array must be flattened in row-major order or else our automated scripts will mark your outputs as incorrect.
End of explanation
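The numeric gradient check described above boils down to a centered difference for each parameter. The helper below is a minimal sketch of that idea; eval_numerical_gradient_array in gradient_check.py is the version actually used in this assignment, so treat this purely as an illustration.

```python
import numpy as np

def numerical_gradient_sketch(f, x, h=1e-5):
    # f: function of x returning a scalar loss; x: float array, modified in place and restored
    grad = np.zeros_like(x, dtype=np.float64)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + h
        loss_plus = f(x)
        x[idx] = old - h
        loss_minus = f(x)
        x[idx] = old                      # restore the original value
        grad[idx] = (loss_plus - loss_minus) / (2 * h)
        it.iternext()
    return grad
```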
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: ReLU layer: forward and backward
A convolution layer is usually followed by an elementwise activation function. Since you have derived backpropagation for the ReLU activation function in Assignment 1, we will provide the functions relu_forward and relu_backward in code_base/layers.py. Read through the function code and make sure you understand the derivation. The code for affine (fully connected) layers to be used at the end of CNN is also provided.
Max pooling: Forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward in the file code_base/layers.py.
Check your implementation by running the following:
End of explanation
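For reference, the provided relu_forward / relu_backward pair mentioned above amounts to the following (a sketch, not a copy of the file; it assumes numpy is imported as np as at the top of this notebook).

```python
def relu_forward_sketch(x):
    out = np.maximum(0, x)        # elementwise max(0, x)
    cache = x
    return out, cache

def relu_backward_sketch(dout, cache):
    x = cache
    dx = dout * (x > 0)           # gradient flows only where the input was positive
    return dx
```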
x_shape = (3, 3, 8, 8)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
x = np.loadtxt('./input_files/maxpool_forward_in_x.csv')
x = x.reshape(x_shape)
out, _ = max_pool_forward(x, pool_param)
np.savetxt('./output_files/maxpool_forward_out.csv', out.ravel())
Explanation: FOR SUBMISSION: Submit the corresponding output from your forward maxpool for the given input arguments.
Inputs:
- x: Input data, of shape (N, C, H, W)
- pool_param: dictionary with the following keys:
- 'pool_height': The height of each pooling region
- 'pool_width': The width of each pooling region
- 'stride': The distance between adjacent pooling regions
End of explanation
x_shape = (3, 2, 10, 10)
dout_shape = (3, 2, 5, 5)
x = np.loadtxt('./input_files/maxpool_backward_in_x.csv')
x = x.reshape(x_shape)
dout = np.loadtxt('./input_files/maxpool_backward_in_dout.csv')
dout = dout.reshape(dout_shape)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = max_pool_forward(x, pool_param)
dx = max_pool_backward(dout, cache)
np.savetxt('./output_files/maxpool_backward_out.csv', dx.ravel())
Explanation: Max pooling: Backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward in the file code_base/layers.py.
FOR SUBMISSION: Submit the corresponding output from your backward maxpool for the given input arguments.
Inputs:
- x: Input data, of shape (N, C, H, W)
- pool_param: dictionary with the following keys:
- 'pool_height': The height of each pooling region
- 'pool_width': The width of each pooling region
- 'stride': The distance between adjacent pooling regions
- dout: Upstream derivatives
End of explanation
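Conceptually, the max-pooling backward pass routes each upstream gradient value to the position that achieved the max in its pooling window. The sketch below illustrates that; it assumes the cache holds (x, pool_param), which is an assumption about your forward implementation rather than a requirement.

```python
def max_pool_backward_sketch(dout, cache):
    x, pool_param = cache
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = (H - ph) // stride + 1
    W_out = (W - pw) // stride + 1
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    window = x[n, c, i * stride:i * stride + ph, j * stride:j * stride + pw]
                    mask = (window == window.max())   # ties send gradient to every max position
                    dx[n, c, i * stride:i * stride + ph, j * stride:j * stride + pw] += mask * dout[n, c, i, j]
    return dx
```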
from code_base.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 2}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from code_base.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 2}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Convolutional "sandwich" layers
Here we introduce the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file code_base/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks. With a modular design, it is very convenient to combine layers according to your network architecture.
The following code test the sandwich layers of conv_relu_pool_forward, conv_relu_pool_backward, conv_relu_forward and conv_relu_backward.
End of explanation
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file code_base/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
End of explanation
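As a quick arithmetic check of the log(C) rule of thumb above:

```python
import numpy as np
print(-np.log(1.0 / 10))  # ~2.3026 for the 10 random classes used in this sanity check
print(-np.log(1.0 / 2))   # ~0.6931 if you instead sanity check with only the 2 CIFAR2 classes
```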
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.
End of explanation
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=15, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
Explanation: Solver
Following a modular design, for this assignment we have split the logic for training models into a separate class. Open the file code_base/solver.py and read through it to familiarize yourself with the API. We have provided the functions for the various optimization techniques such as sgd and Adam.
Overfit small data
A nice trick is to train your model with just a few training samples to check that your code is working. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
Explanation: Train the net on full CIFAR2 data
By training the three-layer convolutional network for one epoch, you should achieve about 80% on the validation set. You may have to wait about 2 minutes for training to be completed.
End of explanation
from code_base.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
x = np.loadtxt('./input_files/dropout_forward_in_x.csv')
# Larger p means more dropout
p = 0.3
out_train, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
np.savetxt('./output_files/dropout_forward_out_train.csv', out_train)
np.savetxt('./output_files/dropout_forward_out_test.csv', out_test)
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
Dropout forward pass
In the file code_base/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes. Refer to slide 19 of lecture 5 for the implementation details. p refers to the probability of setting a neuron to zero. We will follow the Caffe convention where we multiply the outputs by 1/(1-p) during training.
FOR SUBMISSION: Submit the corresponding output from your forward dropout for the given input arguments.
Inputs:
- x: Input data. The array in the given csv file is presented in 2D, no reshaping is required
- dropout_param: A dictionary with the following keys:
- p: Dropout parameter. We drop each neuron output with probability p.
- mode: 'test' or 'train'. If the mode is train, then perform dropout; if the mode is test, then just return the input.
Since we cannot control the random seed used for randomly dropping the nodes across all programming languages, there is no unique output for this code. What we will check is whether your output makes sense for the given p dropout value.
End of explanation
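A compact sketch of the inverted-dropout convention described above (scale by 1/(1-p) at train time, identity at test time), together with the matching backward pass. This is illustrative only, not the graded dropout_forward / dropout_backward code.

```python
def dropout_forward_sketch(x, dropout_param):
    p, mode = dropout_param['p'], dropout_param['mode']
    if mode == 'train':
        # keep each unit with probability (1 - p), rescale so E[out] matches test time
        mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
        out = x * mask
    else:
        mask = None
        out = x                       # test mode: identity
    return out, (dropout_param, mask)

def dropout_backward_sketch(dout, cache):
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        return dout * mask            # gradient only flows through kept units
    return dout
```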
dout = np.loadtxt('./input_files/dropout_backward_in_dout.csv')
x = np.loadtxt('./input_files/dropout_backward_in_x.csv')
dropout_param = {'mode': 'train', 'p': 0.8}
out, cache = dropout_forward(x, dropout_param)
dx_train = dropout_backward(dout, cache)
np.savetxt('./output_files/dropout_backward_out_train.csv', dx_train)
dropout_param = {'mode': 'test', 'p': 0.8}
out, cache = dropout_forward(x, dropout_param)
dx_test = dropout_backward(dout, cache)
np.savetxt('./output_files/dropout_backward_out_test.csv', dx_test)
Explanation: Dropout backward pass
In the file code_base/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
FOR SUBMISSION: Submit the corresponding output from your backward dropout for the given input arguments.
End of explanation |
665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step2: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook
Step5: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step6: Use interactive to build a user interface for exploring the draw_circle function
Step7: Use the display function to show the widgets created by interactive | Python Code:
# YOUR CODE HERE
import matplotlib.pyplot as plt
import numpy as np
import IPython as ipy
from IPython.display import SVG
from IPython.html.widgets import interactive, fixed
from IPython.html import widgets
from IPython.display import display
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
s = """
<svg width="100" height="100">
  <circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    """Draw an SVG circle.

    Parameters
    ----------
    width : int
        The width of the svg drawing area in px.
    height : int
        The height of the svg drawing area in px.
    cx : int
        The x position of the center of the circle in px.
    cy : int
        The y position of the center of the circle in px.
    r : int
        The radius of the circle in px.
    fill : str
        The fill color of the circle.
    """
# YOUR CODE HERE
    svg = """
    <svg width='%s' height='%s'>
      <circle cx='%s' cy='%s' r='%s' fill='%s' />
    </svg>
    """ % (width, height, cx, cy, r, fill)
display(SVG(svg))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
# YOUR CODE HERE
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0, 300, 1), cy=(0, 300, 1), r=(0, 50, 1), fill="red");
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
# YOUR CODE HERE
display(w)
#the sliders show but not the circle itself?
assert True # leave this to grade the display of the widget
Explanation: Use the display function to show the widgets created by interactive:
End of explanation |
666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programmatic Access to Genome Nexus
This notebook gives some examples in Python for programmatic access to http
Step1: Connect with cBioPortal API
cBioPortal also uses Swagger for their API.
Step2: Annotate cBioPortal mutations with Genome Nexus
For convenience sake we're using only SNVs here. Eventually there will be an endpoint to help convert pos, ref, alt to the hgvs notation.
Step3: Check overlap SIFT/PolyPhen-2 | Python Code:
from bravado.client import SwaggerClient
client = SwaggerClient.from_url('https://www.genomenexus.org/v2/api-docs',
config={"validate_requests":False,"validate_responses":False})
print(client)
dir(client)
for a in dir(client):
client.__setattr__(a[:-len('-controller')], client.__getattr__(a))
variant = client.annotation.fetchVariantAnnotationGET(variant='17:g.41242962_41242963insGA').result()
dir(variant)
tc1 = variant.transcript_consequences[0]
dir(tc1)
print(tc1)
Explanation: Programmatic Access to Genome Nexus
This notebook gives some examples in Python for programmatic access to http://genomenexus.org. You can run these examples after installing Jupyter. The easiest way to use Jupyter is to install the Python 3 version of Anaconda: https://www.anaconda.com/download/. After that you can install Jupyter with:
conda install jupyter
For these examples we also need Bravado, a Swagger API client. It is unfortunately not yet available in Anaconda, but you can get it through pip:
conda install pip
pip install bravado
Let's try connecting to the Genome Nexus API now:
End of explanation
import seaborn as sns
%matplotlib inline
sns.set_style("white")
sns.set_context('talk')
import matplotlib.pyplot as plt
cbioportal = SwaggerClient.from_url('https://www.cbioportal.org/api/api-docs',
config={"validate_requests":False,"validate_responses":False})
print(cbioportal)
for a in dir(cbioportal):
cbioportal.__setattr__(a.replace(' ', '_').lower(), cbioportal.__getattr__(a))
dir(cbioportal)
muts = cbioportal.mutations.getMutationsInMolecularProfileBySampleListIdUsingGET(
molecularProfileId="msk_impact_2017_mutations", # {study_id}_mutations gives default mutations profile for study
sampleListId="msk_impact_2017_all", # {study_id}_all includes all samples
projection="DETAILED" # include gene info
).result()
import pandas as pd
mdf = pd.DataFrame([dict(m.__dict__['_Model__dict'],
**m.__dict__['_Model__dict']['gene'].__dict__['_Model__dict']) for m in muts])
mdf.groupby('uniqueSampleKey').studyId.count().plot(kind='hist', bins=400, xlim=(0,30))
plt.xlabel('Number of mutations in sample')
plt.ylabel('Number of samples')
plt.title('Number of mutations across samples in MSK-IMPACT (2017)')
sns.despine(trim=True)
mdf.variantType.astype(str).value_counts().plot(kind='bar')
plt.title('Types of mutations in MSK-IMPACT (2017)')
sns.despine(trim=False)
Explanation: Connect with cBioPortal API
cBioPortal also uses Swagger for their API.
End of explanation
snvs = mdf[(mdf.variantType == 'SNP') & (mdf.variantAllele != '-') & (mdf.referenceAllele != '-')].copy()
# need query string like 9:g.22125503G>C
snvs['hgvs_for_gn'] = snvs.chromosome.astype(str) + ":g." + snvs.startPosition.astype(str) + snvs.referenceAllele + '>' + snvs.variantAllele
assert(snvs['hgvs_for_gn'].isnull().sum() == 0)
import time
qvariants = list(set(snvs.hgvs_for_gn))
gn_results = []
chunk_size = 500
print("Querying {} variants".format(len(qvariants)))
for n, qvar in enumerate([qvariants[i:i + chunk_size] for i in range(0, len(qvariants), chunk_size)]):
try:
gn_results += client.annotation.fetchVariantAnnotationPOST(variants=qvar,fields=['hotspots']).result()
print("Querying [{}, {}]: Success".format(n*chunk_size, min(len(qvariants), n*chunk_size+chunk_size)))
except Exception as e:
print("Querying [{}, {}]: Failed".format(n*chunk_size, min(len(qvariants), n*chunk_size+chunk_size)))
pass
time.sleep(1) # add a delay, to not overload server
gn_dict = {v.id:v for v in gn_results}
def is_sift_high(variant):
return variant in gn_dict and \
len(list(filter(lambda x: x.sift_prediction == 'deleterious', gn_dict[variant].transcript_consequences))) > 0
def is_polyphen_high(variant):
return variant in gn_dict and \
len(list(filter(lambda x: x.polyphen_prediction == 'probably_damaging', gn_dict[variant].transcript_consequences))) > 0
Explanation: Annotate cBioPortal mutations with Genome Nexus
For convenience sake we're using only SNVs here. Eventually there will be an endpoint to help convert pos, ref, alt to the hgvs notation.
End of explanation
snvs['is_sift_high'] = snvs.hgvs_for_gn.apply(is_sift_high)
snvs['is_polyphen_high'] = snvs.hgvs_for_gn.apply(is_polyphen_high)
from matplotlib_venn import venn2
venn2(subsets=((snvs.is_sift_high & (~snvs.is_polyphen_high)).sum(),
(snvs.is_polyphen_high & (~snvs.is_sift_high)).sum(),
(snvs.is_polyphen_high & snvs.is_sift_high).sum()), set_labels=["SIFT","PolyPhen-2"])
plt.title("Variants as predicted to have a high impact in MSK-IMPACT (2017)")
Explanation: Check overlap SIFT/PolyPhen-2
End of explanation |
667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Santander Customer Satisfaction
Step 1
Step1: Exercise 1 Find column types for train and test.
Exercise 2 Find unique column types for train and test
Exercise 3 Find number of rows and columns in train and test
Exercise 4 Find the columns that have missing values
Hint
Step2: Step 4
Step3: Step 5
Step4: Exercise 6 Predict probability of each customer to be unsatisfied in the test dataset
Exercise 7 Fit L1 Regularization model. Evaluate the results
Exercise 8 Add the prediction to sample sub. Save it as csv. Submit solution to kaggle
Decision Trees | Python Code:
import numpy as np
import pandas as pd
#Read train, test and sample submission datasets
train = pd.read_csv("../data/train.csv")
test = pd.read_csv("../data/test.csv")
samplesub = pd.read_csv("../data/sample_submission.csv")
Explanation: Santander Customer Satisfaction
Step 1: Frame
From frontline support teams to C-suites, customer satisfaction is a key measure of success. Unhappy customers don't stick around. What's more, unhappy customers rarely voice their dissatisfaction before leaving.
Santander Bank is asking Kagglers to help them identify dissatisfied customers early in their relationship. Doing so would allow Santander to take proactive steps to improve a customer's happiness before it's too late.
In this competition, you'll work with hundreds of anonymized features to predict if a customer is satisfied or dissatisfied with their banking experience.
Predict the probability of each customer to be unsatisfied
<img style="float:center" src="img/unhappy_customer.jpg" width=300/>
Step 2: Acquire
The competition is hosted on Kaggle
<img style="float:center" src="img/kaggle.jpg" width=800/>
<br>
<br>
The data section has three files:
1. train.csv Training dataset to create the model. It has the target column - indicating whether the customer was happy or not
2. test.csv Test dataset for which the predictions are to be made
3. sample_submission.csv Format for submitting the predictions on Kaggle's website
The datasets are downloaded and are available at the data folder
Step 3: Explore
Read the datasets
End of explanation
#Create the labels
labels=train.iloc[:,-1]
#Find number of unsatisfied customers using `labels`
Explanation: Exercise 1 Find column types for train and test.
Exercise 2 Find unique column types for train and test
Exercise 3 Find number of rows and columns in train and test
Exercise 4 Find the columns that have missing values
Hint: look up the pandas function isnull
Exercise 5 Find number of unsatisfied customers in the train dataset
End of explanation
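Possible one-line answers for Exercises 1 to 5, using standard pandas calls; these are sketches, not the notebook's prescribed solutions:
# Exercises 1-2: column types and the unique types present
print(train.dtypes.head())
print(train.dtypes.unique(), test.dtypes.unique())
# Exercise 3: number of rows and columns
print(train.shape, test.shape)
# Exercise 4: columns containing missing values
print(train.columns[train.isnull().any()].tolist())
# Exercise 5: number of unsatisfied customers (TARGET == 1)
print((labels == 1).sum())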
# Step 1: Find standard deviation (one possible completion of the exercise stub)
train_std = train.std(axis=0)
# Step 2: Find columns that have standard deviation of 0
columns_with_0_variance = train_std[train_std == 0]
#train.columns.values in columns_with_0_variance.index
train_columns = train.columns.values
columns_with_0_variance_columns = columns_with_0_variance.index.values
#Need to subset columns that are present in train but not in the dataset with 0 variance
selected_columns = np.in1d(train_columns, columns_with_0_variance_columns)
len(selected_columns)
#Create train and test
train_updated = train.iloc[:,~selected_columns[1:len(selected_columns)-1]]
test_updated = test.iloc[:,~selected_columns[1:len(selected_columns)-1]]
#Check if the number of columns in both the datasets are the same
print(train_updated.shape, test_updated.shape)
#Check if column names in train and test are the same
list(train_updated.columns) == list(test_updated.columns)
Explanation: Step 4: Refine
Exercise 5 Find features that show no variance
Question: Why is this important?
End of explanation
from sklearn import preprocessing
from sklearn import linear_model
from sklearn import cross_validation
y = np.array(labels)
#Why do we need scaling?
scaler = preprocessing.StandardScaler()
scaler = scaler.fit(train_updated)
train_scaled = scaler.transform(train_updated)
#Remember - need to use the same scaler function on test
test_scaled = scaler.transform(test_updated)
#lr = linear_model.LogisticRegression()
logReg = linear_model.LogisticRegression(tol=0.1, n_jobs=6)
%timeit -n 1 -r 1 logReg.fit(train_scaled, y)
logRegPrediction = logReg.predict(test_scaled)
Explanation: Step 5: Model
We will cover the following
Model 1: Logistic Regression (L1/L2)
Model 4: Decision Tree
Visualizing decision tree
Cross-validation
Error Metrics
Regularization
Regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this, your models may be too complex and overfit, or too simple and underfit; either way they will give poor predictions.
Logistic Regression(L1/L2)
End of explanation
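A possible sketch for Exercise 7, fitting an L1-penalized model the same way as the L2 model above; the C value is an illustrative choice, not prescribed by the exercise:
# One possible take on Exercise 7: an L1-penalized logistic regression.
# (On newer scikit-learn versions, add solver='liblinear', since the default
# lbfgs solver does not support the L1 penalty.)
logRegL1 = linear_model.LogisticRegression(penalty='l1', C=1.0, tol=0.1)
logRegL1.fit(train_scaled, y)
print("Non-zero coefficients with L1:", (logRegL1.coef_ != 0).sum())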
from sklearn import tree
decisionTreeModel = tree.DecisionTreeClassifier()
decisionTreeModel.fit(train_updated, y)
Explanation: Exercise 6 Predict probability of each customer to be unsatisfied in the test dataset
Exercise 7 Fit L1 Regularization model. Evaluate the results
Exercise 8 Add the prediction to sample sub. Save it as csv. Submit solution to kaggle
Decision Trees
End of explanation |
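A possible sketch for Exercises 6 and 8, writing out the predicted probability of dissatisfaction for each test customer; the TARGET column name is an assumption based on the usual sample-submission format for this competition:
# Predicted probability of class 1 (dissatisfied), saved in submission format
samplesub['TARGET'] = logReg.predict_proba(test_scaled)[:, 1]
samplesub.to_csv('submission_logreg.csv', index=False)
print(samplesub.head())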
668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research Documentation)
Recreating JPV 2019 / PVSC 2018 Fig. 13
Calculating and plotting shading from torque tube on 1-axis tracking for 1 day, which is figure 13 in
Step1: <a id='step1a'></a>
A. Baseline Case
Step2: <a id='step1b'></a>
B. ZGAP = 0.1
Step3: <a id='step1c'></a>
C. ZGAP = 0.2
Step4: <a id='step1d'></a>
D. ZGAP = 0.3
Step5: <a id='step2'></a>
2. Read-back the values and tabulate average values for unshaded, 10cm gap and 30cm gap
Step6: <a id='step3'></a>
3. Plot spatial loss values for 10cm and 30cm data
Step7: <a id='step4'></a>
4. Overall Shading Loss Factor
To calculate shading loss factor, we can use the following equation | Python Code:
import os
from pathlib import Path
testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_09')
if not os.path.exists(testfolder):
os.makedirs(testfolder)
print ("Your simulation will be stored in %s" % testfolder)
# VARIABLES of the simulation:
lat = 35.1 # ABQ
lon = -106.7 # ABQ
x=1
y = 2
numpanels=1
limit_angle = 45 # tracker rotation limit angle
backtrack = True
albedo = 'concrete' # ground albedo
hub_height = y*0.75 # H = 0.75
gcr = 0.35
pitch = y/gcr
#pitch = 1.0/gcr # Check from 1Axis_Shading_PVSC2018 file
cumulativesky = False # needed for set1axis and makeScene1axis so simulation is done hourly not with gencumsky.
limit_angle = 45 # tracker rotation limit angle
nMods=10
nRows=3
sensorsy = 200
module_type='test-module'
datewanted='06_24' # sunny day 6/24/1972 (index 4180 - 4195). Valid formats starting version 0.4.0 for full day sim: mm_dd
## Torque tube info
tubetype='round'
material = 'Metal_Grey'
diameter = 0.1
axisofrotationTorqueTube = False # Original PVSC version rotated around the modules like most other software.
# Variables that will get defined on each iteration below:
zgap = 0 # 0.2, 0.3 values tested. Re-defined on each simulation.
visible = False # baseline is no torque tube.
# Simulation Start.
import bifacial_radiance
import numpy as np
print(bifacial_radiance.__version__)
demo = bifacial_radiance.RadianceObj(path = testfolder)
demo.setGround(albedo)
epwfile = demo.getEPW(lat, lon)
metdata = demo.readWeatherFile(epwfile, starttime=datewanted, endtime=datewanted)
trackerdict = demo.set1axis(metdata, limit_angle = limit_angle, backtrack = backtrack, gcr = gcr, cumulativesky = cumulativesky)
trackerdict = demo.gendaylit1axis()
sceneDict = {'pitch':pitch,'hub_height':hub_height, 'nMods': nMods, 'nRows': nRows}
Explanation: 9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research Documentation)
Recreating JPV 2019 / PVSC 2018 Fig. 13
Calculating and plotting shading from torque tube on 1-axis tracking for 1 day, which is figure 13 in:
Ayala Pelaez S, Deline C, Greenberg P, Stein JS, Kostuk RK. Model and validation of single-axis tracking with bifacial PV. IEEE J Photovoltaics. 2019;9(3):715–21. https://ieeexplore.ieee.org/document/8644027 and https://www.nrel.gov/docs/fy19osti/72039.pdf (pre-print, conference version)
This is what we will re-create:
Use bifacial_radiance minimum v. 0.3.1 or higher. Many things have been updated since this paper, simplifying the generation of this plot:
<ul>
<li> Sensor position is now always generated E to W on N-S tracking systems, so same sensor positions can just be added for this calculation at the end without needing to flip the sensors. </li>
<li> Torquetubes get automatically generated in makeModule. Following PVSC 2018 paper, rotation is around the modules and not around the torque tube axis (which is a new feature) </li>
<li> Simulating only 1 day on single-axis tracking easier with cumulativesky = False and gendaylit1axis(startdate='06/24', enddate='06/24' </li>
<li> Sensors get generated very close to surface, so all results are from the module surface and not the torquetube for this 1-UP case. </li>
</ul>
Steps:
<ol>
<li> <a href='#step1'> Running the simulations for all the cases: </li>
<ol type='A'>
<li> <a href='#step1a'>Baseline Case: No Torque Tube </a></li>
<li> <a href='#step1b'> Zgap = 0.1 </a></li>
<li> <a href='#step1c'> Zgap = 0.2 </a></li>
<li> <a href='#step1d'> Zgap = 0.3 </a></li>
</ol>
<li> <a href='#step2'> Read-back the values and tabulate average values for unshaded, 10cm gap and 30cm gap </a></li>
<li> <a href='#step3'> Plot spatial loss values for 10cm and 30cm data </a></li>
<li> <a href='#step4'> Overall Shading Factor (for 1 day) </a></li>
</ol>
<a id='step1'></a>
1. Running the simulations for all the cases
End of explanation
#CASE 0 No torque tube
# When torquetube is False, zgap is the distance from axis of torque tube to module surface, but since we are rotating from the module's axis, this Zgap doesn't matter.
# zgap = 0.1 + diameter/2.0
torquetube = False
customname = '_NoTT'
module_NoTT = demo.makeModule(name=customname,x=x,y=y, numpanels=numpanels)
module_NoTT.addTorquetube(visible=False, axisofrotation=False, diameter=0)
trackerdict = demo.makeScene1axis(trackerdict, module_NoTT, sceneDict, cumulativesky = cumulativesky)
trackerdict = demo.makeOct1axis(trackerdict)
trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
Explanation: <a id='step1a'></a>
A. Baseline Case: No Torque Tube
When torquetube is False, zgap is the distance from axis of torque tube to module surface, but since we are rotating from the module's axis, this Zgap doesn't matter for this baseline case.
End of explanation
#ZGAP 0.1
zgap = 0.1
customname = '_zgap0.1'
tubeParams = {'tubetype':tubetype,
'diameter':diameter,
'material':material,
'axisofrotation':False,
'visible':True} # either pass this into makeModule, or separately into module.addTorquetube()
module_zgap01 = demo.makeModule(name=customname, x=x,y=y, numpanels=numpanels, zgap=zgap, tubeParams=tubeParams)
trackerdict = demo.makeScene1axis(trackerdict, module_zgap01, sceneDict, cumulativesky = cumulativesky)
trackerdict = demo.makeOct1axis(trackerdict)
trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
Explanation: <a id='step1b'></a>
B. ZGAP = 0.1
End of explanation
#ZGAP 0.2
zgap = 0.2
customname = '_zgap0.2'
tubeParams = {'tubetype':tubetype,
'diameter':diameter,
'material':material,
'axisofrotation':False,
'visible':True} # either pass this into makeModule, or separately into module.addTorquetube()
module_zgap02 = demo.makeModule(name=customname, x=x,y=y, numpanels=numpanels,zgap=zgap, tubeParams=tubeParams)
trackerdict = demo.makeScene1axis(trackerdict, module_zgap02, sceneDict, cumulativesky = cumulativesky)
trackerdict = demo.makeOct1axis(trackerdict)
trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
Explanation: <a id='step1c'></a>
C. ZGAP = 0.2
End of explanation
#ZGAP 0.3
zgap = 0.3
customname = '_zgap0.3'
tubeParams = {'tubetype':tubetype,
'diameter':diameter,
'material':material,
'axisofrotation':False,
'visible':True} # either pass this into makeModule, or separately into module.addTorquetube()
module_zgap03 = demo.makeModule(name=customname,x=x,y=y, numpanels=numpanels, zgap=zgap, tubeParams=tubeParams)
trackerdict = demo.makeScene1axis(trackerdict, module_zgap03, sceneDict, cumulativesky = cumulativesky)
trackerdict = demo.makeOct1axis(trackerdict)
trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
Explanation: <a id='step1d'></a>
D. ZGAP = 0.3
End of explanation
import glob
import pandas as pd
resultsfolder = os.path.join(testfolder, 'results')
print (resultsfolder)
filenames = glob.glob(os.path.join(resultsfolder,'*.csv'))
noTTlist = [k for k in filenames if 'NoTT' in k]
zgap10cmlist = [k for k in filenames if 'zgap0.1' in k]
zgap20cmlist = [k for k in filenames if 'zgap0.2' in k]
zgap30cmlist = [k for k in filenames if 'zgap0.3' in k]
# sum across all hours for each case
unsh_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in noTTlist]).sum(axis = 0)
cm10_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap10cmlist]).sum(axis = 0)
cm20_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap20cmlist]).sum(axis = 0)
cm30_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap30cmlist]).sum(axis = 0)
unsh_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in noTTlist]).sum(axis = 0)
cm10_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap10cmlist]).sum(axis = 0)
cm20_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap20cmlist]).sum(axis = 0)
cm30_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap30cmlist]).sum(axis = 0)
Explanation: <a id='step2'></a>
2. Read-back the values and tabulate average values for unshaded, 10cm gap and 30cm gap
End of explanation
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.sans-serif'] = ['Helvetica']
plt.rcParams['axes.linewidth'] = 0.2 #set the value globally
fig = plt.figure()
fig.set_size_inches(4, 2.5)
ax = fig.add_axes((0.15,0.15,0.78,0.75))
#plt.rc('font', family='sans-serif')
plt.rc('xtick',labelsize=8)
plt.rc('ytick',labelsize=8)
plt.rc('axes',labelsize=8)
plt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm30_back - unsh_back)/unsh_back*100, label = '30cm gap',color = 'black') #steelblue
plt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm20_back - unsh_back)/unsh_back*100, label = '20cm gap',color = 'steelblue', linestyle = '--') #steelblue
plt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm10_back - unsh_back)/unsh_back*100, label = '10cm gap',color = 'darkorange') #steelblue
#plt.ylabel('$G_{rear}$ vs unshaded [Wm-2]')#(r'$BG_E$ [%]')
plt.ylabel('$G_{rear}$ / $G_{rear,tubeless}$ -1 [%]')
plt.xlabel('Module X position [m]')
plt.legend(fontsize = 8,frameon = False,loc='best')
#plt.ylim([0, 15])
plt.title('Torque tube shading loss',fontsize=9)
#plt.annotate('South',xy=(-10,9.5),fontsize = 8); plt.annotate('North',xy=(8,9.5),fontsize = 8)
plt.show()
Explanation: <a id='step3'></a>
3. Plot spatial loss values for 10cm and 30cm data
End of explanation
ShadingFactor = (1 - cm30_back.sum() / unsh_back.sum())*100
Explanation: <a id='step4'></a>
4. Overall Shading Loss Factor
To calculate shading loss factor, we can use the following equation:
<img src="../images_wiki/AdvancedJournals/Equation_ShadingFactor.PNG">
End of explanation |
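To put a number on the plot above, the factor computed in the cell can simply be printed; the 10 cm comparison line is an extra illustration, not part of the original figure:
# Print the overall rear-side shading loss for the 30 cm gap, plus the 10 cm case for comparison
print("Shading loss factor, 30cm gap: {:.2f} %".format(ShadingFactor))
ShadingFactor_10cm = (1 - cm10_back.sum() / unsh_back.sum()) * 100
print("Shading loss factor, 10cm gap: {:.2f} %".format(ShadingFactor_10cm))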
669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collocational Analysis
"You shall know a word by the company it keeps!" These are the oft-quoted words of the linguist J.R. Firth in describing the meaning and spirit of collocational analysis. Collocation is a linguistic term for co-occuring. While most words have the possibility of co-occuring with most other words at some point in the English language, when there is a significant statistical relationship between two regularly co-occuring words, we can refer to these as collocates. One of the first, and most cited examples of collocational analysis concerns the words strong and powerful. While both words mean arguably the same thing, it is statistically more common to see the word strong co-occur with the word tea. Native speakers of English can immediately recognize the familiarity of strong tea as opposed to powerful tea, even though the two phrases both make sense in their own way (see Halliday, 1966 for more of this discussion). Interestingly, the same associations do not occur with the phrases strong men and powerful men, although in these instances, both phrases take on slightly different meanings.
These examples highlight the belief of Firthian linguists that the meaning of a word is not confined to the word itself, but lies in the associations that words have with other co-occurring words. Statistically significant collocates need not be adjacent, just proximal. The patterns of the words in a text, rather than the individual words themselves, have complex, relational units of meaning that allow us to ask questions about the use of language in specific discourses.
In this exercise we will determine the statistical significance of the words that most often co-occur with privacy in an attempt to better understand the meaning of the word as it is used in the Hansard Corpus. We will count the actual frequency of the co-occurrence, as well as use a number of different statistical tests of probability. These tests will be conducted first on one file from the corpus, then on the entire corpus itself.
Part 1
Step1: Collocational analysis is a frequency-based technique that uses word counts to determine significance. One of the problems with counting word frequencies, as we have seen in other sections, is that the most frequently occurring words in English are function words, like the, of, and and. For this reason, it is necessary to remove these words in order to obtain meaningful results. In text analysis, these high frequency words are compiled into lists called stopwords. While standard stopword lists are provided by the NLTK module, for the Hansard Corpus it was necessary to remove other kinds of words, like proper nouns (names and place names), and other words common to parliamentary proceedings (like Prime Minister, Speaker, etc.). These words, along with the standard stopwords, can be seen below.
Here, we use our read_file function to read in a text file of custom stopwords, assigning it to the variable customStopwords. We tokenize the list using the split function and then create a variable called hansardStopwords that incorporates the NLTK stopword list, adding the words from customStopwords if they don't already occur in the NLTK list.
Step2: Now, we use read_file to load the contents of the file for 2015. For consistency and to avoid file duplication, we're always reading the files from the same directory. Even though it was used for other sections, the data is the same. We read the contents of the text file, then remove the case and punctuation from the text, split the words into a list of tokens, and assign the words in each file to a list with the variable name text. What's new here, compared to other sections, is the additional removal of stopwords.
Step3: Another type of processing required for the generation of accurate collocational statistics is called lemmatization. In linguistics, a lemma is the grammatical base or stem of a word. For example, the word protect is the lemma of the verbs protecting and protected, while ethic is the lemma of the noun ethics. When we lemmatize a text, we are removing the grammatical inflections of the word forms (like ing or ed). The purpose of lemmatization for the Hansard Corpus is to obtain more accurate statistics for collocation by avoiding multiple entries for similar but different word forms (like protecting, protected, and protect). For the purpose of this text analysis, I have decided to lemmatize only the nouns and verbs in the Hansard Corpus, as the word privacy is not easily modified by adjectives (or at all by adverbs).
The lemmatizer I have used for this project was developed by Princeton and is called <a href="http
Step4: We need to make sure that the lemmatizer did something. Since we've only lemmatized for nouns and verbs, we check that here against the unlemmatized corpus, where text has not been lemmatized and lems has. Below we see that noun ethics appears <u>156 times</u> in the text variable and <u>0 times</u> in the lems variable. But the lemma for ethics
Step5: Here we check that the lemmatizer hasn't been over-zealous by determining the frequency for privacy before and after the lemmatizing function. The frequencies are the same, meaning we've not lost anything in the lemmatization.
Step6: Part 1.1
Step7: For reference, I ran an earlier test that shows the 10 most common bigrams without the stopwords removed. Duplicating this test only requires that stopwords not be removed as the text is being tokenized and cleaned. We can see that there is a clear difference in the types of results returned with and without stopwords applied. The list of words appearing above is much more interesting in terms of discourse analysis, when functional parliamentary phrases like Prime Minister and Parliamentary Secretary have been removed.
Here is a piece of code that shows how the ngram function works. It goes word by word through the text, pairing each word with the one that came before. That's why the last word in the first word pair becomes the first in the next word pair.
We assign our colText variable to the colBigrams variable by specifiying that we want to make a list of ngrams containing 2 words. We could obtain trigrams by changing the 2 in the first line of code to a 3. Then, in the second line of code, we display the first 5 results of the colBigrams variable with
Step8: Here we will check to make sure we've the bigram function has gone through and counted the entire text. Having one less ngram is correct because of the way in which the ngrams are generated word-by-word in the test above.
Step9: Part 1.2
Step10: Next, we will load our lemmatized corpus into the bigram collocation finder, apply a frequency filter that only considers bigrams that appear four or more times, and then apply our privacy filter to the results. The variable finder now contains a list of all the bigrams containing privacy that occur four or more times.
Step11: Distribution
Before I describe the statistical tests that we will use to determine the collocates for privacy, it is important to briefly discuss distribution. The chart below maps the distribution of the top 25 terms in the 2015 file.
This is important because some of the tests assume a normal distribution of words in the text. A normal distribution means that the majority of the words occur a majority of the time; it is represented in statistics as a bell curve. This means that 68% of the words would occur within one standard deviation of the mean (or average frequency of each word in the text), 95% within two standard deviations, and 99.7 within three standard deviations.
What this means, is that tests that assume a normal distribution will work, but have inaccurate statistics to back them up. I've chosen to describe all of the collocational tests here as a matter of instruction and description, but it's important to understand the tests and what they assume before making research claims based on their results.
The code below calls on the NLTK function FreqDist. The function calculates the frequency of all the words in the variable and charts them in order from highest to lowest. Here I've only requested the first 25, though more or less can be displayed by changing the number in the brackets. Additionally, in order to have the chart displayed inline (and not as a popup), I've called the <i>magic</i> function matplotlib inline. <i>iPython</i> magic functions are identifable by the <b>%</b> symbol.
Step12: As we can see from the chart above, work is the highest frequency word in our lemmatized corpus with stopwords applied, followed by right. The word privacy does not even occur in the list. The code below calculates the frequency and percentage of times these words occur in the text. While work makes up 0.56% of the total words in the text, privacy accounts for only 0.06%.
Step13: To calculate the mean, and standard deviation, we must count the frequency of all the words in the text and append those values to a list. Since the numbers in the list will actually be represented as text (not as integers), we must add an extra line of code to map those values so they can be used mathematically, calling on the map function.
Step14: Once we have our numbers in a list, as the variable numlist, we can use the built in statistics library for our calculations. Below we've calculated the mean, standard deviation, and the variance.
These numbers prove that the numerical data has a non-normal distribution. The mean is relatively low, compared to the highest frequency word, work, which appears a total of <u>7588</u> times.
The low mean is due to the high number of low frequency words; there are <u>5847</u> words that appear only once, totalling 30% of the unique words in the entire set. The standard deviation is higher than the mean, which predicts a high variance of numbers in the set, something that is proven by the variance calculation. A large variance shows that the numbers in the set are far apart from the mean, and each other.
Step15: Statistics
Raw Frequency
The frequency calculations determine both the actual number of occurences of the bigram in the corpus as well as the number of times the bigram occurs relative to the text as a whole (expressed as a percentage).
Student's-T
The Student's T-Score, also called the T-Score, measures the <b>confidence</b> of a claim of collocation and assigns a score based on that certainty. It is computed by subtracting the expected frequency of the bigram by the observed frequency of the bigram, and then dividing the result by the standard deviation which is calculated based on the overall size of the corpus.
The benefit of using the T-Score is that it considers the evidence for collocates based on the overall amount of evidence provided by the size of the corpus. This differs from the PMI score (described below) which only considers strength based on relative frequencies. The drawbacks to the T-Score include its reliance on a normal distribution (due to the incorporation of standard deviation in the calculation), as well as its dependence on the overall size of the corpus. T-scores can't be compared across corpora of different sizes.
Pointwise Mutual Information
The Pointwise Mutual Information Score (known as PMI or MI) measures the <b>strength</b> of a collocation and assigns it a score. It is a probability-based calculation that compares the number of actual bigrams to the expected number of bigrams based on the relative frequency counts of the words. The test compares the expected figure to the observed figure, converting the difference to a number indicating the strength of the collocation.
The benefit of using PMI is that the value of the score is not dependent on the overall size of the corpus, meaning that PMI scores can be compared across corpora of different sizes, unlike the T-score (described above).
The drawback to the PMI is that it tends to give high scores to low frequency words when they occur most often in the proximity another word.
Chi-square
The Chi-square (or x<sup>2</sup>) measures the observed and expected frequencies of bigrams and assigns a score based on the amount of difference between the two using the standard deviation. The Chi-square is another test that relies on a normal distribution.
The Chi-square shares the benefit of the T-score in taking into account the overall size of the corpus. The drawback of the Chi-square is that it doesn't do well with sparse data. This means that low-frequency (but significant) bigrams may not be represented very well, unlike the scores assigned by the PMI.
Log-Likelihood Ratio
The Log-likelihood ratio calculates the size and significance between the observed and expected frequencies of bigrams and assigns a score based on the result, taking into account the overall size of the corpus. The larger the difference between the observed and expected, the higher the score, and the more statistically significant the collocate is.
The Log-likelihood ratio is my preferred test for collocates because it does not rely on a normal distribution, and for this reason, it can account for sparse or low frequency bigrams (unlike the Chi-square). But unlike the PMI, it does not over-represent low frequency bigrams with inflated scores, as the test is only reporting how much more likely it is that the frequencies are different than they are the same. The drawback to the Log-likelihood ratio, much like the t-score, is that it cannot be used to compare scores across corpora.
The following code filters the results of the focused bigram search based on the statistical tests as described above, assigning the results to a new variable based on the test.
Step16: Below are the results for the Log-likelihood test. The bigrams are sorted in order of significance, and the order of the words in the word-pairs shows their placement in the text. This means that the most significant bigram in the Log-likelihood test contained the words digital privacy, in that order. The word digital appears later on in the list with a lower score when it occurs after the word privacy. Scores above 3.8 are considered to be significant for the Log-likelihood test.
Step17: Let's display this data as a table, and remove some of the extra decimal digits. Using the tabulate module, we call the variable log, set the table heading names (displayed in red), and set the number of decimal digits to 3 (indicated by floatfmt=".3f"), with the numbers aligned on the leftmost digit.
Step18: Here we print the results of this table to a CSV file.
Step19: While the table above is nice, it isn't formated exactly the way it could be, especially since we already know that privacy is one half of the bigram. I want to format the list so I can do some further processing in some spreadsheet software, including combining the scores of the bigrams (like digital privacy and privacy digital) so I can have one score for each word.
The code below sorts the lists generated by each test by the first word in the bigram, appending them to a dictionary called prefix_keys, where each word is a key and the score is the value. Then, we sort the keys by the value with the highest score, and assign the new list to a new variable with the word privacy removed. This code must be repeated for each test.
For the purposes of this analysis, we will only output the two frequency tests and the Log-likelihood test.
Step20: Let's take a look at the new list of scores for the Log-likelihood test, with the word privacy removed. Nothing has changed here except the formatting.
Step21: Again, just for reference, these are the 25 top Log-Likelhood scores for 2015 without the stopwords applied.
Here we will write the sorted results of the tests to a CSV file.
Step22: What is immediately apparent from the Log-likelihood scores is that there are distinct types of words that co-occur with the word privacy. The top 10 most frequently co-occuring words are digital, protect, ethic, access, right, protection, expectation, and information. Based on this list alone, we can deduce that privacy in the Hansard corpus is a serious topic; one that is concerned with ethics and rights, which are things commonly associated with the law. We can also see that privacy has both a digital and an informational aspect, which are things that have an expectation of both access and protection.
While it may seem obvious that these kinds of words would co-occur with privacy, we now have statistical evidence upon which to build our claim.
Part 2 | Python Code:
# This is where the modules are imported
import csv
import sys
import codecs
import nltk
import nltk.collocations
import collections
import statistics
from nltk.metrics.spearman import *
from nltk.collocations import *
from nltk.stem import WordNetLemmatizer
from os import listdir
from os.path import splitext
from os.path import basename
from tabulate import tabulate
# These functions iterate through the directory and create a list of filenames
def list_textfiles(directory):
"Return a list of filenames ending in '.txt'"
textfiles = []
for filename in listdir(directory):
if filename.endswith(".txt"):
textfiles.append(directory + "/" + filename)
return textfiles
def remove_ext(filename):
"Removes the file extension, such as .txt"
name, extension = splitext(filename)
return name
def remove_dir(filepath):
"Removes the path from the file name"
name = basename(filepath)
return name
def get_filename(filepath):
"Removes the path and file extension from the file name"
filename = remove_ext(filepath)
name = remove_dir(filename)
return name
# This function works on the contents of the files
def read_file(filename):
"Read the contents of FILENAME and return as a string."
infile = codecs.open(filename, 'r', 'utf-8')
contents = infile.read()
infile.close()
return contents
Explanation: Collocational Analysis
"You shall know a word by the company it keeps!" These are the oft-quoted words of the linguist J.R. Firth in describing the meaning and spirit of collocational analysis. Collocation is a linguistic term for co-occuring. While most words have the possibility of co-occuring with most other words at some point in the English language, when there is a significant statistical relationship between two regularly co-occuring words, we can refer to these as collocates. One of the first, and most cited examples of collocational analysis concerns the words strong and powerful. While both words mean arguably the same thing, it is statistically more common to see the word strong co-occur with the word tea. Native speakers of English can immediately recognize the familiarity of strong tea as opposed to powerful tea, even though the two phrases both make sense in their own way (see Halliday, 1966 for more of this discussion). Interestingly, the same associations do not occur with the phrases strong men and powerful men, although in these instances, both phrases take on slightly different meanings.
These examples highlight the belief of Firthian linguists that the meaning of a word is not confined to the word itself, but lies in the associations that words have with other co-occurring words. Statistically significant collocates need not be adjacent, just proximal. The patterns of the words in a text, rather than the individual words themselves, have complex, relational units of meaning that allow us to ask questions about the use of language in specific discourses.
In this exercise we will determine the statistical significance of the words that most often co-occur with privacy in an attempt to better understand the meaning of the word as it is used in the Hansard Corpus. We will count the actual frequency of the co-occurrence, as well as use a number of different statistical tests of probability. These tests will be conducted first on one file from the corpus, then on the entire corpus itself.
Part 1: Collocational analysis on one file
This section will determine the statistically significant collocates that accompany the word privacy in the file for 2015. Testing file-by-file allows us to track the diachronic (time-based) change and use of the words.
Again, we'll begin by calling on all the <span style="cursor:help;" title="a set of instructions that performs a specific task"><b>functions</b></span> we will need. Remember that the first few sentences are calling on pre-installed <i>Python</i> <span style="cursor:help;" title="packages of functions and code that serve specific purposes"><b>modules</b></span>, and anything with a def at the beginning is a custom function built specifically for these exercises. The text in red describes the purpose of the function.
End of explanation
stopwords = read_file('../HansardStopwords.txt')
customStopwords = stopwords.split()
#default stopwords with custom words added
hansardStopwords = nltk.corpus.stopwords.words('english')
hansardStopwords += customStopwords
print(hansardStopwords)
Explanation: Collocational analysis is a frequency-based technique that uses word counts to determine significance. One of the problems with counting word frequencies, as we have seen in other sections, is that the most frequently occurring words in English are function words, like the, of, and and. For this reason, it is necessary to remove these words in order to obtain meaningful results. In text analysis, these high frequency words are compiled into lists called stopwords. While standard stopword lists are provided by the NLTK module, for the Hansard Corpus it was necessary to remove other kinds of words, like proper nouns (names and place names), and other words common to parliamentary proceedings (like Prime Minister, Speaker, etc.). These words, along with the standard stopwords, can be seen below.
Here, we use our read_file function to read in a text file of custom stopwords, assigning it to the variable customStopwords. We tokenize the list using the split function and then create a variable called hansardStopwords that incorporates the NLTK stopword list, adding the words from customStopwords if they don't already occur in the NLTK list.
End of explanation
file = '../Counting Word Frequencies/data/2015.txt'
name = get_filename(file)
# opens, reads, and tokenizes the file
text = read_file(file)
words = text.split()
clean = [w.lower() for w in words if w.isalpha()]
# removes stopwords
text = [w for w in clean if w not in hansardStopwords]
Explanation: Now, we use read_file to load the contents of the file for 2015. For consistency and to avoid file duplication, we're always reading the files from the same directory. Even though it was used for other sections, the data is the same. We read the contents of the text file, then remove the case and punctuation from the text, split the words into a list of tokens, and assign the words in each file to a list with the variable name text. What's new here, compared to other sections, is the additional removal of stopwords.
End of explanation
# creates a variable for the lemmatizing function
wnl = WordNetLemmatizer()
# lemmatizes all of the verbs
lemm = []
for word in text:
lemm.append(wnl.lemmatize(word, 'v'))
# lemmatizes all of the nouns
lems = []
for word in lemm:
lems.append(wnl.lemmatize(word, 'n'))
print("Number of words:", len(lems))
Explanation: Another type of processing required for the generation of accurate collocational statistics is called lemmatization. In linguistics, a lemma is the grammatical base or stem of a word. For example, the word protect is the lemma of the verbs protecting and protected, while ethic is the lemma of the noun ethics. When we lemmatize a text, we are removing the grammatical inflections of the word forms (like ing or ed). The purpose of lemmatization for the Hansard Corpus is to obtain more accurate statistics for collocation by avoiding multiple entries for similar but different word forms (like protecting, protected, and protect). For the purpose of this text analysis, I have decided to lemmatize only the nouns and verbs in the Hansard Corpus, as the word privacy is not easily modified by adjectives (or at all by adverbs).
The lemmatizer I have used for this project was developed by Princeton and is called <a href="http://wordnetweb.princeton.edu/perl/webwn" target="_blank">WordNet</a>. Lemmas and their grammatical inflections can be searched using their web interface.
In the code below, I load the WordNetLemmatizer (another function included in the NLTK module) into the variable wnl. Then, I iterate through the text, first lemmatizing the verbs (shown as v), then the nouns (shown as n). Unfortunately, the lemmatize function only accepts one part-of-speech tag at a time, so this code requires two pass-throughs of the text. I'm sure there is a more elegant way to construct this code, though I've not found it yet. This is another reason why I've decided only to lemmatize verbs and nouns, rather than including adjectives and adverbs.
End of explanation
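As a brief aside on the "more elegant way" mentioned above, the two passes can be collapsed into a single list comprehension; this sketch produces exactly the same output as the two loops:
# Equivalent one-pass version: lemmatize each word as a verb, then as a noun
lems_alt = [wnl.lemmatize(wnl.lemmatize(word, 'v'), 'n') for word in text]
print("Matches the two-pass result:", lems_alt == lems)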
print('NOUNS')
print('ethics:', text.count('ethics'))
print('ethics:', lems.count('ethics'))
print('ethic:', lems.count('ethic'))
print('\n')
print('VERBS')
print('protecting:', text.count('protecting'))
print('protecting:', lems.count('protecting'))
print('protected:', text.count('protected'))
print('protected:', lems.count('protected'))
print('protect:', lems.count('protect'))
Explanation: We need to make sure that the lemmatizer did something. Since we've only lemmatized for nouns and verbs, we check that here against the unlemmatized corpus, where text has not been lemmatized and lems has. Below we see that noun ethics appears <u>156 times</u> in the text variable and <u>0 times</u> in the lems variable. But the lemma for ethics: ethic, remains in the lems variable for a frequency of <u>161 times</u>. Similar values are repeated for the verb and variations of protect.
End of explanation
print('privacy:', text.count('privacy'))
print('privacy:', lems.count('privacy'))
Explanation: Here we check that the lemmatizer hasn't been over-zealous by determining the frequency for privacy before and after the lemmatizing function. The frequencies are the same, meaning we've not lost anything in the lemmatization.
End of explanation
# prints the 10 most common bigrams
colText = nltk.Text(lems)
colText.collocations(10)
Explanation: Part 1.1: Unfocused Bigram Search
Let's clarify some of the words we will be using in the rest of this exercise:
- ngram = catch-all term for multiple word occurrences
- bigram = word pairs
- trigram = three-word phrases
After the stopwords have been removed and the nouns and verbs lemmatized, we are ready to determine statistics for co-occurring words, or collocates. Any collocational test requires four pieces of data: the length of the text in which the words appear, the number of times each of the two words separately appears in the text, and the number of times the words occur together.
Before we focus our search on the word privacy, we will determine the 10 most commonly occurring bigrams (based on frequency) in the 2015 Hansard Corpus.
In this code we assign the lems variable to colText by adding the nltk.Text functionality. We can then use the NLTK function collocations to determine (in this case) the 10 most common bigrams. Changing the number in the brackets will change the number of results returned.
End of explanation
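For comparison with the bigram search, the same kind of unfocused query can be run on trigrams; a short illustrative sketch using NLTK's trigram finder (not required for the analysis that follows):
# Unfocused trigram search: the 10 trigrams with the highest likelihood-ratio scores
trigram_measures = nltk.collocations.TrigramAssocMeasures()
tri_finder = TrigramCollocationFinder.from_words(lems)
tri_finder.apply_freq_filter(4)   # ignore trigrams that appear fewer than 4 times
print(tri_finder.nbest(trigram_measures.likelihood_ratio, 10))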
# creates a list of bigrams (ngrams of 2), printing the first 5
colBigrams = list(nltk.ngrams(colText, 2))
colBigrams[:5]
Explanation: For reference, I ran an earlier test that shows the 10 most common bigrams without the stopwords removed. Duplicating this test only requires that stopwords not be removed as the text is being tokenized and cleaned. We can see that there is a clear difference in the types of results returned with and without stopwords applied. The list of words appearing above is much more interesting in terms of discourse analysis, when functional parliamentary phrases like Prime Minister and Parliamentary Secretary have been removed.
Here is a piece of code that shows how the ngram function works. It moves word by word through the text, pairing each word with the one that follows it. That's why the last word of one word pair becomes the first word of the next word pair.
We create the colBigrams variable from colText by specifying that we want a list of ngrams containing 2 words. We could obtain trigrams by changing the 2 in the first line of code to a 3. Then, in the second line of code, we display the first 5 results of the colBigrams variable with :5. We could display the first 10 by changing the number in the square brackets to :10 (removing the colon would instead return just the single bigram at that index).
End of explanation
print("Number of words:", len(lems))
print("Number of bigrams:", len(colBigrams))
Explanation: Here we check that the bigram function has gone through and counted the entire text. Having one less ngram than words is correct because of the way in which the ngrams are generated word-by-word in the test above.
End of explanation
# loads bigram code from NLTK
bigram_measures = nltk.collocations.BigramAssocMeasures()
# ngrams with 'privacy' as a member
privacy_filter = lambda *w: 'privacy' not in w
Explanation: Part 1.2: Focused Bigram Search
In this section we will focus our search on bigrams that contain the word privacy. First, we'll load the bigram tests from the NLTK module, then, we will create a filter that only searches for bigrams containing privacy. To search for bigrams containing other words, the word privacy in the second line of code can be changed to something else.
End of explanation
# bigrams
finder = BigramCollocationFinder.from_words(lems, window_size = 2)
# only bigrams that appear 4+ times
finder.apply_freq_filter(4)
# only bigrams that contain 'privacy'
finder.apply_ngram_filter(privacy_filter)
Explanation: Next, we will load our lemmatized corpus into the bigram collocation finder, apply a frequency filter that only considers bigrams that appear four or more times, and then apply our privacy filter to the results. The variable finder now contains a list of all the bigrams containing privacy that occur four or more times.
End of explanation
%matplotlib inline
fd = nltk.FreqDist(colText)
fd.plot(25)
Explanation: Distribution
Before I describe the statistical tests that we will use to determine the collocates for privacy, it is important to briefly discuss distribution. The chart below maps the distribution of the top 25 terms in the 2015 file.
This is important because some of the tests assume a normal distribution of words in the text. A normal distribution means that the majority of the words occur a majority of the time; it is represented in statistics as a bell curve. This means that 68% of the words would occur within one standard deviation of the mean (or average frequency of each word in the text), 95% within two standard deviations, and 99.7% within three standard deviations.
What this means is that tests that assume a normal distribution will work, but have inaccurate statistics to back them up. I've chosen to describe all of the collocational tests here as a matter of instruction and description, but it's important to understand the tests and what they assume before making research claims based on their results.
The code below calls on the NLTK function FreqDist. The function calculates the frequency of all the words in the variable and charts them in order from highest to lowest. Here I've only requested the first 25, though more or less can be displayed by changing the number in the brackets. Additionally, in order to have the chart displayed inline (and not as a popup), I've called the <i>magic</i> function matplotlib inline. <i>iPython</i> magic functions are identifiable by the <b>%</b> symbol.
End of explanation
print('privacy:',fd['privacy'], 'times or','{:.2%}'.format(float(colText.count("privacy"))/(len(colText))))
print('right:',fd['right'], 'times or','{:.2%}'.format(float(colText.count("right"))/(len(colText))))
print('work:',fd['work'], 'times or','{:.2%}'.format(float(colText.count("work")/(len(colText)))))
Explanation: As we can see from the chart above, work is the highest frequency word in our lemmatized corpus with stopwords applied, followed by right. The word privacy does not even occur in the list. The code below calculates the frequency and percentage of times these words occur in the text. While work makes up 0.56% of the total words in the text, privacy accounts for only 0.06%.
End of explanation
fdnums = []
for sample in fd:
fdnums.append(fd[sample])
numlist = list(map(int, fdnums))
print("Total of unique words:", len(numlist))
print("Total of words that appear only once:", len(fd.hapaxes()))
print("Percentage of words that appear only once:",'{:.2%}'.format(len(fd.hapaxes())/len(numlist)))
Explanation: To calculate the mean, and standard deviation, we must count the frequency of all the words in the text and append those values to a list. Since the numbers in the list will actually be represented as text (not as integers), we must add an extra line of code to map those values so they can be used mathematically, calling on the map function.
End of explanation
datamean = statistics.mean(numlist)
print("Mean:", '{:.2f}'.format(statistics.mean(numlist)))
print("Standard Deviation:", '{:.2f}'.format(statistics.pstdev(numlist,datamean)))
print("Variance:", '{:.2f}'.format(statistics.pvariance(numlist,datamean)))
Explanation: Once we have our numbers in a list, as the variable numlist, we can use the built in statistics library for our calculations. Below we've calculated the mean, standard deviation, and the variance.
These numbers prove that the numerical data has a non-normal distribution. The mean is relatively low, compared to the highest frequency word, work, which appears a total of <u>7588</u> times.
The low mean is due to the high number of low frequency words; there are <u>5847</u> words that appear only once, totalling 30% of the unique words in the entire set. The standard deviation is higher than the mean, which predicts a high variance of numbers in the set, something that is proven by the variance calculation. A large variance shows that the numbers in the set are far apart from the mean, and each other.
End of explanation
# filter results based on statistical test
# calulates the raw frequency as an actual number and percentage of total words
act = finder.ngram_fd.items()
raw = finder.score_ngrams(bigram_measures.raw_freq)
# student's - t score
tm = finder.score_ngrams(bigram_measures.student_t)
# pointwise mutual information score
pm = finder.score_ngrams(bigram_measures.pmi)
# chi-square score
ch = finder.score_ngrams(bigram_measures.chi_sq)
# log-likelihood ratio
log = finder.score_ngrams(bigram_measures.likelihood_ratio)
Explanation: Statistics
Raw Frequency
The frequency calculations determine both the actual number of occurences of the bigram in the corpus as well as the number of times the bigram occurs relative to the text as a whole (expressed as a percentage).
Student's-T
The Student's T-Score, also called the T-Score, measures the <b>confidence</b> of a claim of collocation and assigns a score based on that certainty. It is computed by subtracting the expected frequency of the bigram from the observed frequency of the bigram, and then dividing the result by the standard deviation, which is calculated based on the overall size of the corpus.
The benefit of using the T-Score is that it considers the evidence for collocates based on the overall amount of evidence provided by the size of the corpus. This differs from the PMI score (described below) which only considers strength based on relative frequencies. The drawbacks to the T-Score include its reliance on a normal distribution (due to the incorporation of standard deviation in the calculation), as well as its dependence on the overall size of the corpus. T-scores can't be compared across corpora of different sizes.
Pointwise Mutual Information
The Pointwise Mutual Information Score (known as PMI or MI) measures the <b>strength</b> of a collocation and assigns it a score. It is a probability-based calculation that compares the number of actual bigrams to the expected number of bigrams based on the relative frequency counts of the words. The test compares the expected figure to the observed figure, converting the difference to a number indicating the strength of the collocation.
The benefit of using PMI is that the value of the score is not dependent on the overall size of the corpus, meaning that PMI scores can be compared across corpora of different sizes, unlike the T-score (described above).
The drawback to the PMI is that it tends to give high scores to low frequency words when they occur most often in the proximity another word.
Chi-square
The Chi-square (or x<sup>2</sup>) measures the observed and expected frequencies of bigrams and assigns a score based on the amount of difference between the two using the standard deviation. The Chi-square is another test that relies on a normal distribution.
The Chi-square shares the benefit of the T-score in taking into account the overall size of the corpus. The drawback of the Chi-square is that it doesn't do well with sparse data. This means that low-frequency (but significant) bigrams may not be represented very well, unlike the scores assigned by the PMI.
Log-Likelihood Ratio
The Log-likelihood ratio calculates the size and significance between the observed and expected frequencies of bigrams and assigns a score based on the result, taking into account the overall size of the corpus. The larger the difference between the observed and expected, the higher the score, and the more statistically significant the collocate is.
The Log-likelihood ratio is my preferred test for collocates because it does not rely on a normal distribution, and for this reason, it can account for sparse or low frequency bigrams (unlike the Chi-square). But unlike the PMI, it does not over-represent low frequency bigrams with inflated scores, as the test is only reporting how much more likely it is that the frequencies are different than they are the same. The drawback to the Log-likelihood ratio, much like the t-score, is that it cannot be used to compare scores across corpora.
The following code filters the results of the focused bigram search based on the statistical tests as described above, assigning the results to a new variable based on the test.
End of explanation
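To make the PMI formula concrete, it can be computed by hand for a single bigram; digital privacy is used here because it is discussed below as the top log-likelihood bigram. The hand-computed value should agree closely with NLTK's score (word counts at the text boundary can cause tiny differences):
# PMI by hand: log2( P(w1,w2) / (P(w1) * P(w2)) )
import math
w1, w2 = 'digital', 'privacy'
N = len(lems)
c_pair = sum(1 for a, b in nltk.ngrams(lems, 2) if (a, b) == (w1, w2))
pmi_by_hand = math.log2((c_pair / N) / ((lems.count(w1) / N) * (lems.count(w2) / N)))
print("PMI({} {}) by hand: {:.3f}".format(w1, w2, pmi_by_hand))
print("PMI({} {}) from NLTK:".format(w1, w2), dict(pm).get((w1, w2)))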
print(log)
Explanation: Below are the results for the Log-likelihood test. The bigrams are sorted in order of significance, and the order of the words in the word-pairs shows their placement in the text. This means that the most significant bigram in the Log-likelihood test contained the words digital privacy, in that order. The word digital appears later on in the list with a lower score when it occurs after the word privacy. Scores above 3.8 are considered to be significant for the Log-likelihood test.
End of explanation
print(tabulate(log, headers = ["Collocate", "Log-Likelihood"], floatfmt=".3f", numalign="left"))
Explanation: Let's display this data as a table, and remove some of the extra decimal digits. Using the tabulate module, we call the variable log, set the table heading names (displayed in red), and set the number of decimal digits to 3 (indicated by floatfmt=".3f"), with the numbers aligned on the leftmost digit.
End of explanation
with open(name + 'CompleteLog.csv','w') as f:
w = csv.writer(f)
w.writerows(log)
Explanation: Here we print the results of this table to a CSV file.
End of explanation
##################################################################
################ sorts list of ACTUAL frequencies ################
##################################################################
# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, a in act:
# first word
prefix_keys[key[0]].append((key[1], a))
# second word
prefix_keys[key[1]].append((key[0], a))
# sort keyed bigrams by strongest association.
for key in prefix_keys:
prefix_keys[key].sort(key = lambda x: -x[1])
# remove the word privacy and display the first 50 results
actkeys = prefix_keys['privacy'][:50]
##################################################################
#### sorts list of RAW (expressed as percentage) frequencies #####
##################################################################
# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, r in raw:
# first word
prefix_keys[key[0]].append((key[1], r))
# second word
prefix_keys[key[1]].append((key[0], r))
# sort keyed bigrams by strongest association.
for key in prefix_keys:
prefix_keys[key].sort(key = lambda x: -x[1])
rawkeys = prefix_keys['privacy'][:50]
##################################################################
############### sorts list of log-likelihood scores ##############
##################################################################
# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, l in log:
# first word
prefix_keys[key[0]].append((key[1], l))
# second word
prefix_keys[key[1]].append((key[0], l))
# sort bigrams by strongest association
for key in prefix_keys:
prefix_keys[key].sort(key = lambda x: -x[1])
logkeys = prefix_keys['privacy'][:50]
Explanation: While the table above is nice, it isn't formatted exactly the way it could be, especially since we already know that privacy is one half of the bigram. I want to format the list so I can do some further processing in some spreadsheet software, including combining the scores of the bigrams (like digital privacy and privacy digital) so I can have one score for each word.
The code below sorts the lists generated by each test by the first word in the bigram, appending them to a dictionary called prefix_keys, where each word is a key and the score is the value. Then, we sort the keys by the value with the highest score, and assign the new list to a new variable with the word privacy removed. This code must be repeated for each test.
For the purposes of this analysis, we will only output the two frequency tests and the Log-likelihood test.
End of explanation
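The combining step described above can also be done directly in Python instead of a spreadsheet. The short sketch below is only an illustration (it assumes the log variable produced earlier, which holds ((word1, word2), score) pairs) and sums the scores of mirrored bigrams such as ('digital', 'privacy') and ('privacy', 'digital') into one score per collocate.
import collections
from tabulate import tabulate
# combine forward and backward bigram scores into a single score per collocate
combined = collections.defaultdict(float)
for (w1, w2), score in log:
    collocate = w2 if w1 == 'privacy' else w1
    combined[collocate] += score
combined_sorted = sorted(combined.items(), key=lambda x: -x[1])
print(tabulate(combined_sorted[:25], headers=["Collocate", "Combined Log-Likelihood"], floatfmt=".3f", numalign="left"))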
from tabulate import tabulate
print(tabulate(logkeys, headers = ["Collocate", "Log-Likelihood"], floatfmt=".3f", numalign="left"))
Explanation: Let's take a look at the new list of scores for the Log-likelihood test, with the word privacy removed. Nothing has changed here except the formatting.
End of explanation
with open(name + 'collocate_Act.csv','w') as f:
w = csv.writer(f)
w.writerows(actkeys)
with open(name + 'collocate_Raw.csv','w') as f:
w = csv.writer(f)
w.writerows(rawkeys)
with open(name + 'collocate_Log.csv','w') as f:
w = csv.writer(f)
w.writerows(logkeys)
Explanation: Again, just for reference, these are the 25 top Log-Likelihood scores for 2015 without the stopwords applied.
Here we will write the sorted results of the tests to a CSV file.
End of explanation
corpus = []
for filename in list_textfiles('../Counting Word Frequencies/data2'):
text_2 = read_file(filename)
words_2 = text_2.split()
clean_2 = [w.lower() for w in words_2 if w.isalpha()]
text_2 = [w for w in clean_2 if w not in hansardStopwords]
corpus.append(text_2)
lemm_2 = []
for doc in corpus:
for word in doc:
lemm_2.append(wnl.lemmatize(word, 'v'))
lems_2 = []
for word in lemm_2:
lems_2.append(wnl.lemmatize(word, 'n'))
# prints the 10 most common multi-word pairs (n-grams)
colText_2 = nltk.Text(lems_2)
colText_2.collocations(10)
# bigrams
finder_2 = BigramCollocationFinder.from_words(lems_2, window_size = 2)
# only bigrams that appear 10+ times
finder_2.apply_freq_filter(10)
# only bigrams that contain 'privacy'
finder_2.apply_ngram_filter(privacy_filter)
# filter results based on statistical test
act_2 = finder_2.ngram_fd.items()
raw_2 = finder_2.score_ngrams(bigram_measures.raw_freq)
log_2 = finder_2.score_ngrams(bigram_measures.likelihood_ratio)
##################################################################
################ sorts list of ACTUAL frequencies ################
##################################################################
# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, a in act_2:
# first word
prefix_keys[key[0]].append((key[1], a))
# second word
prefix_keys[key[1]].append((key[0], a))
# sort keyed bigrams by strongest association.
for key in prefix_keys:
prefix_keys[key].sort(key = lambda x: -x[1])
# remove the word privacy and display the first 50 results
actkeys_2 = prefix_keys['privacy'][:50]
##################################################################
#### sorts list of RAW (expressed as percentage) frequencies #####
##################################################################
# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, r in raw_2:
# first word
prefix_keys[key[0]].append((key[1], r))
# second word
prefix_keys[key[1]].append((key[0], r))
# sort keyed bigrams by strongest association.
for key in prefix_keys:
prefix_keys[key].sort(key = lambda x: -x[1])
rawkeys_2 = prefix_keys['privacy'][:50]
##################################################################
############### sorts list of log-likelihood scores ##############
##################################################################
# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, l in log_2:
# first word
prefix_keys[key[0]].append((key[1], l))
# second word
prefix_keys[key[1]].append((key[0], l))
# sort bigrams by strongest association
for key in prefix_keys:
prefix_keys[key].sort(key = lambda x: -x[1])
logkeys_2 = prefix_keys['privacy'][:50]
from tabulate import tabulate
print(tabulate(logkeys_2, headers = ["Collocate", "Log-Likelihood"], floatfmt=".3f", numalign="left"))
with open('Allcollocate_Act.csv','w') as f:
w = csv.writer(f)
w.writerows(actkeys_2)
with open('Allcollocate_Raw.csv','w') as f:
w = csv.writer(f)
w.writerows(rawkeys_2)
with open('Allcollocate_Log.csv','w') as f:
w = csv.writer(f)
w.writerows(logkeys_2)
Explanation: What is immediately apparent from the Log-likelihood scores is that there are distinct types of words that co-occur with the word privacy. The most frequently co-occurring words include digital, protect, ethic, access, right, protection, expectation, and information. Based on this list alone, we can deduce that privacy in the Hansard corpus is a serious topic; one that is concerned with ethics and rights, which are things commonly associated with the law. We can also see that privacy has both a digital and an informational aspect, which are things that have an expectation of both access and protection.
While it may seem obvious that these kinds of words would co-occur with privacy, we now have statistical evidence upon which to build our claim.
Part 2: Reading the whole corpus
Here we repeat the above code, only instead of using one file, we will combine all of the files to obtain the scores for the entire corpus.
End of explanation |
670 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How do I apply sort to a pandas groupby operation? The command below returns an error saying that 'bool' object is not callable | Problem:
import pandas as pd
df = pd.DataFrame({'cokey':[11168155,11168155,11168155,11168156,11168156],
'A':[18,0,56,96,0],
'B':[56,18,96,152,96]})
def g(df):
return df.groupby('cokey').apply(pd.DataFrame.sort_values, 'A')
result = g(df.copy()) |
671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
Step1: Reading and plotting the data
Step2: TODO
Step3: Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
Step4: Time to train the algorithm!
When we run the function, we'll obtain the following | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
Explanation: Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
End of explanation
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
Explanation: Reading and plotting the data
End of explanation
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
pass
# Output (prediction) formula
def output_formula(features, weights, bias):
pass
# Error (log-loss) formula
def error_formula(y, output):
pass
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
pass
Explanation: TODO: Implementing the basic functions
Here is your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
End of explanation
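If you want to check your work, here is one possible way to fill in the stubs above; it simply transcribes the four formulas and is a sketch rather than the official solution.
# One possible implementation of the formulas above (for comparison only)
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)
def error_formula(y, output):
    return - y * np.log(output) - (1 - y) * np.log(1 - output)
def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights += learnrate * d_error * x
    bias += learnrate * d_error
    return weights, bias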
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
Explanation: Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
End of explanation
train(X, y, epochs, learnrate, True)
Explanation: Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
End of explanation |
672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: # Creating and Manipulating Tensors
Learning objectives
Step2: ## Vector addition
You can perform many mathematical operations on tensors (TF API). The following code creates and manipulates two vectors (1-D tensors), each with six elements
Step3: ### Tensor shapes
Shapes are used to describe the size and number of dimensions of a tensor. The shape of a tensor is expressed as a list, where the element with index i represents the size along dimension i. The length of the list indicates the rank of the tensor (i.e., the number of dimensions).
For more information, see the TensorFlow documentation.
Some basic examples
Step4: ### Broadcasting
Mathematically, you can only perform element-wise operations (e.g., addition and equality) on tensors of the same shape. In TensorFlow, however, you can perform operations on tensors that would traditionally be incompatible. TensorFlow supports broadcasting (a concept borrowed from NumPy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting
Step5: ## Matrix multiplication
In linear algebra, when multiplying two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
It is valid to multiply a 3 × 4 matrix by a 4 × 2 matrix. The result is a 3 × 2 matrix.
It is invalid to multiply a 4 × 2 matrix by a 3 × 4 matrix.
Step6: ## Reshaping tensors
Because tensor addition and matrix multiplication place restrictions on their operands, TensorFlow programmers frequently need to reshape tensors.
To reshape a tensor, you can use the tf.reshape method.
For example, you can reshape an 8 × 2 tensor into a 2 × 8 or a 4 × 4 tensor
Step7: You can also use tf.reshape to change the number of dimensions (the "rank") of a tensor.
For example, you can reshape an 8 × 2 tensor into a 3-D 2 × 2 × 4 tensor or a 1-D 16-element tensor.
Step8: ### Exercise #1
Step9: ### Solution
Click below for a solution.
Step10: ## Variables, initialization and assignment
So far, all the operations we performed were on static values (tf.constant); calling eval() always returned the same result. TensorFlow also lets you define Variable objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (such as a distribution)
Step11: A peculiarity of TensorFlow is that variable initialization is not automatic. For example, the following block will raise an error
Step12: The easiest way to initialize a variable is by calling global_variables_initializer. Note the use of Session.run(), which is roughly equivalent to eval().
Step13: Once initialized, variables keep their value within the same session (however, when starting a new session, you will need to re-initialize them)
Step14: To change the value of a variable, use the assign op. Note that merely creating the assign op has no effect. As with initialization, you must run the assignment op to actually update the variable's value
Step15: There are many more topics about variables that we did not cover here, such as loading and saving. To learn more, see the TensorFlow docs.
### Exercise #2
Step16: ### Solution
Click below for a solution. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
from __future__ import print_function
import tensorflow as tf
Explanation: # Creating and Manipulating Tensors
Learning objectives:
* initialize and assign TensorFlow variables
* create and manipulate tensors
* review addition and multiplication in linear algebra (if these topics are new to you, see an introduction to addition and multiplication)
* become familiar with basic TensorFlow math and matrix operations
End of explanation
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create another six-element vector. Each element in the vector will be
# initialized to 1. The first argument is the shape of the tensor (more
# on shapes below).
ones = tf.ones([6], dtype=tf.int32)
# Add the two vectors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
# Create a session to run the default graph.
with tf.Session() as sess:
print(just_beyond_primes.eval())
Explanation: ## Vector addition
You can perform many mathematical operations on tensors (TF API). The following code creates and manipulates two vectors (1-D tensors), each with six elements:
End of explanation
with tf.Graph().as_default():
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
with tf.Session() as sess:
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.eval())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.eval())
Explanation: ### Tensor shapes
Shapes are used to describe the size and number of dimensions of a tensor. The shape of a tensor is expressed as a list, where the element with index i represents the size along dimension i. The length of the list indicates the rank of the tensor (i.e., the number of dimensions).
For more information, see the TensorFlow documentation.
Some basic examples:
End of explanation
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create a constant scalar with value 1.
ones = tf.constant(1, dtype=tf.int32)
# Add the two tensors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
with tf.Session() as sess:
print(just_beyond_primes.eval())
Explanation: ### Broadcasting
Mathematically, you can only perform element-wise operations (e.g., addition and equality) on tensors of the same shape. In TensorFlow, however, you can perform operations on tensors that would traditionally be incompatible. TensorFlow supports broadcasting (a concept borrowed from NumPy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting:
If an operand requires a tensor of size [6], a tensor of size [1] or [] can serve as that operand.
If an operand requires a tensor of size [4, 6], any of the following sizes can serve as that operand:
[1, 6]
[6]
[]
If an operation requires a tensor of size [3, 5, 6], any of the following sizes can serve as that operand:
[1, 5, 6]
[3, 1, 6]
[3, 5, 1]
[1, 1, 1]
[5, 6]
[1, 6]
[6]
[1]
[]
NOTE: When a tensor is broadcast, its entries are conceptually copied. (For performance reasons they are not actually copied; broadcasting was designed as a performance optimization.)
The full set of broadcasting rules is described in detail in the NumPy broadcasting documentation.
The following code performs the same tensor addition as before, but using broadcasting:
End of explanation
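As a further illustration of the shape rules listed above (this snippet is an addition, not part of the original notebook), a [2, 3] matrix can be added to a [3] vector or to a [2, 1] column, with the smaller operand broadcast along the missing dimension:
with tf.Graph().as_default():
    matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)   # shape [2, 3]
    row = tf.constant([10, 20, 30], dtype=tf.int32)                # shape [3]
    column = tf.constant([[100], [200]], dtype=tf.int32)           # shape [2, 1]
    with tf.Session() as sess:
        print((matrix + row).eval())      # the row is broadcast to both rows
        print((matrix + column).eval())   # the column is broadcast to all columns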
with tf.Graph().as_default():
# Create a matrix (2-d tensor) with 3 rows and 4 columns.
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# Create a matrix with 4 rows and 2 columns.
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`.
# The resulting matrix will have 3 rows and 2 columns.
matrix_multiply_result = tf.matmul(x, y)
with tf.Session() as sess:
print(matrix_multiply_result.eval())
Explanation: ## Matrix multiplication
In linear algebra, when multiplying two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
It is valid to multiply a 3 × 4 matrix by a 4 × 2 matrix. The result is a 3 × 2 matrix.
It is invalid to multiply a 4 × 2 matrix by a 3 × 4 matrix.
End of explanation
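To see the invalid case mentioned above, the following small check (an addition for illustration) tries to multiply the 4 × 2 matrix by the 3 × 4 matrix; TensorFlow rejects the shape mismatch when the op is built:
with tf.Graph().as_default():
    x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]], dtype=tf.int32)  # 3x4
    y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)                # 4x2
    try:
        tf.matmul(y, x)  # inner dimensions 2 and 3 do not match
    except ValueError as e:
        print("Caught expected error: ", e)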
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 2x8 matrix.
reshaped_2x8_matrix = tf.reshape(matrix, [2,8])
# Reshape the 8x2 matrix into a 4x4 matrix
reshaped_4x4_matrix = tf.reshape(matrix, [4,4])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.eval())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.eval())
Explanation: ## Reshaping tensors
Because tensor addition and matrix multiplication place restrictions on their operands, TensorFlow programmers frequently need to reshape tensors.
To reshape a tensor, you can use the tf.reshape method.
For example, you can reshape an 8 × 2 tensor into a 2 × 8 or a 4 × 4 tensor:
End of explanation
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 3-D 2x2x4 tensor.
reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4])
# Reshape the 8x2 matrix into a 1-D 16-element tensor.
one_dimensional_vector = tf.reshape(matrix, [16])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.eval())
print("1-D vector:")
print(one_dimensional_vector.eval())
Explanation: You can also use tf.reshape to change the number of dimensions (the "rank") of a tensor.
For example, you can reshape an 8 × 2 tensor into a 3-D 2 × 2 × 4 tensor or a 1-D 16-element tensor.
End of explanation
# Write your code for Task 1 here.
Explanation: ### Exercise #1: Reshape two tensors so that they can be multiplied.
The following two vectors are incompatible for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
Reshape these vectors into compatible operands so that matrix multiplication becomes possible.
Then, invoke a matrix multiplication operation on the reshaped tensors.
End of explanation
with tf.Graph().as_default(), tf.Session() as sess:
# Task: Reshape two tensors in order to multiply them
# Here are the original operands, which are incompatible
# for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
# We need to reshape at least one of these operands so that
# the number of columns in the first operand equals the number
# of rows in the second operand.
# Reshape vector "a" into a 2-D 2x3 matrix:
reshaped_a = tf.reshape(a, [2,3])
# Reshape vector "b" into a 2-D 3x1 matrix:
reshaped_b = tf.reshape(b, [3,1])
# The number of columns in the first matrix now equals
# the number of rows in the second matrix. Therefore, you
# can matrix mutiply the two operands.
c = tf.matmul(reshaped_a, reshaped_b)
print(c.eval())
# An alternate approach: [6,1] x [1, 3] -> [6,3]
Explanation: ### Solution
Click below for a solution.
End of explanation
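The alternate approach mentioned in the comment of the solution above ([6, 1] x [1, 3] -> [6, 3]) would look roughly like this; it is a sketch and not part of the original solution:
with tf.Graph().as_default(), tf.Session() as sess:
    a = tf.constant([5, 3, 2, 7, 1, 4])
    b = tf.constant([4, 6, 3])
    # Reshape "a" into a 6x1 matrix and "b" into a 1x3 matrix:
    c = tf.matmul(tf.reshape(a, [6, 1]), tf.reshape(b, [1, 3]))
    print(c.eval())  # the result is a 6x3 matrix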
g = tf.Graph()
with g.as_default():
# Create a variable with the initial value 3.
v = tf.Variable([3])
# Create a variable of shape [1], with a random initial value,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
Explanation: ## Variables, initialization and assignment
So far, all the operations we performed were on static values (tf.constant); calling eval() always returned the same result. TensorFlow also lets you define Variable objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (such as a distribution):
End of explanation
with g.as_default():
with tf.Session() as sess:
try:
v.eval()
except tf.errors.FailedPreconditionError as e:
print("Caught expected error: ", e)
Explanation: A peculiarity of TensorFlow is that variable initialization is not automatic. For example, the following block will raise an error:
End of explanation
with g.as_default():
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
sess.run(initialization)
# Now, variables can be accessed normally, and have values assigned to them.
print(v.eval())
print(w.eval())
Explanation: The easiest way to initialize a variable is by calling global_variables_initializer. Note the use of Session.run(), which is roughly equivalent to eval().
End of explanation
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# These three prints will print the same value.
print(w.eval())
print(w.eval())
print(w.eval())
Explanation: Once initialized, variables keep their value within the same session (however, when starting a new session, you will need to re-initialize them):
End of explanation
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# This should print the variable's initial value.
print(v.eval())
assignment = tf.assign(v, [7])
# The variable has not been changed yet!
print(v.eval())
# Execute the assignment op.
sess.run(assignment)
# Now the variable is updated.
print(v.eval())
Explanation: To change the value of a variable, use the assign op. Note that merely creating the assign op has no effect. As with initialization, you must run the assignment op to actually update the variable's value:
End of explanation
# Write your code for Task 2 here.
Explanation: There are many more topics about variables that we did not cover here, such as loading and saving. To learn more, see the TensorFlow docs.
### Exercise #2: Simulate 10 rolls of two dice.
Create a dice simulation that generates a 2-D 10 × 3 tensor with the following characteristics:
Columns 1 and 2 each hold one roll of one of the dice.
Column 3 holds the sum of columns 1 and 2 on the same row.
For example, the first row might hold the following values:
Column 1 holds 4
Column 2 holds 3
Column 3 holds 7
Consult the TensorFlow documentation to solve this task.
End of explanation
with tf.Graph().as_default(), tf.Session() as sess:
# Task 2: Simulate 10 throws of two dice. Store the results
# in a 10x3 matrix.
# We're going to place dice throws inside two separate
# 10x1 matrices. We could have placed dice throws inside
# a single 10x2 matrix, but adding different columns of
# the same matrix is tricky. We also could have placed
# dice throws inside two 1-D tensors (vectors); doing so
# would require transposing the result.
dice1 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
dice2 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
# We may add dice1 and dice2 since they share the same shape
# and size.
dice_sum = tf.add(dice1, dice2)
# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(
values=[dice1, dice2, dice_sum], axis=1)
# The variables haven't been initialized within the graph yet,
# so let's remedy that.
sess.run(tf.global_variables_initializer())
print(resulting_matrix.eval())
Explanation: ### Solution
Click below for a solution.
End of explanation |
673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This tutorial introduces the basic features for simulating titratable systems via the constant pH method.
The constant pH method is one of the methods implemented for simulating systems with chemical reactions within the Reaction Ensemble module. It is a Monte Carlo method designed to model an acid-base ionization reaction at a given (fixed) value of solution pH.
We will consider a homogeneous aqueous solution of a titratable acidic species $\mathrm{HA}$ that can dissociate in a reaction, that is characterized by the equilibrium constant $\mathrm{p}K_A=-\log_{10} K_A$
$$\mathrm{HA} \Leftrightarrow \mathrm{A}^- + \mathrm{H}^+$$
If $N_0 = N_{\mathrm{HA}} + N_{\mathrm{A}^-}$ is the number of titratable groups in solution, then we define the degree of dissociation $\alpha$ as
Step1: After defining the simulation parameters, we set up the system that we want to simulate. It is a polyelectrolyte chain with some added salt that is used to control the ionic strength of the solution. For the first run, we set up the system without any steric repulsion and without electrostatic interactions. In the next runs, we will add the steric repulsion and electrostatic interactions to observe their effect on the ionization.
Step2: After creating the particles, we initialize the reaction ensemble by setting the temperature, exclusion radius and seed of the random number generator. We set the temperature to unity, which determines that our reduced unit of energy is $\varepsilon=1k_{\mathrm{B}}T$. In an interacting system the exclusion radius ensures that particle insertions too close to other particles are not attempted. Such insertions would make the subsequent Langevin dynamics integration unstable. If the particles are not interacting, we can set the exclusion radius to $0.0$. Otherwise, $1.0$ is a good value. We set the seed to a constant value to ensure reproducible results.
Step3: The next step is to define the reaction system. The order in which species are written in the lists of reactants and products is very important for ESPResSo. When a reaction move is performed, the identity of the first species in the list of reactants is changed to the first species in the list of products, the second reactant species is changed to the second product species, and so on. If the reactant list has more species than the product list, then the excess reactant species are deleted from the system. If the product list has more species than the reactant list, then the excess product species are created and randomly placed inside the simulation box. This convention is especially important if some of the species belong to a chain-like molecule, and cannot be placed at an arbitrary position.
In the example below, the order of reactants and products ensures that the identity of $\mathrm{HA}$ is changed to $\mathrm{A^{-}}$ and vice versa, while $\mathrm{H^{+}}$ is inserted/deleted in the reaction move. Reversing the order of products in our reaction (i.e. from product_types=[TYPE_A, TYPE_B] to product_types=[TYPE_B, TYPE_A]) would result in a reaction move where the identity of HA would be changed to $\mathrm{H^{+}}$, while $\mathrm{A^{-}}$ would be inserted/deleted at a random position in the box. We also assign charges to each type because the charge will play an important role later, in simulations with electrostatic interactions.
Step4: Next, we perform simulations at different pH values. The system must be equilibrated at each pH before taking samples.
Calling RE.reaction(X) attempts in total X reactions (in both backward and forward direction).
Step5: Results
Finally we plot our results and compare them to the analytical results obtained from the Henderson-Hasselbalch equation.
Statistical Uncertainty
The molecular simulation produces a sequence of snapshots of the system, that
constitute a Markov chain. It is a sequence of realizations of a random process, where
the next value in the sequence depends on the preceding one. Therefore,
the subsequent values are correlated. To estimate statistical error of the averages
determined in the simulation, one needs to correct for the correlations.
Here, we will use a rudimentary way of correcting for correlations, termed the binning method.
We refer the reader to specialized literature for a more sophisticated discussion, for example Janke2002. The general idea is to group a long sequence of correlated values into a rather small number of blocks, and compute an average per each block. If the blocks are big enough, they
can be considered uncorrelated, and one can apply the formula for the standard error of the mean of uncorrelated values. If the number of blocks is small, then they are uncorrelated but the obtained error estimates have a high uncertainty. If the number of blocks is high, then the blocks are too short to be uncorrelated, and the obtained error estimates are systematically lower than the correct value. Therefore, the method works well only if the sample size is much greater than the autocorrelation time, so that it can be divided into a sufficient number of mutually uncorrelated blocks.
In the example below, we use a fixed number of 16 blocks to obtain the error estimates.
Step6: The simulation results for the non-interacting case compare very well with the analytical solution of the Henderson-Hasselbalch equation. There are only minor deviations, and the estimated errors are small too. This situation will change when we introduce interactions.
It is useful to check whether the estimated errors are consistent with the assumptions that were used to obtain them. To do this, we follow Janke2002 to estimate the number of uncorrelated samples per block, and check whether each block contains a sufficient number of uncorrelated samples (we choose 10 uncorrelated samples per block as the threshold value).
Intentionally, we make our simulation slightly too short, so that it does not produce enough uncorrelated samples. We encourage the reader to vary the number of blocks or the number of samples to see how the estimated error changes with these parameters.
Step7: To look in more detail at the statistical accuracy, it is useful to plot the deviations from the analytical result. This provides another way to check the consistency of error estimates. About 68% of the results should be within one error bar from the analytical result, whereas about 95% of the results should be within two times the error bar. Indeed, if you plot the deviations by running the script below, you should observe that most of the results are within one error bar from the analytical solution, a smaller fraction of the results is slightly further than one error bar, and one or two might be about two error bars apart. Again, this situation will change when we introduce interactions because the ionization of the interacting system should deviate from the Henderson-Hasselbalch equation.
Step8: The Neutralizing Ion $\mathrm{B^+}$
Up to now we did not discuss the chemical nature of the neutralizer $\mathrm{B^+}$. The added salt is not relevant in this context, therefore we omit it from the discussion. The simplest case to consider is what happens if you add the acidic polymer to pure water ($\mathrm{pH} = 7$). Some of the acid groups dissociate and release $\mathrm{H^+}$ ions into the solution. The pH decreases to a value that depends on $\mathrm{p}K_{\mathrm{A}}$ and on the concentration of ionizable groups. Now, three ionic species are present in the solution
Step9: The plot shows that at intermediate pH the concentration of $\mathrm{B^+}$ ions is approximately equal to the concentration of $\mathrm{M^+}$ ions. Only at one specific $\mathrm{pH}$ the concentration of $\mathrm{B^+}$ ions is equal to the concentration of $\mathrm{M^+}$ ions. This is the pH one obtains when dissolving the weak acid $\mathrm{A}$ in pure water.
In an ideal system, the ions missing in the simulation have no effect on the ionization degree. In an interacting system, the presence of ions in the box affects the properties of other parts of the system. Therefore, in an interacting system this discrepancy is harmless only at intermediate pH. The effect of the small ions on the rest of the system can be estimated from the overall ionic strength.
$$ I = \frac{1}{2}\sum_i c_i z_i^2 $$ | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import scipy.constants # physical constants
import espressomd
import pint # module for working with units and dimensions
from espressomd import electrostatics, polymer, reaction_ensemble
from espressomd.interactions import HarmonicBond
ureg = pint.UnitRegistry()
# sigma=0.355 nm is a commonly used particle size in coarse-grained simulations
ureg.define('sigma = 0.355 * nm = sig')
sigma = 1.0 * ureg.sigma # variable that has the value and dimension of one sigma
# N_A is the numerical value of Avogadro constant in units 1/mole
N_A = scipy.constants.N_A/ureg.mole
Bjerrum = 0.715 * ureg.nanometer # Bjerrum length at 300K
# define that concentration is a quantity that must have a value and a unit
concentration = ureg.Quantity
# System parameters
#############################################################
# 0.01 mol/L is a reasonable concentration that could be used in experiments
c_acid = concentration(1e-3, 'mol/L')
# Using the constant-pH method is safe if Ionic_strength > max(10**(-pH), 10**(-pOH) ) and C_salt > C_acid
# additional salt to control the ionic strength
c_salt = concentration(2*c_acid)
# In the ideal system, concentration is arbitrary (see Henderson-Hasselbalch equation)
# but it is important in the interacting system
N_acid = 20 # number of titratable units in the box
PROB_REACTION = 0.5 # select the reaction move with 50% probability
# probability of the reaction is adjustable parameter of the method that affects the speed of convergence
# Simulate an interacting system with steric repulsion (Warning: it will be slower than without WCA!)
USE_WCA = False
# Simulate an interacting system with electrostatics (Warning: it will be very slow!)
USE_ELECTROSTATICS = False
# particle types of different species
TYPE_HA = 0
TYPE_A = 1
TYPE_B = 2
TYPE_Na = 3
TYPE_Cl = 4
q_HA = 0
q_A = -1
q_B = +1
q_Na = +1
q_Cl = -1
# acidity constant
pK = 4.88
K = 10**(-pK)
offset = 2.0 # range of pH values to be used pK +/- offset
num_pHs = 15 # number of pH values
pKw = 14.0 # autoprotolysis constant of water
# dependent parameters
Box_V = (N_acid/N_A/c_acid)
Box_L = np.cbrt(Box_V.to('m**3'))
if tuple(map(int, pint.__version__.split('.'))) < (0, 10):
Box_L *= ureg('m')
# we shall often need the numerical value of box length in sigma
Box_L_in_sigma = Box_L.to('sigma').magnitude
# unfortunately, pint module cannot handle cube root of m**3, so we need to explicitly set the unit
N_salt = int(c_salt*Box_V*N_A) # number of salt ion pairs in the box
# print the values of dependent parameters to check for possible rounding errors
print("N_salt: {0:.1f}, N_acid: {1:.1f}, N_salt/N_acid: {2:.7f}, c_salt/c_acid: {3:.7f}".format(
N_salt, N_acid, 1.0*N_salt/N_acid, c_salt/c_acid))
n_blocks = 16 # number of block to be used in data analysis
desired_block_size = 10 # desired number of samples per block
# number of reaction samples per each pH value
num_samples = int(n_blocks * desired_block_size / PROB_REACTION)
pHmin = pK-offset # lowest pH value to be used
pHmax = pK+offset # highest pH value to be used
pHs = np.linspace(pHmin, pHmax, num_pHs) # list of pH values
# Initialize the ESPResSo system
##############################################
system = espressomd.System(box_l=[Box_L_in_sigma] * 3)
system.time_step = 0.01
system.cell_system.skin = 0.4
system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=7)
np.random.seed(seed=10) # initialize the random number generator in numpy
Explanation: Introduction
This tutorial introduces the basic features for simulating titratable systems via the constant pH method.
The constant pH method is one of the methods implemented for simulating systems with chemical reactions within the Reaction Ensemble module. It is a Monte Carlo method designed to model an acid-base ionization reaction at a given (fixed) value of solution pH.
We will consider a homogeneous aqueous solution of a titratable acidic species $\mathrm{HA}$ that can dissociate in a reaction, that is characterized by the equilibrium constant $\mathrm{p}K_A=-\log_{10} K_A$
$$\mathrm{HA} \Leftrightarrow \mathrm{A}^- + \mathrm{H}^+$$
If $N_0 = N_{\mathrm{HA}} + N_{\mathrm{A}^-}$ is the number of titratable groups in solution, then we define the degree of dissociation $\alpha$ as:
$$\alpha = \dfrac{N_{\mathrm{A}^-}}{N_0}.$$
This is one of the key quantities that can be used to describe the acid-base equilibrium. Usually, the goal of the simulation is to predict the value of $\alpha$ under given conditions in a complex system with interactions.
The Chemical Equilibrium and Reaction Constant
The equilibrium reaction constant describes the chemical equilibrium of a given reaction. The values of equilibrium constants for various reactions can be found in tables. For the acid-base ionization reaction, the equilibrium constant is conventionally called the acidity constant, and it is defined as
\begin{equation}
K_A = \frac{a_{\mathrm{H}^+} a_{\mathrm{A}^-} } {a_{\mathrm{HA}}}
\end{equation}
where $a_i$ is the activity of species $i$. It is related to the chemical potential $\mu_i$ and to the concentration $c_i$
\begin{equation}
\mu_i = \mu_i^\mathrm{ref} + k_{\mathrm{B}}T \ln a_i
\,,\qquad
a_i = \frac{c_i \gamma_i}{c^{\ominus}}\,,
\end{equation}
where $\gamma_i$ is the activity coefficient, and $c^{\ominus}$ is the (arbitrary) reference concentration, often chosen to be the standard concentration, $c^{\ominus} = 1\,\mathrm{mol/L}$, and $\mu_i^\mathrm{ref}$ is the reference chemical potential.
Note that $K$ is a dimensionless quantity but its numerical value depends on the choice of $c^{\ominus}$.
For an ideal system, $\gamma_i=1$ by definition, whereas for an interacting system $\gamma_i$ is a non-trivial function of the interactions. For an ideal system we can rewrite $K$ in terms of equilibrium concentrations
\begin{equation}
K_A \overset{\mathrm{ideal}}{=} \frac{c_{\mathrm{H}^+} c_{\mathrm{A}^-} } {c_{\mathrm{HA}} c^{\ominus}}
\end{equation}
The ionization degree can also be expressed via the ratio of concentrations:
\begin{equation}
\alpha
= \frac{N_{\mathrm{A}^-}}{N_0}
= \frac{N_{\mathrm{A}^-}}{N_{\mathrm{HA}} + N_{\mathrm{A}^-}}
= \frac{c_{\mathrm{A}^-}}{c_{\mathrm{HA}}+c_{\mathrm{A}^-}}
= \frac{c_{\mathrm{A}^-}}{c_{\mathrm{A}}}.
\end{equation}
where $c_{\mathrm{A}}=c_{\mathrm{HA}}+c_{\mathrm{A}^-}$ is the total concentration of titratable acid groups irrespective of their ionization state.
Then, we can characterize the acid-base ionization equilibrium using the ionization degree and pH, defined as
\begin{equation}
\mathrm{pH} = -\log_{10} a_{\mathrm{H^{+}}} \overset{\mathrm{ideal}}{=} -\log_{10} (c_{\mathrm{H^{+}}} / c^{\ominus})
\end{equation}
Substituting for the ionization degree and pH into the expression for $K_A$ we obtain the Henderson-Hasselbalch equation
\begin{equation}
\mathrm{pH}-\mathrm{p}K_A = \log_{10} \frac{\alpha}{1-\alpha}
\end{equation}
One result of the Henderson-Hasselbalch equation is that at a fixed pH value the ionization degree of an ideal acid is independent of concentration. Another implication is, that the degree of ionization does not depend on the absolute values of $\mathrm{p}K_A$ and $\mathrm{pH}$, but only on their difference, $\mathrm{pH}-\mathrm{p}K_A$.
Constant pH Method
The constant pH method Reed1992 is designed to simulate an acid-base ionization reaction at a given pH. It assumes that the simulated system is coupled to an implicit reservoir of $\mathrm{H^+}$ ions but exchange of ions with this reservoir is not explicitly simulated. Therefore, the concentration of ions in the simulation box is not equal to the concentration of $\mathrm{H^+}$ ions at the chosen pH. This may lead to artifacts when simulating interacting systems, especially at high or low pH values. Discussion of these artifacts is beyond the scope of this tutorial (see e.g. Landsgesell2019 for further details).
In ESPResSo, the forward step of the ionization reaction (from left to right) is implemented by
changing the chemical identity (particle type) of a randomly selected $\mathrm{HA}$ particle to $\mathrm{A}^-$, and inserting another particle that represents a neutralizing counterion. The neutralizing counterion is not necessarily an $\mathrm{H^+}$ ion. Therefore, we give it a generic name $\mathrm{B^+}$. In the reverse direction (from right to left), the chemical identity (particle type) of a randomly selected $\mathrm{A}^{-}$ is changed to $\mathrm{HA}$, and a randomly selected $\mathrm{B}^+$ is deleted from the simulation box. The probability of proposing the forward reaction step is $P_\text{prop}=N_\mathrm{HA}/N_0$, and probability of proposing the reverse step is $P_\text{prop}=N_\mathrm{A}/N_0$. The trial move is accepted with the acceptance probability
$$ P_{\mathrm{acc}} = \operatorname{min}\left(1, \exp(-\beta \Delta E_\mathrm{pot} \pm \ln(10) \cdot (\mathrm{pH - p}K_A) ) \right)$$
Here $\Delta E_\text{pot}$ is the potential energy change due to the reaction, while $\text{pH - p}K$ is an input parameter.
The signs $\pm 1$ correspond to the forward and reverse direction of the ionization reaction, respectively.
Setup
The inputs that we need to define our system in the simulation include
* concentration of the titratable units c_acid
* dissociation constant pK
* Bjerrum length Bjerrum
* system size (given by the number of titratable units) N_acid
* concentration of added salt c_salt_SI
* pH
From the concentration of titratable units and the number of titratable units we calculate the box length.
We create a system with this box size.
From the salt concentration we calculate the number of additional salt ion pairs that should be present in the system.
We set the dissociation constant of the acid to $\mathrm{p}K_A=4.88$, that is the acidity constant of propionic acid. We choose propionic acid because its structure is closest to the repeating unit of poly(acrylic acid), the most commonly used weak polyacid.
We will simulate multiple pH values, the range of which is determined by the parameters offset and num_pHs.
End of explanation
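To make the acceptance rule above concrete, here is a schematic sketch of how a single constant-pH trial move could be accepted or rejected. It is not the ESPResSo-internal implementation; the names delta_E_pot, pH and pK are placeholders for the quantities defined above, and kT is taken as 1 in reduced units.
# Schematic Metropolis criterion for a constant-pH move (illustration only)
def accept_constant_pH_move(delta_E_pot, pH, pK, forward, kT=1.0):
    # forward=True corresponds to HA -> A- + B+, forward=False to the reverse move
    sign = +1.0 if forward else -1.0
    P_acc = min(1.0, np.exp(-delta_E_pot / kT + sign * np.log(10.0) * (pH - pK)))
    return np.random.random() < P_acc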
# create the particles
##################################################
# we need to define bonds before creating polymers
hb = HarmonicBond(k=30, r_0=1.0)
system.bonded_inter.add(hb)
# create the polymer composed of ionizable acid groups, initially in the ionized state
polymers = polymer.linear_polymer_positions(n_polymers=1,
beads_per_chain=N_acid,
bond_length=0.9, seed=23)
for polymer in polymers:
for index, position in enumerate(polymer):
id = len(system.part)
system.part.add(id=id, pos=position, type=TYPE_A, q=q_A)
if index > 0:
system.part[id].add_bond((hb, id - 1))
# add the corresponding number of H+ ions
for index in range(N_acid):
system.part.add(pos=np.random.random(3)*Box_L_in_sigma, type=TYPE_B, q=q_B)
# add salt ion pairs
for index in range(N_salt):
system.part.add(pos=np.random.random(
3)*Box_L_in_sigma, type=TYPE_Na, q=q_Na)
system.part.add(pos=np.random.random(
3)*Box_L_in_sigma, type=TYPE_Cl, q=q_Cl)
# set up the WCA interaction between all particle pairs
if USE_WCA:
types = [TYPE_HA, TYPE_A, TYPE_B, TYPE_Na, TYPE_Cl]
for type_1 in types:
for type_2 in types:
system.non_bonded_inter[type_1, type_2].lennard_jones.set_params(
epsilon=1.0, sigma=1.0,
cutoff=2**(1.0 / 6), shift="auto")
# run a steepest descent minimization to relax overlaps
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.1)
system.integrator.run(20)
system.integrator.set_vv() # to switch back to velocity Verlet
# short integration to let the system relax
system.integrator.run(steps=1000)
# if needed, set up and tune the Coulomb interaction
if USE_ELECTROSTATICS:
print("set up and tune p3m, please wait....")
p3m = electrostatics.P3M(prefactor=Bjerrum.to(
'sigma').magnitude, accuracy=1e-3)
system.actors.add(p3m)
p3m_params = p3m.get_params()
# for key in list(p3m_params.keys()):
# print("{} = {}".format(key, p3m_params[key]))
print(p3m.get_params())
print("p3m, tuning done")
else:
# this speeds up the simulation of dilute systems with small particle numbers
system.cell_system.set_n_square()
print("Done adding particles and interactions")
Explanation: After defining the simulation parameters, we set up the system that we want to simulate. It is a polyelectrolyte chain with some added salt that is used to control the ionic strength of the solution. For the first run, we set up the system without any steric repulsion and without electrostatic interactions. In the next runs, we will add the steric repulsion and electrostatic interactions to observe their effect on the ionization.
End of explanation
RE = reaction_ensemble.ConstantpHEnsemble(
temperature=1, exclusion_radius=1.0, seed=77)
Explanation: After creating the particles, we initialize the reaction ensemble by setting the temperature, exclusion radius and seed of the random number generator. We set the temperature to unity, which determines that our reduced unit of energy is $\varepsilon=1k_{\mathrm{B}}T$. In an interacting system the exclusion radius ensures that particle insertions too close to other particles are not attempted. Such insertions would make the subsequent Langevin dynamics integration unstable. If the particles are not interacting, we can set the exclusion radius to $0.0$. Otherwise, $1.0$ is a good value. We set the seed to a constant value to ensure reproducible results.
End of explanation
RE.add_reaction(gamma=K, reactant_types=[TYPE_HA], reactant_coefficients=[1],
product_types=[TYPE_A, TYPE_B], product_coefficients=[1, 1],
default_charges={TYPE_HA: q_HA, TYPE_A: q_A, TYPE_B: q_B})
print(RE.get_status())
Explanation: The next step is to define the reaction system. The order in which species are written in the lists of reactants and products is very important for ESPResSo. When a reaction move is performed, the identity of the first species in the list of reactants is changed to the first species in the list of products, the second reactant species is changed to the second product species, and so on. If the reactant list has more species than the product list, then the excess reactant species are deleted from the system. If the product list has more species than the reactant list, then the excess product species are created and randomly placed inside the simulation box. This convention is especially important if some of the species belong to a chain-like molecule, and cannot be placed at an arbitrary position.
In the example below, the order of reactants and products ensures that the identity of $\mathrm{HA}$ is changed to $\mathrm{A^{-}}$ and vice versa, while $\mathrm{H^{+}}$ is inserted/deleted in the reaction move. Reversing the order of products in our reaction (i.e. from product_types=[TYPE_A, TYPE_B] to product_types=[TYPE_B, TYPE_A]) would result in a reaction move where the identity of HA would be changed to $\mathrm{H^{+}}$, while $\mathrm{A^{-}}$ would be inserted/deleted at a random position in the box. We also assign charges to each type because the charge will play an important role later, in simulations with electrostatic interactions.
End of explanation
# the reference data from Henderson-Hasselbalch equation
def ideal_alpha(pH, pK):
return 1. / (1 + 10**(pK - pH))
# empty lists as placeholders for collecting data
numAs_at_each_pH = [] # number of A- species observed at each sample
# run a productive simulation and collect the data
print("Simulated pH values: ", pHs)
for pH in pHs:
print("Run pH {:.2f} ...".format(pH))
RE.constant_pH = pH
numAs_current = [] # temporary data storage for a given pH
RE.reaction(20*N_acid + 1) # pre-equilibrate to the new pH value
for i in range(num_samples):
if np.random.random() < PROB_REACTION:
# should be at least one reaction attempt per particle
RE.reaction(N_acid + 1)
elif USE_WCA:
system.integrator.run(steps=1000)
numAs_current.append(system.number_of_particles(type=TYPE_A))
numAs_at_each_pH.append(numAs_current)
print("measured number of A-: {0:.2f}, (ideal: {1:.2f})".format(
np.mean(numAs_current), N_acid*ideal_alpha(pH, pK)))
print("finished")
Explanation: Next, we perform simulations at different pH values. The system must be equilibrated at each pH before taking samples.
Calling RE.reaction(X) attempts X reactions in total (in both the backward and forward direction).
End of explanation
# statistical analysis of the results
def block_analyze(input_data, n_blocks=16):
data = np.array(input_data)
block = 0
# this number of blocks is recommended by Janke as a reasonable compromise
# between the conflicting requirements on block size and number of blocks
block_size = int(data.shape[1] / n_blocks)
print("block_size:", block_size)
# initialize the array of per-block averages
block_average = np.zeros((n_blocks, data.shape[0]))
# calculate averages per each block
for block in range(0, n_blocks):
block_average[block] = np.average(
data[:, block * block_size: (block + 1) * block_size], axis=1)
# calculate the average and average of the square
av_data = np.average(data, axis=1)
av2_data = np.average(data * data, axis=1)
# calculate the variance of the block averages
block_var = np.var(block_average, axis=0)
# calculate standard error of the mean
err_data = np.sqrt(block_var / (n_blocks - 1))
# estimate autocorrelation time using the formula given by Janke
# this assumes that the errors have been correctly estimated
tau_data = np.zeros(av_data.shape)
for val in range(0, av_data.shape[0]):
if av_data[val] == 0:
# unphysical value marks a failure to compute tau
tau_data[val] = -1.0
else:
tau_data[val] = 0.5 * block_size * n_blocks / (n_blocks - 1) * block_var[val] \
/ (av2_data[val] - av_data[val] * av_data[val])
return av_data, err_data, tau_data, block_size
# estimate the statistical error and the autocorrelation time using the formula given by Janke
av_numAs, err_numAs, tau, block_size = block_analyze(numAs_at_each_pH)
print("av = ", av_numAs)
print("err = ", err_numAs)
print("tau = ", tau)
# calculate the average ionization degree
av_alpha = av_numAs/N_acid
err_alpha = err_numAs/N_acid
# plot the simulation results compared with the ideal titration curve
plt.figure(figsize=(10, 6), dpi=80)
plt.errorbar(pHs - pK, av_alpha, err_alpha, marker='o', linestyle='none',
label=r"simulation")
pHs2 = np.linspace(pHmin, pHmax, num=50)
plt.plot(pHs2 - pK, ideal_alpha(pHs2, pK), label=r"ideal")
plt.xlabel('pH-p$K$', fontsize=16)
plt.ylabel(r'$\alpha$', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: Results
Finally we plot our results and compare them to the analytical results obtained from the Henderson-Hasselbalch equation.
Statistical Uncertainty
The molecular simulation produces a sequence of snapshots of the system, that
constitute a Markov chain. It is a sequence of realizations of a random process, where
the next value in the sequence depends on the preceding one. Therefore,
the subsequent values are correlated. To estimate statistical error of the averages
determined in the simulation, one needs to correct for the correlations.
Here, we will use a rudimentary way of correcting for correlations, termed the binning method.
We refer the reader to specialized literature for a more sophisticated discussion, for example Janke2002. The general idea is to group a long sequence of correlated values into a rather small number of blocks, and compute an average per each block. If the blocks are big enough, they
can be considered uncorrelated, and one can apply the formula for the standard error of the mean of uncorrelated values. If the number of blocks is small, then they are uncorrelated but the obtained error estimates have a high uncertainty. If the number of blocks is high, then the blocks are too short to be uncorrelated, and the obtained error estimates are systematically lower than the correct value. Therefore, the method works well only if the sample size is much greater than the autocorrelation time, so that it can be divided into a sufficient number of mutually uncorrelated blocks.
In the example below, we use a fixed number of 16 blocks to obtain the error estimates.
End of explanation
# check if the blocks contain enough data for reliable error estimates
print("uncorrelated samples per block:\nblock_size/tau = ",
block_size/tau)
threshold = 10. # block size should be much greater than the correlation time
if np.any(block_size / tau < threshold):
print("\nWarning: some blocks may contain less than ", threshold, "uncorrelated samples."
"\nYour error estimated may be unreliable."
"\nPlease, check them using a more sophisticated method or run a longer simulation.")
print("? block_size/tau > threshold ? :", block_size/tau > threshold)
else:
print("\nAll blocks seem to contain more than ", threshold, "uncorrelated samples.\
Error estimates should be OK.")
Explanation: The simulation results for the non-interacting case compare very well with the analytical solution of the Henderson-Hasselbalch equation. There are only minor deviations, and the estimated errors are small too. This situation will change when we introduce interactions.
It is useful to check whether the estimated errors are consistent with the assumptions that were used to obtain them. To do this, we follow Janke2002 to estimate the number of uncorrelated samples per block, and check whether each block contains a sufficient number of uncorrelated samples (we choose 10 uncorrelated samples per block as the threshold value).
Intentionally, we make our simulation slightly too short, so that it does not produce enough uncorrelated samples. We encourage the reader to vary the number of blocks or the number of samples to see how the estimated error changes with these parameters.
End of explanation
# plot the deviations from the ideal result
plt.figure(figsize=(10, 6), dpi=80)
ylim = np.amax(abs(av_alpha-ideal_alpha(pHs, pK)))
plt.ylim((-1.5*ylim, 1.5*ylim))
plt.errorbar(pHs - pK, av_alpha-ideal_alpha(pHs, pK),
err_alpha, marker='o', linestyle='none', label=r"simulation")
plt.plot(pHs - pK, 0.0*ideal_alpha(pHs, pK), label=r"ideal")
plt.xlabel('pH-p$K$', fontsize=16)
plt.ylabel(r'$\alpha - \alpha_{ideal}$', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: To look in more detail at the statistical accuracy, it is useful to plot the deviations from the analytical result. This provides another way to check the consistency of error estimates. About 68% of the results should be within one error bar from the analytical result, whereas about 95% of the results should be within two times the error bar. Indeed, if you plot the deviations by running the script below, you should observe that most of the results are within one error bar from the analytical solution, a smaller fraction of the results is slightly further than one error bar, and one or two might be about two error bars apart. Again, this situation will change when we introduce interactions because the ionization of the interacting system should deviate from the Henderson-Hasselbalch equation.
End of explanation
# average concentration of B+ is the same as the concentration of A-
av_c_Bplus = av_alpha*c_acid
err_c_Bplus = err_alpha*c_acid # error in the average concentration
full_pH_range = np.linspace(2, 12, 100)
ideal_c_Aminus = ideal_alpha(full_pH_range, pK)*c_acid
ideal_c_OH = np.power(10.0, -(pKw - full_pH_range))*ureg('mol/L')
ideal_c_H = np.power(10.0, -full_pH_range)*ureg('mol/L')
# ideal_c_M is calculated from electroneutrality
ideal_c_M = np.maximum((ideal_c_Aminus + ideal_c_OH - ideal_c_H).to(
'mol/L').magnitude, np.zeros_like(full_pH_range))*ureg('mol/L')
# plot the simulation results compared with the ideal results of the cations
plt.figure(figsize=(10, 6), dpi=80)
plt.errorbar(pHs,
av_c_Bplus.to('mol/L').magnitude,
err_c_Bplus.to('mol/L').magnitude,
marker='o', c="tab:blue", linestyle='none',
label=r"measured $c_{\mathrm{B^+}}$", zorder=2)
plt.plot(full_pH_range, ideal_c_H.to('mol/L').magnitude, c="tab:green",
label=r"ideal $c_{\mathrm{H^+}}$", zorder=0)
plt.plot(full_pH_range, ideal_c_M.to('mol/L').magnitude, c="tab:orange",
label=r"ideal $c_{\mathrm{M^+}}$", zorder=0)
plt.plot(full_pH_range, ideal_c_Aminus.to('mol/L').magnitude, c="tab:blue", ls=(0, (5, 5)),
label=r"ideal $c_{\mathrm{A^-}}$", zorder=1)
plt.yscale("log")
plt.ylim(1e-6,)
plt.xlabel('input pH', fontsize=16)
plt.ylabel(r'concentration $c$ $[\mathrm{mol/L}]$', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: The Neutralizing Ion $\mathrm{B^+}$
Up to now we did not discuss the chemical nature the neutralizer $\mathrm{B^+}$. The added salt is not relevant in this context, therefore we omit it from the discussion. The simplest case to consider is what happens if you add the acidic polymer to pure water ($\mathrm{pH} = 7$). Some of the acid groups dissociate and release $\mathrm{H^+}$ ions into the solution. The pH decreases to a value that depends on $\mathrm{p}K_{\mathrm{A}}$ and on the concentration of ionizable groups. Now, three ionic species are present in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, and $\mathrm{OH^-}$. Because the reaction generates only one $\mathrm{B^+}$ ion in the simulation box, we conclude that in this case the $\mathrm{B^+}$ ions correspond to $\mathrm{H^+}$ ions. The $\mathrm{H^+}$ ions neutralize both the $\mathrm{A^-}$ and the $\mathrm{OH^-}$ ions. At acidic pH there are only very few $\mathrm{OH^-}$ ions and nearly all $\mathrm{H^+}$ ions act as a neutralizer for the $\mathrm{A^-}$ ions. Therefore, the concentration of $\mathrm{B^+}$ is very close to the concentration of $\mathrm{H^+}$ in the real aqueous solution. Only very few $\mathrm{OH^-}$ ions, and the $\mathrm{H^+}$ ions needed to neutralize them, are missing in the simulation box, when compared to the real solution.
To achieve a more acidic pH (with the same pK and polymer concentration), we need to add an acid to the system. We can do that by adding a strong acid, such as HCl or $\mathrm{HNO}_3$. We will denote this acid by a generic name $\mathrm{HX}$ to emphasize that in general its anion can be different from the salt anion $\mathrm{Cl^{-}}$. Now, there are 4 ionic species in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, $\mathrm{OH^-}$, and $\mathrm{X^-}$ ions. By the same argument as before, we conclude that $\mathrm{B^+}$ ions correspond to $\mathrm{H^+}$ ions. The $\mathrm{H^+}$ ions neutralize the $\mathrm{A^-}$, $\mathrm{OH^-}$, and the $\mathrm{X^-}$ ions. Because the concentration of $\mathrm{X^-}$ is not negligible anymore, the concentration of $\mathrm{B^+}$ in the simulation box differs from the $\mathrm{H^+}$ concentration in the real solution. Now, many more ions are missing in the simulation box, as compared to the real solution: Few $\mathrm{OH^-}$ ions, many $\mathrm{X^-}$ ions, and all the $\mathrm{H^+}$ ions that neutralize them.
To achieve a neutral pH we need to add some base to the system to neutralize the polymer.
In the simplest case we add an alkali metal hydroxide, such as $\mathrm{NaOH}$ or $\mathrm{KOH}$, that we will generically denote as $\mathrm{MOH}$. Now, there are 4 ionic species in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, $\mathrm{OH^-}$, and $\mathrm{M^+}$. In such situation, we can not clearly attribute a specific chemical identity to the $\mathrm{B^+}$ ions. However, only very few $\mathrm{H^+}$ and $\mathrm{OH^-}$ ions are present in the system at $\mathrm{pH} = 7$. Therefore, we can make the approximation that at this pH, all $\mathrm{A^-}$ are neutralized by the $\mathrm{M^+}$ ions, and the $\mathrm{B^+}$ correspond to $\mathrm{M^+}$. Then, the concentration of $\mathrm{B^+}$ also corresponds to the concentration of $\mathrm{M^+}$ ions. Now, again only few ions are missing in the simulation box, as compared to the real solution: Few $\mathrm{OH^-}$ ions, and few $\mathrm{H^+}$ ions.
To achieve a basic pH we need to add even more base to the system to neutralize the polymer.
Again, there are 4 ionic species in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, $\mathrm{OH^-}$, and $\mathrm{M^+}$ and we can not clearly attribute a specific chemical identity to the $\mathrm{B^+}$ ions. Because only very few $\mathrm{H^+}$ ions should be present in the solution, we can make the approximation that at this pH, all $\mathrm{A^-}$ ions are neutralized by the $\mathrm{M^+}$ ions, and therefore $\mathrm{B^+}$ ions in the simulation correspond to $\mathrm{M^+}$ ions in the real solution. Because additional $\mathrm{M^+}$ ions in the real solution neutralize the $\mathrm{OH^-}$ ions, the concentration of $\mathrm{B^+}$ does not correspond to the concentration of $\mathrm{M^+}$ ions. Now, again many ions are missing in the simulation box, as compared to the real solution: Few $\mathrm{H^+}$ ions, many $\mathrm{OH^-}$ ions, and a comparable amount of the $\mathrm{M^+}$ ions.
To further illustrate this subject, we compare the concentration of the neutralizer ion $\mathrm{B^+}$ calculated in the simulation with the expected number of ions of each species. At a given pH and pK we can calculate the expected degree of ionization from the Henderson Hasselbalch equation. Then we apply the electroneutrality condition
$$c_\mathrm{A^-} + c_\mathrm{OH^-} + c_\mathrm{X^-} = c_\mathrm{H^+} + c_\mathrm{M^+}$$
where we use either $c_\mathrm{X^-}=0$ or $c_\mathrm{M^+}=0$ because we always only add extra acid or base, but never both. Adding both would be equivalent to adding extra salt $\mathrm{MX}$.
We obtain the concentrations of $\mathrm{OH^-}$ and $\mathrm{H^+}$ from the input pH value, and substitute them into the electroneutrality equation to obtain
$$\alpha c_\mathrm{acid} + 10^{-(\mathrm{p}K_\mathrm{w} - \mathrm{pH})} - 10^{-\mathrm{pH}} = c_\mathrm{M^+} - c_\mathrm{X^-}$$
Depending on whether the left-hand side of this equation is positive or negative we know whether we should add $\mathrm{M^+}$ or $\mathrm{X^-}$ ions.
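As a small numerical sketch of this decision rule (the alpha, acid concentration, pH and pKw values below are arbitrary assumptions, not simulation output):
# net charge (in mol/L) that must be balanced by added M+ (if positive) or X- (if negative)
alpha_example, c_acid_example, pH_example, pKw_example = 0.5, 1e-3, 7.0, 14.0
net = alpha_example * c_acid_example + 10**(-(pKw_example - pH_example)) - 10**(-pH_example)
c_M_example = max(net, 0.0)   # add base MOH if the left-hand side is positive
c_X_example = max(-net, 0.0)  # add acid HX if it is negative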
End of explanation
ideal_c_X = np.maximum(-(ideal_c_Aminus + ideal_c_OH - ideal_c_H).to(
'mol/L').magnitude, np.zeros_like(full_pH_range))*ureg('mol/L')
ideal_ionic_strength = 0.5 * \
(ideal_c_X + ideal_c_M + ideal_c_H + ideal_c_OH + 2*c_salt)
# in constant-pH simulation ideal_c_Aminus = ideal_c_Bplus
cpH_ionic_strength = 0.5*(ideal_c_Aminus + 2*c_salt)
cpH_ionic_strength_measured = 0.5*(av_c_Bplus + 2*c_salt)
cpH_error_ionic_strength_measured = 0.5*err_c_Bplus
plt.figure(figsize=(10, 6), dpi=80)
plt.errorbar(pHs,
cpH_ionic_strength_measured.to('mol/L').magnitude,
cpH_error_ionic_strength_measured.to('mol/L').magnitude,
c="tab:blue",
linestyle='none', marker='o',
label=r"measured", zorder=3)
plt.plot(full_pH_range,
cpH_ionic_strength.to('mol/L').magnitude,
c="tab:blue",
ls=(0, (5, 5)),
label=r"cpH", zorder=2)
plt.plot(full_pH_range,
ideal_ionic_strength.to('mol/L').magnitude,
c="tab:orange",
linestyle='-',
label=r"ideal", zorder=1)
plt.yscale("log")
plt.xlabel('input pH', fontsize=16)
plt.ylabel(r'Ionic Strength [$\mathrm{mol/L}$]', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: The plot shows that at intermediate pH the concentration of $\mathrm{B^+}$ ions is approximately equal to the concentration of $\mathrm{M^+}$ ions. Only at one specific $\mathrm{pH}$ is the concentration of $\mathrm{B^+}$ ions equal to the concentration of $\mathrm{H^+}$ ions. This is the pH one obtains when dissolving the weak acid $\mathrm{A}$ in pure water.
In an ideal system, the ions missing in the simulation have no effect on the ionization degree. In an interacting system, the presence of ions in the box affects the properties of other parts of the system. Therefore, in an interacting system this discrepancy is harmless only at intermediate pH. The effect of the small ions on the rest of the system can be estimated from the overall ionic strength.
$$ I = \frac{1}{2}\sum_i c_i z_i^2 $$
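A direct transcription of this formula (the ion concentrations and charges below are made-up example values, not results from the simulation):
example_ions = {"Na+": (0.01, +1), "Cl-": (0.01, -1), "A-": (0.001, -1), "H+": (0.001, +1)}  # (c in mol/L, z)
ionic_strength_example = 0.5 * sum(c * z**2 for c, z in example_ions.values())
print(ionic_strength_example)  # 0.011 mol/L for these example values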
End of explanation |
674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Pipeline
In this notebook, we show how to run the flowers classification workflow as a pipeline
Set up
Step1: Build the container
Step2: Convert JPEG files to TF Records
Step3: Train model
To do it locally on the cluster instead of on CAIP, we'll use gcloud local training
<pre>
gcloud ai-platform local train --package-path $PACKAGE_PATH
--module-name $MODULE_NAME --job-dir ${JOB_DIR}_local
-- --num_training_examples 100 --with_color_distort False --crop_ratio 0.6
</pre>
Step4: The pipeline
Step5: Compile and submit pipeline | Python Code:
%pip install --upgrade --user kfp
# CHANGE AS needed
REGION = 'us-central1' # Change as needed to a region where you have quota
KFPHOST = 'https://40e09ee3a33a422-dot-us-central1.pipelines.googleusercontent.com' # Note name of launched Kubeflow Pipelines cluster
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
print(PROJECT)
%env PROJECT = {PROJECT}
%env REGION = {REGION}
BUCKET = PROJECT + "-flowers-pipeline"
%env BUCKET = {BUCKET}
!gsutil mb -l {REGION} gs://{BUCKET}
Explanation: Machine Learning Pipeline
In this notebook, we show how to run the flowers classification workflow as a pipeline
Set up
End of explanation
%%capture --no-stderr
!../build_docker_image.sh
Explanation: Build the container
End of explanation
%%writefile components/create_dataset.yaml
name: create_dataset
description: Converts JPEG files to TensorFlow Records using Dataflow or Apache Beam
inputs:
- {name: runner, type: str, default: 'DirectRunner', description: 'DirectRunner or DataflowRunner'}
- {name: project_id, type: str, description: 'Project to bill Dataflow job to'}
- {name: region, type: str, description: 'Region to run Dataflow job in'}
- {name: input_csv, type: GCSPath, description: 'Path to CSV file'}
- {name: output_dir, type: GCSPath, description: 'Top-level directory for TF records'}
- {name: labels_dict, type: GCSPath, description: 'Dictionary file for class names'}
outputs:
- {name: tfrecords_topdir, type: GCSPath, description: 'Top-level directory for TF records'}
implementation:
container:
image: gcr.io/ai-analytics-solutions/practical-ml-vision-book:latest
command: [
"bash", "/src/practical-ml-vision-book/10_mlops/components/create_dataset.sh"
]
args: [
{inputValue: output_dir},
{outputPath: tfrecords_topdir},
"--all_data", {inputValue: input_csv},
"--labels_file", {inputValue: labels_dict},
"--project_id", {inputValue: project_id},
"--output_dir", {inputValue: output_dir},
"--runner", {inputValue: runner},
"--region", {inputValue: region},
]
%%writefile components/noop_create_dataset.yaml
name: noop_create_dataset
description: Converts JPEG files to TensorFlow Records using Dataflow or Apache Beam
inputs:
- {name: runner, type: str, default: 'DirectRunner', description: 'DirectRunner or DataflowRunner'}
- {name: project_id, type: str, description: 'Project to bill Dataflow job to'}
- {name: region, type: str, description: 'Region to run Dataflow job in'}
- {name: input_csv, type: GCSPath, description: 'Path to CSV file'}
- {name: output_dir, type: GCSPath, description: 'Top-level directory for TF records'}
- {name: labels_dict, type: GCSPath, description: 'Dictionary file for class names'}
outputs:
- {name: tfrecords_topdir, type: GCSPath, description: 'Top-level directory for TF records'}
implementation:
container:
image: gcr.io/ai-analytics-solutions/practical-ml-vision-book:latest
command: [
"bash", "/src/practical-ml-vision-book/10_mlops/components/noop_create_dataset.sh"
]
args: [
{inputValue: output_dir},
{outputPath: tfrecords_topdir}
]
Explanation: Convert JPEG files to TF Records
End of explanation
%%writefile components/train_model_kfp.yaml
name: train_model_kfp
description: Trains an ML model on KFP
inputs:
- {name: input_topdir, type: GCSPath, description: 'Top-level directory for TF records'}
- {name: region, type: str, description: 'Region (ignored)'}
- {name: job_dir, type: GCSPath, description: 'Top-level output directory'}
outputs:
- {name: trained_model, type: GCSPath, description: 'location of trained model'}
implementation:
container:
image: gcr.io/ai-analytics-solutions/practical-ml-vision-book:latest
command: [
"bash", "/src/practical-ml-vision-book/10_mlops/components/train_model_kfp.sh",
]
args: [
{inputValue: input_topdir},
{inputValue: region},
{inputValue: job_dir},
{outputPath: trained_model},
]
%%writefile components/train_model_caip.yaml
name: train_model_caip
description: Trains an ML model on CAIP
inputs:
- {name: input_topdir, type: GCSPath, description: 'Top-level directory for TF records'}
- {name: region, type: str, description: 'Region'}
- {name: job_dir, type: GCSPath, description: 'Top-level output directory'}
outputs:
- {name: trained_model, type: GCSPath, description: 'location of trained model'}
implementation:
container:
image: gcr.io/ai-analytics-solutions/practical-ml-vision-book:latest
command: [
"bash", "/src/practical-ml-vision-book/10_mlops/components/train_model_caip.sh",
]
args: [
{inputValue: input_topdir},
{inputValue: region},
{inputValue: job_dir},
{outputPath: trained_model},
]
Explanation: Train model
To do it locally on the cluster instead of on CAIP, we'll use gcloud local training
<pre>
gcloud ai-platform local train --package-path $PACKAGE_PATH
--module-name $MODULE_NAME --job-dir ${JOB_DIR}_local
-- --num_training_examples 100 --with_color_distort False --crop_ratio 0.6
</pre>
End of explanation
import kfp
import kfp.dsl as dsl
import json
import os
create_dataset_op = kfp.components.load_component_from_file(
#'components/noop_create_dataset.yaml'
'components/create_dataset.yaml'
)
train_model_op = kfp.components.load_component_from_file(
#'components/train_model_kfp.yaml'
'components/train_model_caip.yaml'
)
deploy_op = kfp.components.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/gcp/ml_engine/deploy/component.yaml')
@dsl.pipeline(
name='Flowers Transfer Learning Pipeline',
description='End-to-end pipeline'
)
def flowerstxf_pipeline(
project_id = PROJECT,
bucket = BUCKET,
region = REGION
):
# Step 1: Create dataset
create_dataset = create_dataset_op(
runner='DataflowRunner',
project_id=project_id,
region=region,
input_csv='gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/all_data.csv',
output_dir='gs://{}/data/flower_tfrecords'.format(bucket),
labels_dict='gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dict.txt'
)
create_dataset.execution_options.caching_strategy.max_cache_staleness = "P7D"
# Step 2: Train model
train_model = train_model_op(
input_topdir=create_dataset.outputs['tfrecords_topdir'],
region=region,
job_dir='gs://{}/trained_model'.format(bucket)
)
train_model.execution_options.caching_strategy.max_cache_staleness = "P0D"
# Step 3: Deploy trained model
deploy_model = deploy_op(
model_uri=train_model.outputs['trained_model'],
project_id=project_id,
model_id='flowers',
version_id='txf',
runtime_version='2.3',
python_version='3.7',
version={},
replace_existing_version='True',
set_default='True',
wait_interval='30')
Explanation: The pipeline
End of explanation
pipeline_func = flowerstxf_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
import kfp
client = kfp.Client(host=KFPHOST)
experiment = client.create_experiment('from_notebook')
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename,
{
'project_id': PROJECT,
'bucket': BUCKET,
'region': REGION
})
Explanation: Compile and submit pipeline
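Optionally, the notebook can block until the submitted run finishes; a sketch using the same KFP client (the 3600-second timeout is an arbitrary assumption):
result = client.wait_for_run_completion(run_result.id, timeout=3600)
print(result.run.status)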
End of explanation |
675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Machine Learning to Predict Breast Cancer
Matt Massie, UC Berkeley Computer Sciences
Machine learning (ML) is data driven. Machine learning algorithms are constructed to learn from and make predictions on data instead of having strictly static instructions.
Supervised (e.g. classification) vs Unsupervised (e.g. anomaly detection) learning
In this short talk, we'll explore the freely available Breast Cancer Wisconsin Data Set on the University of California, Irvine Machine Learning site.
Data set creators
Step1: Training and Test Data Sets
Each patient record is randomly assigned to a "training" data set (80%) or a "test" dataset (20%). Best practices have a cross-validation set (60% training, 20% cross-validation, 20% test).
Step2: Linear Support Vector Machine Classification
This image shows how a support vector machine searches for a "Maximum-Margin Hyperplane" in 2-dimensional space.
The breast cancer data set is 9-dimensional.
Image by User
Step3: Evaluating performance of the model | Python Code:
import numpy as np
import pandas as pd
def load_data(filename):
import csv
with open(filename, 'rb') as csvfile:
csvreader = csv.reader(csvfile, delimiter=',')
df = pd.DataFrame([[-1 if el == '?' else int(el) for el in r] for r in csvreader])
df.columns=["patient_id", "radius", "texture", "perimeter", "smoothness", "compactness", "concavity", "concave_points", "symmetry", "fractal_dimension", "malignant"]
df['malignant'] = df['malignant'].map({2: 0, 4: 1})
return df
Explanation: Using Machine Learning to Predict Breast Cancer
Matt Massie, UC Berkeley Computer Sciences
Machine learning (ML) is data driven. Machine learning algorithms are constructed to learn from and make predictions on data instead of having strictly static instructions.
Supervised (e.g. classification) vs Unsupervised (e.g. anomaly detection) learning
In this short talk, we'll explore the freely available Breast Cancer Wisconsin Data Set on the University of California, Irvine Machine Learning site.
Data set creators:
Dr. William H. Wolberg, General Surgery Dept. University of Wisconsin, Clinical Sciences Center
W. Nick Street, Computer Sciences Dept. University of Wisconsin
Olvi L. Mangasarian, Computer Sciences Dept. University of Wisconsin
End of explanation
training_set = load_data("data/breast-cancer.train")
test_set = load_data("data/breast-cancer.test")
print "Training set has %d patients" % (training_set.shape[0])
print "Test set has %d patients\n" % (test_set.shape[0])
print training_set.iloc[:, 0:6].head(3)
print
print training_set.iloc[:, 6:11].head(3)
training_set_malignant = training_set['malignant']
training_set_features = training_set.iloc[:, 1:10]
test_set_malignant = test_set['malignant']
test_set_features = test_set.iloc[:, 1:10]
Explanation: Training and Test Data Sets
Each patient record is randomly assigned to a "training" data set (80%) or a "test" dataset (20%). Best practices have a cross-validation set (60% training, 20% cross-validation, 20% test).
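For reference, an 80/20 split like this can also be produced programmatically; a sketch with scikit-learn (it assumes a single combined DataFrame named df loaded with load_data, and a scikit-learn version that provides the model_selection module):
from sklearn.model_selection import train_test_split
# hypothetical 80% / 20% split of a combined DataFrame `df`
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)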
End of explanation
from sklearn.preprocessing import MinMaxScaler
from sklearn import svm
# (1) Scale the 'training set'
scaler = MinMaxScaler()
scaled_training_set_features = scaler.fit_transform(training_set_features)
# (2) Create the model
model = svm.LinearSVC(C=0.1)
# (3) Fit the model using the 'training set'
model.fit(scaled_training_set_features, training_set_malignant)
# (4) Scale the 'test set' using the same scaler as the 'training set'
scaled_test_set_features = scaler.transform(test_set_features)
# (5) Use the model to predict malignancy on the 'test set'
test_set_malignant_predictions = model.predict(scaled_test_set_features)
print test_set_malignant_predictions
Explanation: Linear Support Vector Machine Classification
This image shows how a support vector machine searches for a "Maximum-Margin Hyperplane" in 2-dimensional space.
The breast cancer data set is 9-dimensional.
Image by User:ZackWeinberg, based on PNG version by User:Cyc [<a href="http://creativecommons.org/licenses/by-sa/3.0">CC BY-SA 3.0</a>], <a href="https://commons.wikimedia.org/wiki/File%3ASvm_separating_hyperplanes_(SVG).svg">via Wikimedia Commons</a>
Using scikit-learn to predict malignant tumors
End of explanation
from sklearn import metrics
accuracy = metrics.accuracy_score(test_set_malignant, \
test_set_malignant_predictions) * 100
((tn, fp), (fn, tp)) = metrics.confusion_matrix(test_set_malignant, \
test_set_malignant_predictions)
print "Accuracy: %.2f%%" % (accuracy)
print "True Positives: %d, True Negatives: %d" % (tp, tn)
print "False Positives: %d, False Negatives: %d" % (fp, fn)
Explanation: Evaluating performance of the model
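Accuracy alone can be misleading on imbalanced medical data, so per-class precision and recall are worth reporting as well (a small sketch reusing the metrics module imported above):
# precision, recall and F1-score for the benign (0) and malignant (1) classes
print(metrics.classification_report(test_set_malignant, test_set_malignant_predictions))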
End of explanation |
676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stiffness matrix for a perfectly square bilinear element.
Summary
This notebook describes the computational steps required in the computation of the displacement based finite element stiffness matrix for a perfectly square element of side $2h$.
Stiffness matrix
The displacement based finite element stiffness matrix can be written like
Step3: In the displacement based finite element method we assume that the displacemnts vector at any point $x_i$ over the element are expressed, via interpolation, in terms of the nodal displacements like
Step5: We also need to evaluate the elastic constitutive matrix
Step6: Computation of the stiffness matrix | Python Code:
%matplotlib notebook
from __future__ import division
import numpy as np
import sympy as sym
import matplotlib.pyplot as plt
from IPython.display import Image
Explanation: Stiffness matrix for a perfectly square bilinear element.
Summary
This notebook describes the computational steps required in the computation of the displacement based finite element stiffness matrix for a perfectly square element of side $2h$.
Stiffness matrix
The displacement based finite element stiffness matrix can be written like:
$${K^{QP}} = \int\limits_V {B_{ij}^Q{C_{ijkl}}B_{kl}^PdV} $$
where:
$$C = \frac{{E(1 - \nu )}}{{(1 + \nu )(1 - 2\nu )}}\left[ {\begin{array}{*{20}{c}}
1&{\frac{\nu }{{1 - \nu }}}&0\\
{\frac{\nu }{{1 - \nu }}}&1&0\\
0&0&{\frac{{1 - 2\nu }}{{2(1 - \nu )}}}
\end{array}} \right]$$
is the elastic tensor and $B_{ij}^Q$ is the contribution to the strain-displacement interpolator from the $Q$ degree of freedom.
The 4-noded perfectly square element of side $2h$ is shown in the figure below:
<center><img src="img/lado2h.png" alt="square element" style="width:250px"></center>
with each node having 2 degrees of freedom corresponding to the rectangular components of the displacement vector.
Finite element approach
End of explanation
def shape4(x , y , h):
    Shape functions for the bilinear element.
Parameters
----------
x , y: Space variables.
    h : Element halfwidth.
Returns
-------
N : Array
N=sym.zeros(4)
N = 1/(4*h**2)*sym.Matrix([(h + x)*(h + y),
(h - x)*(h + y),
(h - x)*(h - y),
(h + x)*(h - y)])
return N
def stdm4(x , y , h ):
    Strain-displacement interpolator for the bilinear element.
Parameters
----------
x , y: Space variables.
    h : Element halfwidth.
Returns
-------
B : Array
B = sym.zeros(3,8)
N = shape4(x , y , h)
dhdx=sym.zeros(2,4)
for i in range(4):
dhdx[0,i]=sym.diff(N[i],x)
dhdx[1,i]=sym.diff(N[i],y)
#
for i in range(4):
B[0, 2*i] = dhdx[0, i]
B[1, 2*i+1] = dhdx[1, i]
B[2, 2*i] = dhdx[1, i]
B[2, 2*i+1] = dhdx[0, i]
#
return B
Explanation: In the displacement based finite element method we assume that the displacement vector at any point $x_i$ over the element is expressed, via interpolation, in terms of the nodal displacements like:
$${u_i} = N_i^Q(r){{\hat u}^Q}$$
and where $N_i^K(r)$ is the shape function associated to the $k$-th degree of freedom.
For the computation of the stiffness matrix ${K^{QP}}$ we require the term $B_{ij}$ relating nodal displacements to strains. This interpolator can be written in terms of derivatives of the shape functions as follows:
$$B_{ij}^Q = \frac{1}{2}\left( {\frac{{\partial N_i^Q}}{{\partial {x_j}}} + \frac{{\partial N_j^Q}}{{\partial {x_i}}}} \right).$$
In the implementation this operator is computed by the subroutine strain-displacement matrix for a 4-noded element stdm4() as listed below:
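For instance, the interpolator can be inspected at the element midpoint (a small usage sketch; it assumes the sympy symbols x, y and h defined elsewhere in the notebook):
x, y, h = sym.symbols('x y h')
B_center = stdm4(x, y, h).subs({x: 0, y: 0})  # entries reduce to +-1/(4h) at the midpoint
sym.pprint(B_center)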
End of explanation
def umat(nu,E):
Plane stress constitutive tensor.
Parameters
----------
    nu : Poisson ratio.
E : Young modulus.
Returns
-------
C : Array. Constitutive matrix.
#
C=sym.zeros(3,3)
G=E/(1-nu**2)
mnu=(1-nu)/2.0
C[0,0]=G
C[0,1]=nu*G
C[1,0]=C[0,1]
C[1,1]=G
C[2,2]=G*mnu
#
return C
Explanation: We also need to evaluate the elastic constitutive matrix:
$$C = \frac{{E(1 - \nu )}}{{(1 + \nu )(1 - 2\nu )}}\left[ {\begin{array}{*{20}{c}}
1&{\frac{\nu }{{1 - \nu }}}&0\\
{\frac{\nu }{{1 - \nu }}}&1&0\\
0&0&{\frac{{1 - 2\nu }}{{2(1 - \nu )}}}
\end{array}} \right]$$
by the subroutine umat(). Note that, as implemented, umat() returns the plane-stress form of this matrix, $C = \frac{E}{1-\nu^{2}}\left[\begin{array}{ccc} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & (1-\nu)/2 \end{array}\right]$, consistent with its docstring.
End of explanation
C = sym.zeros(3,3)
B = sym.zeros(3,8)
K = sym.zeros(8,8)
x , y = sym.symbols('x y')
nu, E = sym.symbols('nu, E')
h = sym.symbols('h')
C = umat(nu,E)
B = stdm4(x , y , h)
K_int = B.T*C*B
nuu = 1.0/3.0
EE = 8.0/3.0
for i in range(8):
for j in range(8):
K[i,j] = sym.integrate(K_int[i,j], (x,-h,h), (y,-h,h))
kk=K.subs([(E, EE), (nu, nuu), (h, 1.00)])
print(sym.N(kk , 3))
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: Computation of the stiffness matrix
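As a quick sanity check of the numerical result, the element stiffness matrix should be symmetric and should produce zero nodal forces for a rigid-body translation (a sketch reusing the kk matrix computed above):
Kn = np.array(kk.tolist(), dtype=float)
print(np.allclose(Kn, Kn.T))                     # symmetry
ux_rigid = np.array([1., 0.] * 4)                # unit x-translation of all four nodes
print(np.allclose(np.dot(Kn, ux_rigid), 0.0))    # rigid-body mode produces no forces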
End of explanation |
677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AutoGraph
Step1: Fibonacci numbers
https
Step2: Generated code
Step3: Fizz Buzz
https
Step4: Generated code
Step5: Conway's Game of Life
https
Step6: Game of Life for AutoGraph
Note
Step7: Note
Step8: Generated code | Python Code:
!pip install -U -q tf-nightly-2.0-preview
import tensorflow as tf
tf = tf.compat.v2
tf.enable_v2_behavior()
Explanation: AutoGraph: examples of simple algorithms
This notebook shows how you can use AutoGraph to compile simple algorithms and run them in TensorFlow.
It requires the nightly build of TensorFlow, which is installed below.
End of explanation
@tf.function
def fib(n):
f1 = 0
f2 = 1
for i in tf.range(n):
tmp = f2
f2 = f2 + f1
f1 = tmp
tf.print(i, ': ', f2)
return f2
_ = fib(tf.constant(10))
Explanation: Fibonacci numbers
https://en.wikipedia.org/wiki/Fibonacci_number
End of explanation
print(tf.autograph.to_code(fib.python_function))
Explanation: Generated code
End of explanation
import tensorflow as tf
@tf.function(experimental_autograph_options=tf.autograph.experimental.Feature.EQUALITY_OPERATORS)
def fizzbuzz(i, n):
while i < n:
msg = ''
if i % 3 == 0:
msg += 'Fizz'
if i % 5 == 0:
msg += 'Buzz'
if msg == '':
msg = tf.as_string(i)
tf.print(msg)
i += 1
return i
_ = fizzbuzz(tf.constant(10), tf.constant(16))
Explanation: Fizz Buzz
https://en.wikipedia.org/wiki/Fizz_buzz
End of explanation
print(tf.autograph.to_code(fizzbuzz.python_function))
Explanation: Generated code
End of explanation
NUM_STEPS = 1
Explanation: Conway's Game of Life
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
Testing boilerplate
End of explanation
#@test {"skip": true}
NUM_STEPS = 75
Explanation: Game of Life for AutoGraph
Note: the code may take a while to run.
End of explanation
import time
import traceback
import sys
from matplotlib import pyplot as plt
from matplotlib import animation as anim
import numpy as np
from IPython import display
@tf.autograph.experimental.do_not_convert
def render(boards):
fig = plt.figure()
ims = []
for b in boards:
im = plt.imshow(b, interpolation='none')
im.axes.get_xaxis().set_visible(False)
im.axes.get_yaxis().set_visible(False)
ims.append([im])
try:
ani = anim.ArtistAnimation(
fig, ims, interval=100, blit=True, repeat_delay=5000)
plt.close()
display.display(display.HTML(ani.to_html5_video()))
except RuntimeError:
    print('Could not render animation:')
traceback.print_exc()
return 1
return 0
def gol_episode(board):
new_board = tf.TensorArray(tf.int32, 0, dynamic_size=True)
for i in tf.range(len(board)):
for j in tf.range(len(board[i])):
num_neighbors = tf.reduce_sum(
board[tf.maximum(i-1, 0):tf.minimum(i+2, len(board)),
tf.maximum(j-1, 0):tf.minimum(j+2, len(board[i]))]
) - board[i][j]
if num_neighbors == 2:
new_cell = board[i][j]
elif num_neighbors == 3:
new_cell = 1
else:
new_cell = 0
new_board.append(new_cell)
final_board = new_board.stack()
final_board = tf.reshape(final_board, board.shape)
return final_board
@tf.function(experimental_autograph_options=(
tf.autograph.experimental.Feature.EQUALITY_OPERATORS,
tf.autograph.experimental.Feature.BUILTIN_FUNCTIONS,
tf.autograph.experimental.Feature.LISTS,
))
def gol(initial_board):
board = initial_board
boards = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
i = 0
for i in tf.range(NUM_STEPS):
board = gol_episode(board)
boards.append(board)
boards = boards.stack()
tf.py_function(render, (boards,), (tf.int64,))
return i
# Gosper glider gun
# Adapted from http://www.cplusplus.com/forum/lounge/75168/
_ = 0
initial_board = tf.constant((
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,1,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_,_,_,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,1,_,_,_,1,_,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_ ),
( _,1,1,_,_,_,_,_,_,_,_,1,_,_,_,_,_,1,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,1,1,_,_,_,_,_,_,_,_,1,_,_,_,1,_,1,1,_,_,_,_,1,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,1,_,_,_,_,_,1,_,_,_,_,_,_,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,1,_,_,_,1,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
))
initial_board = tf.pad(initial_board, ((0, 10), (0, 5)))
_ = gol(initial_board)
Explanation: Note: This code uses a non-vectorized algorithm, which is quite slow. For 75 steps, it will take a few minutes to run.
End of explanation
print(tf.autograph.to_code(gol.python_function))
Explanation: Generated code
End of explanation |
678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: scikit-image advanced panorama tutorial
Enhanced from the original demo as featured in the scikit-image paper.
Multiple overlapping images of the same scene, combined into a single image, can yield amazing results. This tutorial will illustrate how to accomplish panorama stitching using scikit-image, from loading the images to cleverly stitching them together.
First things first
Import NumPy and matplotlib, then define a utility function to compare multiple images
Step2: Load data
The ImageCollection class provides an easy and efficient way to load and represent multiple images. Images in the ImageCollection are not only read from disk when accessed.
Load a series of images into an ImageCollection with a wildcard, as they share similar names.
Step3: Inspect these images using the convenience function compare() defined earlier
Step4: Credit
Step5: 1. Feature detection and matching
We need to estimate a projective transformation that relates these images together. The steps will be
Define one image as a target or destination image, which will remain anchored while the others are warped
Detect features in all three images
Match features from left and right images against the features in the center, anchored image.
In this three-shot series, the middle image pano1 is the logical anchor point.
We detect "Oriented FAST and rotated BRIEF" (ORB) features in both images.
Note
Step6: Match features from images 0 <-> 1 and 1 <-> 2.
Step7: Inspect these matched features side-by-side using the convenience function skimage.feature.plot_matches.
Step8: Most of these line up similarly, but it isn't perfect. There are a number of obvious outliers or false matches.
Step9: Similar to above, decent signal but numerous false matches.
2. Transform estimation
To filter out the false matches, we apply RANdom SAmple Consensus (RANSAC), a powerful method of rejecting outliers available in skimage.transform.ransac. The transformation is estimated using an iterative process based on randomly chosen subsets, finally selecting the model which corresponds best with the majority of matches.
We need to do this twice, once each for the transforms left -> center and right -> center.
Step10: The inliers returned from RANSAC select the best subset of matches. How do they look?
Step11: Most of the false matches are rejected!
3. Warping
Next, we produce the panorama itself. We must warp, or transform, two of the three images so they will properly align with the stationary image.
Extent of output image
The first step is to find the shape of the output image to contain all three transformed images. To do this we consider the extents of all warped images.
Step12: Apply estimated transforms
Warp the images with skimage.transform.warp according to the estimated models. A shift, or translation is needed to place as our middle image in the middle - it isn't truly stationary.
Values outside the input images are initially set to -1 to distinguish the "background", which is identified for later use.
Note
Step13: Warp left panel into place
Step14: Warp right panel into place
Step15: Inspect the warped images
Step16: 4. Combining images the easy (and bad) way
This method simply
sums the warped images
tracks how many images overlapped to create each point
normalizes the result.
Step17: Finally, view the results!
Step18: <div style="height
Step19: The surrounding flat gray is zero. A perfect overlap would show no structure!
Instead, the overlap region matches fairly well in the middle... but off to the sides where things start to look a little embossed, a simple average blurs the result. This caused the blurring in the previous, method (look again). Unfortunately, this is almost always the case for panoramas!
How can we fix this?
Let's attempt to find a vertical path through this difference image which stays as close to zero as possible. If we use that to build a mask, defining a transition between images, the result should appear seamless.
Seamless image stitching with Minimum-Cost Paths and skimage.graph
Among other things, skimage.graph allows you to
* start at any point on an array
* find the path to any other point in the array
* the path found minimizes the sum of values on the path.
The array is called a cost array, while the path found is a minimum-cost path or MCP.
To accomplish this we need
Starting and ending points for the path
A cost array (a modified difference image)
This method is so powerful that, with a carefully constructed cost array, the seed points are essentially irrelevant. It just works!
Define seed points
Step21: Construct cost array
This utility function exists to give a "cost break" for paths from the edge to the overlap region.
We will visually explore the results shortly. Examine the code later - for now, just use it.
Step22: Use this function to generate the cost array.
Step23: Allow the path to "slide" along top and bottom edges to the optimal horizontal position by setting top and bottom edges to zero cost.
Step24: Our cost array now looks like this
Step25: The tweak we made with generate_costs is subtle but important. Can you see it?
Find the minimum-cost path (MCP)
Use skimage.graph.route_through_array to find an optimal path through the cost array
Step26: Did it work?
Step27: That looks like a great seam to stitch these images together - the path looks very close to zero.
Irregularities
Due to the random element in the RANSAC transform estimation, everyone will have a slightly different blue path. Your path will look different from mine, and different from your neighbor's. That's expected! The awesome thing about MCP is that everyone just calculated the best possible path to stitch together their unique transforms!
Filling the mask
Turn that path into a mask, which will be 1 where we want the left image to show through and zero elsewhere. We need to fill the left side of the mask with ones over to our path.
Note
Step28: Ensure the path appears as expected
Step29: Label the various contiguous regions in the image using skimage.measure.label
Step30: Looks great!
Apply the same principles to images 1 and 2
Step31: Add an additional constraint this time, to prevent this path crossing the prior one!
Step32: Check the result
Step33: Your results may look slightly different.
Compute the minimal cost path
Step34: Verify a reasonable result
Step35: Initialize the mask by placing the path in a new array
Step36: Fill the right side this time, again using skimage.measure.label - the label of interest is 2
Step37: Final mask
The last mask for the middle image is one of exclusion - it will be displayed everywhere mask0 and mask2 are not.
Step39: Define a convenience function to place masks in alpha channels
Step40: Obtain final, alpha blended individual images and inspect them
Step41: What we have here is the world's most complicated and precisely-fitting jigsaw puzzle...
Plot all three together and view the results!
Step42: Fantastic! Without the black borders, you'd never know this was composed of separate images!
Bonus round
Step43: Apply the custom alpha channel masks
Step44: View the result!
Step45: Save the combined, color panorama locally as './pano-advanced-output.png'
Step46: <div style="height | Python Code:
import numpy as np
import matplotlib.pyplot as plt
def compare(*images, **kwargs):
Utility function to display images side by side.
Parameters
----------
    image0, image1, image2, ... : ndarray
Images to display.
labels : list
Labels for the different images.
f, axes = plt.subplots(1, len(images), **kwargs)
axes = np.array(axes, ndmin=1)
labels = kwargs.pop('labels', None)
if labels is None:
labels = [''] * len(images)
for n, (image, label) in enumerate(zip(images, labels)):
axes[n].imshow(image, interpolation='nearest', cmap='gray')
axes[n].set_title(label)
axes[n].axis('off')
f.tight_layout()
Explanation: scikit-image advanced panorama tutorial
Enhanced from the original demo as featured in the scikit-image paper.
Multiple overlapping images of the same scene, combined into a single image, can yield amazing results. This tutorial will illustrate how to accomplish panorama stitching using scikit-image, from loading the images to cleverly stitching them together.
First things first
Import NumPy and matplotlib, then define a utility function to compare multiple images
End of explanation
import skimage.io as io
pano_imgs = io.ImageCollection('../images/pano/JDW_03*')
Explanation: Load data
The ImageCollection class provides an easy and efficient way to load and represent multiple images. Images in the ImageCollection are only read from disk when accessed.
Load a series of images into an ImageCollection with a wildcard, as they share similar names.
End of explanation
# compare(...)
Explanation: Inspect these images using the convenience function compare() defined earlier
End of explanation
from skimage.color import rgb2gray
# Make grayscale versions of the three color images in pano_imgs,
# named pano0, pano1, and pano2
pano0, pano1, pano2 = [rgb2gray(im) for im in pano_imgs]
# View the results using compare()
compare(pano0, pano1, pano2, figsize=(12, 10))
Explanation: Credit: Images of Private Arch and the trail to Delicate Arch in Arches National Park, USA, taken by Joshua D. Warner.<br>
License: CC-BY 4.0
0. Pre-processing
This stage usually involves one or more of the following:
* Resizing, often downscaling with fixed aspect ratio
* Conversion to grayscale, as some feature descriptors are not defined for color images
* Cropping to region(s) of interest
For convenience our example data is already resized smaller, and we won't bother cropping. However, they are presently in color so conversion to grayscale with skimage.color.rgb2gray is appropriate.
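If your own source images were full resolution, a downscaling step could look like this (a sketch, not needed for the pre-resized tutorial data; the 0.25 factor is arbitrary and the exact keyword arguments depend on your scikit-image version):
from skimage.transform import rescale
# quarter-resolution grayscale copy; a single scale factor preserves the aspect ratio
small0 = rescale(rgb2gray(pano_imgs[0]), 0.25, anti_aliasing=True)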
End of explanation
from skimage.feature import ORB
# Initialize ORB
# This number of keypoints is large enough for robust results,
# but low enough to run within a few seconds.
orb = ORB(n_keypoints=800, fast_threshold=0.05)
# Detect keypoints in pano0
orb.detect_and_extract(pano0)
keypoints0 = orb.keypoints
descriptors0 = orb.descriptors
# Detect keypoints in pano1 and pano2
orb.detect_and_extract(pano1)
keypoints1 = orb.keypoints
descriptors1 = orb.descriptors

orb.detect_and_extract(pano2)
keypoints2 = orb.keypoints
descriptors2 = orb.descriptors
Explanation: 1. Feature detection and matching
We need to estimate a projective transformation that relates these images together. The steps will be
Define one image as a target or destination image, which will remain anchored while the others are warped
Detect features in all three images
Match features from left and right images against the features in the center, anchored image.
In this three-shot series, the middle image pano1 is the logical anchor point.
We detect "Oriented FAST and rotated BRIEF" (ORB) features in both images.
Note: For efficiency, in this tutorial we're finding 800 keypoints. The results are good but small variations are expected. If you need a more robust estimate in practice, run multiple times and pick the best result or generate additional keypoints.
End of explanation
from skimage.feature import match_descriptors
# Match descriptors between left/right images and the center
matches01 = match_descriptors(descriptors0, descriptors1, cross_check=True)
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
Explanation: Match features from images 0 <-> 1 and 1 <-> 2.
End of explanation
from skimage.feature import plot_matches
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
# Best match subset for pano0 -> pano1
plot_matches(ax, pano0, pano1, keypoints0, keypoints1, matches01)
ax.axis('off');
Explanation: Inspect these matched features side-by-side using the convenience function skimage.feature.plot_matches.
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
# Best match subset for pano2 -> pano1
plot_matches(ax, pano1, pano2, keypoints1, keypoints2, matches12)
ax.axis('off');
Explanation: Most of these line up similarly, but it isn't perfect. There are a number of obvious outliers or false matches.
End of explanation
from skimage.transform import ProjectiveTransform
from skimage.measure import ransac
# Select keypoints from
# * source (image to be registered): pano0
# * target (reference image): pano1, our middle frame registration target
src = keypoints0[matches01[:, 0]][:, ::-1]
dst = keypoints1[matches01[:, 1]][:, ::-1]
model_robust01, inliers01 = ransac((src, dst), ProjectiveTransform,
min_samples=4, residual_threshold=1, max_trials=300)
# Select keypoints from
# * source (image to be registered): pano2
# * target (reference image): pano1, our middle frame registration target
src = keypoints2[matches12[:, 1]][:, ::-1]
dst = keypoints1[matches12[:, 0]][:, ::-1]
model_robust12, inliers12 = ransac((src, dst), ProjectiveTransform,
min_samples=4, residual_threshold=1, max_trials=300)
Explanation: Similar to above, decent signal but numerous false matches.
2. Transform estimation
To filter out the false matches, we apply RANdom SAmple Consensus (RANSAC), a powerful method of rejecting outliers available in skimage.transform.ransac. The transformation is estimated using an iterative process based on randomly chosen subsets, finally selecting the model which corresponds best with the majority of matches.
We need to do this twice, once each for the transforms left -> center and right -> center.
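A quick way to see how aggressively RANSAC prunes the matches is to count the inliers it keeps (a small sketch using the boolean inlier arrays returned by ransac):
print(inliers01.sum(), 'of', len(matches01), 'matches kept for pano0 -> pano1')
print(inliers12.sum(), 'of', len(matches12), 'matches kept for pano2 -> pano1')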
End of explanation
# Use plot_matches as before, but select only good matches with fancy indexing
# e.g., matches01[inliers01]
# Use plot_matches as before, but select only good matches with fancy indexing
# e.g., matches12[inliers12]
Explanation: The inliers returned from RANSAC select the best subset of matches. How do they look?
End of explanation
from skimage.transform import SimilarityTransform
# Shape of middle image, our registration target
r, c = pano1.shape[:2]
# Note that transformations take coordinates in (x, y) format,
# not (row, column), in order to be consistent with most literature
corners = np.array([[0, 0],
[0, r],
[c, 0],
[c, r]])
# Warp the image corners to their new positions
warped_corners01 = model_robust01(corners)
warped_corners12 = model_robust12(corners)
# Find the extents of both the reference image and the warped
# target image
all_corners = np.vstack((warped_corners01, warped_corners12, corners))
# The overall output shape will be max - min
corner_min = np.min(all_corners, axis=0)
corner_max = np.max(all_corners, axis=0)
output_shape = (corner_max - corner_min)
# Ensure integer shape with np.ceil and dtype conversion
output_shape = np.ceil(output_shape[::-1]).astype(int)
Explanation: Most of the false matches are rejected!
3. Warping
Next, we produce the panorama itself. We must warp, or transform, two of the three images so they will properly align with the stationary image.
Extent of output image
The first step is to find the shape of the output image to contain all three transformed images. To do this we consider the extents of all warped images.
End of explanation
from skimage.transform import warp
# This in-plane offset is the only necessary transformation for the middle image
offset1 = SimilarityTransform(translation= -corner_min)
# Translate pano1 into place
pano1_warped = warp(pano1, offset1.inverse, order=3,
output_shape=output_shape, cval=-1)
# Acquire the image mask for later use
pano1_mask = (pano1_warped != -1) # Mask == 1 inside image
pano1_warped[~pano1_mask] = 0 # Return background values to 0
Explanation: Apply estimated transforms
Warp the images with skimage.transform.warp according to the estimated models. A shift, or translation, is needed to place our middle image in the middle - it isn't truly stationary.
Values outside the input images are initially set to -1 to distinguish the "background", which is identified for later use.
Note: warp takes the inverse mapping as an input.
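To make the forward/inverse distinction concrete, a tiny standalone sketch (independent of the panorama data):
tf_shift = SimilarityTransform(translation=(10, 20))
print(tf_shift(np.array([[0., 0.]])))            # forward mapping: (0, 0) lands at (10, 20)
print(tf_shift.inverse(np.array([[10., 20.]])))  # inverse mapping: recovers (0, 0)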
End of explanation
# Warp pano0 to pano1
transform01 = (model_robust01 + offset1).inverse
pano0_warped = warp(pano0, transform01, order=3,
output_shape=output_shape, cval=-1)
pano0_mask = (pano0_warped != -1) # Mask == 1 inside image
pano0_warped[~pano0_mask] = 0 # Return background values to 0
Explanation: Warp left panel into place
End of explanation
# Warp pano2 to pano1
transform12 = (model_robust12 + offset1).inverse
pano2_warped = warp(pano2, transform12, order=3,
output_shape=output_shape, cval=-1)
pano2_mask = (pano2_warped != -1) # Mask == 1 inside image
pano2_warped[~pano2_mask] = 0 # Return background values to 0
Explanation: Warp right panel into place
End of explanation
compare(pano0_warped, pano1_warped, pano2_warped, figsize=(12, 10));
Explanation: Inspect the warped images:
End of explanation
# Add the three warped images together. This could create dtype overflows!
# We know they are floating point images after warping, so it's OK.
merged = (pano0_warped + pano1_warped + pano2_warped)  # Sum warped images
# Track the overlap by adding the masks together
overlap = (pano0_mask * 1.0 +  # multiplying by 1.0 converts the boolean masks to floats
           pano1_mask +
           pano2_mask)          # Sum masks
# Normalize through division by `overlap` - but ensure the minimum is 1
normalized = merged / np.maximum(overlap, 1)  # Divisor here
Explanation: 4. Combining images the easy (and bad) way
This method simply
sums the warped images
tracks how many images overlapped to create each point
normalizes the result.
End of explanation
fig, ax = plt.subplots(figsize=(12, 12))
ax.imshow(normalized, cmap='gray')
fig.tight_layout()
ax.axis('off');
Explanation: Finally, view the results!
End of explanation
fig, ax = plt.subplots(figsize=(12, 12))
# Generate difference image and inspect it
difference_image = pano0_warped - pano1_warped
ax.imshow(difference_image, cmap='gray')
ax.axis('off');
Explanation: <div style="height: 400px;"></div>
What happened?! Why are there nasty dark lines at boundaries, and why does the middle look so blurry?
The lines are artifacts (boundary effect) from the warping method. When the image is warped with interpolation, edge pixels containing part image and part background combine these values. We would have bright lines if we'd chosen cval=2 in the warp calls (try it!), but regardless of choice there will always be discontinuities.
...Unless you use order=0 in warp, which is nearest neighbor. Then edges are perfect (try it!). But who wants to be limited to an inferior interpolation method?
Even then, it's blurry! Is there a better way?
5. Stitching images along a minimum-cost path
Let's step back a moment and consider: Is it even reasonable to blend pixels?
Take a look at a difference image, which is just one image subtracted from the other.
End of explanation
ymax = output_shape[1] - 1
xmax = output_shape[0] - 1
# Start anywhere along the top and bottom, left of center.
mask_pts01 = [[0, ymax // 3],
[xmax, ymax // 3]]
# Start anywhere along the top and bottom, right of center.
mask_pts12 = [[0, 2*ymax // 3],
[xmax, 2*ymax // 3]]
Explanation: The surrounding flat gray is zero. A perfect overlap would show no structure!
Instead, the overlap region matches fairly well in the middle... but off to the sides where things start to look a little embossed, a simple average blurs the result. This caused the blurring in the previous method (look again). Unfortunately, this is almost always the case for panoramas!
How can we fix this?
Let's attempt to find a vertical path through this difference image which stays as close to zero as possible. If we use that to build a mask, defining a transition between images, the result should appear seamless.
Seamless image stitching with Minimum-Cost Paths and skimage.graph
Among other things, skimage.graph allows you to
* start at any point on an array
* find the path to any other point in the array
* the path found minimizes the sum of values on the path.
The array is called a cost array, while the path found is a minimum-cost path or MCP.
To accomplish this we need
Starting and ending points for the path
A cost array (a modified difference image)
This method is so powerful that, with a carefully constructed cost array, the seed points are essentially irrelevant. It just works!
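To make the idea concrete before applying it to the panorama, here is a toy example on a tiny hand-made cost array (the values are arbitrary; the cheap column is the middle one):
from skimage.graph import route_through_array
toy_costs = np.array([[9., 1., 9.],
                      [9., 1., 9.],
                      [9., 1., 9.]])
toy_path, toy_cost = route_through_array(toy_costs, (0, 1), (2, 1))
print(toy_path)  # visits (0, 1), (1, 1), (2, 1): the path stays in the cheap middle column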
Define seed points
End of explanation
from skimage.measure import label
def generate_costs(diff_image, mask, vertical=True, gradient_cutoff=2.):
Ensures equal-cost paths from edges to region of interest.
Parameters
----------
diff_image : (M, N) ndarray of floats
Difference of two overlapping images.
mask : (M, N) ndarray of bools
Mask representing the region of interest in ``diff_image``.
vertical : bool
Control operation orientation.
gradient_cutoff : float
Controls how far out of parallel lines can be to edges before
correction is terminated. The default (2.) is good for most cases.
Returns
-------
costs_arr : (M, N) ndarray of floats
Adjusted costs array, ready for use.
if vertical is not True:
        return generate_costs(diff_image.T, mask.T, vertical=True,
                              gradient_cutoff=gradient_cutoff).T
# Start with a high-cost array of 1's
costs_arr = np.ones_like(diff_image)
# Obtain extent of overlap
row, col = mask.nonzero()
cmin = col.min()
cmax = col.max()
# Label discrete regions
cslice = slice(cmin, cmax + 1)
labels = label(mask[:, cslice])
# Find distance from edge to region
upper = (labels == 0).sum(axis=0)
lower = (labels == 2).sum(axis=0)
# Reject areas of high change
ugood = np.abs(np.gradient(upper)) < gradient_cutoff
lgood = np.abs(np.gradient(lower)) < gradient_cutoff
# Give areas slightly farther from edge a cost break
costs_upper = np.ones_like(upper, dtype=np.float64)
costs_lower = np.ones_like(lower, dtype=np.float64)
costs_upper[ugood] = upper.min() / np.maximum(upper[ugood], 1)
costs_lower[lgood] = lower.min() / np.maximum(lower[lgood], 1)
# Expand from 1d back to 2d
vdist = mask.shape[0]
costs_upper = costs_upper[np.newaxis, :].repeat(vdist, axis=0)
costs_lower = costs_lower[np.newaxis, :].repeat(vdist, axis=0)
# Place these in output array
costs_arr[:, cslice] = costs_upper * (labels == 0)
costs_arr[:, cslice] += costs_lower * (labels == 2)
# Finally, place the difference image
costs_arr[mask] = diff_image[mask]
return costs_arr
Explanation: Construct cost array
This utility function exists to give a "cost break" for paths from the edge to the overlap region.
We will visually explore the results shortly. Examine the code later - for now, just use it.
End of explanation
# Start with the absolute value of the difference image.
# np.abs necessary because we don't want negative costs!
costs01 = generate_costs(np.abs(pano0_warped - pano1_warped),
pano0_mask & pano1_mask)
Explanation: Use this function to generate the cost array.
End of explanation
# Set top and bottom edges to zero in `costs01`
# Remember (row, col) indexing!
costs01[0, :] = 0
costs01[-1, :] = 0
Explanation: Allow the path to "slide" along top and bottom edges to the optimal horizontal position by setting top and bottom edges to zero cost.
End of explanation
fig, ax = plt.subplots(figsize=(15, 12))
ax.imshow(costs01, cmap='gray', interpolation='none')
ax.axis('off');
Explanation: Our cost array now looks like this
End of explanation
from skimage.graph import route_through_array
# Arguments are:
# cost array
# start pt
# end pt
# can it traverse diagonally
pts, _ = route_through_array(costs01, mask_pts01[0], mask_pts01[1], fully_connected=True)
# Convert list of lists to 2d coordinate array for easier indexing
pts = np.array(pts)
Explanation: The tweak we made with generate_costs is subtle but important. Can you see it?
Find the minimum-cost path (MCP)
Use skimage.graph.route_through_array to find an optimal path through the cost array
End of explanation
fig, ax = plt.subplots(figsize=(12, 12))
# Plot the difference image
ax.imshow(pano0_warped - pano1_warped, cmap='gray')
# Overlay the minimum-cost path
ax.plot(pts[:, 1], pts[:, 0])
plt.tight_layout()
ax.axis('off');
Explanation: Did it work?
End of explanation
# Start with an array of zeros and place the path
mask0 = np.zeros_like(pano0_warped, dtype=np.uint8)
mask0[pts[:, 0], pts[:, 1]] = 1
Explanation: That looks like a great seam to stitch these images together - the path looks very close to zero.
Irregularities
Due to the random element in the RANSAC transform estimation, everyone will have a slightly different blue path. Your path will look different from mine, and different from your neighbor's. That's expected! The awesome thing about MCP is that everyone just calculated the best possible path to stitch together their unique transforms!
Filling the mask
Turn that path into a mask, which will be 1 where we want the left image to show through and zero elsewhere. We need to fill the left side of the mask with ones over to our path.
Note: This is the inverse of NumPy masked array conventions (numpy.ma), which specify a negative mask (mask == bad/missing) rather than a positive mask as used here (mask == good/selected).
Place the path into a new, empty array.
End of explanation
fig, ax = plt.subplots(figsize=(12, 12))
# View the path in black and white
ax.imshow(mask0, cmap='gray')
ax.axis('off');
Explanation: Ensure the path appears as expected
End of explanation
from skimage.measure import label
# Labeling starts with zero at point (0, 0)
mask0[label(mask0, connectivity=1) == 0] = 1
# The result
plt.imshow(mask0, cmap='gray');
Explanation: Label the various contiguous regions in the image using skimage.measure.label
End of explanation
# Start with the absolute value of the difference image.
# np.abs is necessary because we don't want negative costs!
costs12 = generate_costs(np.abs(pano1_warped - pano2_warped),
pano1_mask & pano2_mask)
# Allow the path to "slide" along top and bottom edges to the optimal
# horizontal position by setting top and bottom edges to zero cost
costs12[0, :] = 0
costs12[-1, :] = 0
Explanation: Looks great!
Apply the same principles to images 1 and 2: first, build the cost array
End of explanation
costs12[mask0 > 0] = 1
Explanation: Add an additional constraint this time, to prevent this path crossing the prior one!
End of explanation
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(costs12, cmap='gray');
Explanation: Check the result
End of explanation
# Arguments are:
# cost array
# start pt
# end pt
# can it traverse diagonally
pts, _ = route_through_array(costs12, mask_pts12[0], mask_pts12[1], fully_connected=True)
# Convert list of lists to 2d coordinate array for easier indexing
pts = np.array(pts)
Explanation: Your results may look slightly different.
Compute the minimal cost path
End of explanation
fig, ax = plt.subplots(figsize=(12, 12))
# Plot the difference image
ax.imshow(pano1_warped - pano2_warped, cmap='gray')
# Overlay the minimum-cost path
ax.plot(pts[:, 1], pts[:, 0]);
ax.axis('off');
Explanation: Verify a reasonable result
End of explanation
mask2 = np.zeros_like(pano0_warped, dtype=np.uint8)
mask2[pts[:, 0], pts[:, 1]] = 1
Explanation: Initialize the mask by placing the path in a new array
End of explanation
mask2[label(mask2, connectivity=1) == 2] = 1
# The result
plt.imshow(mask2, cmap='gray');
Explanation: Fill the right side this time, again using skimage.measure.label - the label of interest is 2
End of explanation
mask1 = ~(mask0 | mask2).astype(bool)
Explanation: Final mask
The last mask for the middle image is one of exclusion - it will be displayed everywhere mask0 and mask2 are not.
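As an optional sanity check (an addition, not in the original tutorial), every pixel of the canvas should now belong to exactly one of the three masks:
```python
# Each pixel should be covered exactly once; ideally this prints [1]
coverage = mask0.astype(int) + mask1.astype(int) + mask2.astype(int)
print(np.unique(coverage))
```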
End of explanation
def add_alpha(img, mask=None):
Adds a masked alpha channel to an image.
Parameters
----------
img : (M, N[, 3]) ndarray
Image data, should be rank-2 or rank-3 with RGB channels
mask : (M, N[, 3]) ndarray, optional
Mask to be applied. If None, the alpha channel is added
with full opacity assumed (1) at all locations.
from skimage.color import gray2rgb
if mask is None:
mask = np.ones_like(img)
if img.ndim == 2:
img = gray2rgb(img)
return np.dstack((img, mask))
Explanation: Define a convenience function to place masks in alpha channels
End of explanation
pano0_final = add_alpha(pano0_warped, mask0)
pano1_final = add_alpha(pano1_warped, mask1)
pano2_final = add_alpha(pano2_warped, mask2)
compare(pano0_final, pano1_final, pano2_final, figsize=(15, 15))
Explanation: Obtain final, alpha blended individual images and inspect them
End of explanation
fig, ax = plt.subplots(figsize=(12, 12))
# This is a perfect combination, but matplotlib's interpolation
# makes it appear to have gaps. So we turn it off.
ax.imshow(pano0_final, interpolation='none')
ax.imshow(pano1_final, interpolation='none')
ax.imshow(pano2_final, interpolation='none')
fig.tight_layout()
ax.axis('off');
Explanation: What we have here is the world's most complicated and precisely-fitting jigsaw puzzle...
Plot all three together and view the results!
End of explanation
# Identical transforms as before, except
# * Operating on original color images
# * filling with cval=0 as we know the masks
pano0_color = warp(pano_imgs[0], (model_robust01 + offset1).inverse, order=3,
output_shape=output_shape, cval=0)
pano1_color = warp(pano_imgs[1], offset1.inverse, order=3,
output_shape=output_shape, cval=0)
pano2_color = warp(pano_imgs[2], (model_robust12 + offset1).inverse, order=3,
output_shape=output_shape, cval=0)
Explanation: Fantastic! Without the black borders, you'd never know this was composed of separate images!
Bonus round: now, in color!
We converted to grayscale for ORB feature detection, back in the initial preprocessing steps. Since we stored our transforms and masks, adding color is straightforward!
Transform the colored images
End of explanation
pano0_final = add_alpha(pano0_color, mask0)
pano1_final = add_alpha(pano1_color, mask1)
pano2_final = add_alpha(pano2_color, mask2)
Explanation: Apply the custom alpha channel masks
End of explanation
fig, ax = plt.subplots(figsize=(12, 12))
# Turn off matplotlib's interpolation
ax.imshow(pano0_final, interpolation='none')
ax.imshow(pano1_final, interpolation='none')
ax.imshow(pano2_final, interpolation='none')
fig.tight_layout()
ax.axis('off');
Explanation: View the result!
End of explanation
from skimage.color import gray2rgb
# Start with empty image
pano_combined = np.zeros_like(pano0_color)
# Place the masked portion of each image into the array
# masks are 2d, they need to be (M, N, 3) to match the color images
pano_combined += pano0_color * gray2rgb(mask0)
pano_combined += pano1_color * gray2rgb(mask1)
pano_combined += pano2_color * gray2rgb(mask2)
# Save the output - precision loss warning is expected
# moving from floating point -> uint8
io.imsave('./pano-advanced-output.png', pano_combined)
Explanation: Save the combined, color panorama locally as './pano-advanced-output.png'
End of explanation
%reload_ext load_style
%load_style ../themes/tutorial.css
Explanation: <div style="height: 400px;"></div>
<div style="height: 400px;"></div>
Once more, from the top
I hear what you're saying. "But Josh, those were too easy! The panoramas had too much overlap! Does this still work in the real world?"
Go back to the top. Under "Load Data" replace the string 'data/JDW_03*' with 'data/JDW_9*', and re-run all of the cells in order.
<div style="height: 400px;"></div>
End of explanation |
679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: In this profile, diffusivity drops to 0 at $y=0.5$ and at $y=0$ and $y=1$. In the absence of advection, particles starting out in one half of the domain should remain confined to that half as they are unable to cross the points where the diffusivity drops to 0. The line $y=0.5$ should therefore provide an impermeable barrier.
Now we can put this idealized profile into a flat fieldset
Step2: We release 100 particles at ($x=0$, $y=0.75$).
Step3: Now we will simulate the advection and diffusion of the particles using the AdvectionDiffusionM1 kernel. We run the simulation for 0.3 seconds, with a numerical timestep $\Delta t = 0.001$s. We also write away particle locations at each timestep for plotting. Note that this will hinder a runtime comparison between kernels, since it will cause most time to be spent on I/O.
Step4: We can plot the individual coordinates of the particle trajectories against time ($x$ against $t$ and $y$ against $t$) to investigate how diffusion works along each axis.
Step5: We see that along the meridional direction, particles remain confined to the ‘upper’ part of the domain, not crossing the impermeable barrier where the diffusivity drops to zero. In the zonal direction, particles follow random walks, since all terms involving gradients of the diffusivity are zero.
Now let's execute the simulation with the AdvectionDiffusionEM kernel instead.
Step6: The Wiener increments for both simulations are equal, as they are fixed through a random seed. As we can see, the Euler-Maruyama scheme performs worse than the Milstein scheme, letting particles cross the impermeable barrier at $y=0.5$. In contrast, along the zonal direction, particles follow the same random walk as in the Milstein scheme, which is expected since the extra terms in the Milstein scheme are zero in this case.
Example
Step7: Reading velocity fields from netcdf files
Step8: Adding parameters (cell_areas – areas of computational cells, and Cs – Smagorinsky constant) to fieldset that are needed for the smagdiff kernel
Step9: In the example, particles are released at one location periodically (every 12 hours)
Step10: If particles leave model area, they are deleted
Step11: Modeling the particles moving during 5 days using advection (AdvectionRK4) and diffusion (smagdiff) kernels.
Step12: Stop new particles appearing and continue the particleset execution for another 25 days
Step13: Save the output file and visualise the trajectories | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from datetime import timedelta
from parcels import ParcelsRandom
from parcels import (FieldSet, Field, ParticleSet, JITParticle, AdvectionRK4, ErrorCode,
DiffusionUniformKh, AdvectionDiffusionM1, AdvectionDiffusionEM)
from parcels import plotTrajectoriesFile
K_bar = 0.5 # Average diffusivity
alpha = 1. # Profile steepness
L = 1. # Basin scale
Ny = 103 # Number of grid cells in y_direction (101 +2, one level above and one below, where fields are set to zero)
dy = 1.03/Ny # Spatial resolution
y = np.linspace(-0.01, 1.01, 103) # y-coordinates for grid
y_K = np.linspace(0., 1., 101) # y-coordinates used for setting diffusivity
beta = np.zeros(y_K.shape) # Placeholder for fraction term in K(y) formula
for yi in range(len(y_K)):
if y_K[yi] < L/2:
beta[yi] = y_K[yi]*np.power(L - 2*y_K[yi], 1/alpha)
elif y_K[yi] >= L/2:
beta[yi] = (L - y_K[yi])*np.power(2*y_K[yi] - L, 1/alpha)
Kh_meridional = 0.1*(2*(1+alpha)*(1+2*alpha))/(alpha**2*np.power(L, 1+1/alpha))*beta
Kh_meridional = np.concatenate((np.array([0]), Kh_meridional, np.array([0])))
plt.plot(Kh_meridional, y)
plt.ylabel("y")
plt.xlabel(r"$K_{meridional}$")
plt.show()
Explanation: Tutorial: advection-diffusion kernels in Parcels
In Eulerian ocean models, sub-grid scale dispersion of tracers such as heat, salt, or nutrients is often parameterized as a diffusive process. In Lagrangian particle simulations, sub-grid scale effects can be parameterized as a stochastic process, randomly displacing a particle position in proportion to the local eddy diffusivity (Van Sebille et al. 2018). Parameterizing sub-grid scale dispersion may be especially important when coarse velocity fields are used that do not resolve mesoscale eddies (Shah et al., 2017). This tutorial explains how to use a sub-grid scale parameterization in Parcels that is consistent with the advection-diffusion equation used in Eulerian models.
Stochastic differential equations (SDE) consistent with advection-diffusion
The time-evolution of a stochastic process is described by a stochastic differential equation. The time-evolution of the conditional probability density of a stochastic process is described by a Fokker-Planck equation (FPE). The advection-diffusion equation, describing the evolution of a tracer, can be written as a Fokker-Planck equation. Therefore, we can formulate a stochastic differential equation for a particle in the Lagrangian frame undergoing advection with stochastic noise proportional to the local diffusivity in a way that is consistent with advection-diffusion in the Eulerian frame. For details, see Shah et al., 2011 and van Sebille et al., 2018.
The stochastic differential equation for a particle trajectory including diffusion is
$$
\begin{aligned}
d\mathbf{X}(t) &\overset{\text{Itô}}{=} (\mathbf{u} + \nabla \cdot \mathbf{K}) dt + \mathbf{V}(t, \mathbf{X})\cdot d\mathbf{W}(t), \
\mathbf{X}(t_0) &= \mathbf{x}_0,
\end{aligned}
$$
where $\mathbf{X}$ is the particle position vector ($\mathbf{x}_0$ being the initial position vector), $\mathbf{u}$ the velocity vector, $\mathbf{K} = \frac{1}{2} \mathbf{V} \cdot \mathbf{V}^T$ the diffusivity tensor, and $d\mathbf{W}(t)$ a Wiener increment (normally distributed with zero mean and variance $dt$). Particle distributions obtained by solving the above equation are therefore consistent with Eulerian concentrations found by solving the advection-diffusion equation.
In three-dimensional ocean models diffusion operates along slopes of neutral buoyancy. To account for these slopes, the 3D diffusivity tensor $\mathbf{K}$ (and therefore $\mathbf{V}$) contains off-diagonal components. Three-dimensional advection-diffusion is not yet implemented in Parcels, but it is currently under development. Here we instead focus on the simpler case of diffusion in a horizontal plane, where diffusivity is specified only in the zonal and meridional direction, i.e.
$$\mathbf{K}(x,y)=\begin{bmatrix}
K_x(x,y) & 0\
0 & K_y(x,y)
\end{bmatrix}.$$
The above stochastic differential equation then becomes
$$
\begin{align}
dX(t) &= a_x dt + b_x dW_x(t), \quad &X(t_0) = x_0,\
dY(t) &= a_y dt + b_y dW_y(t), \quad &Y(t_0) = y_0,
\end{align}
$$
where $a_i = v_i + \partial_i K_i(x, y)$ is the deterministic drift term and $b_i = \sqrt{2K_i(x, y)}$ a stochastic noise term ($\partial_i$ denotes the partial derivative with respect to $i$).
Numerical Approximations of SDEs
The simplest numerical approximation of the above SDEs is obtained by replacing $dt$ by a finite time discrete step $\Delta t$ and $dW$ by a discrete increment $\Delta W$, yielding the Euler-Maruyama (EM) scheme (Maruyama, 1955):
$$
\begin{equation}
X_{n+1} = X_n + a_x \Delta t + b_x \Delta W_{n, x},
\end{equation}
$$
with a similar expression for $Y$.
A higher-order scheme is found by including extra terms from a Taylor expansion on our SDE, yielding the Milstein scheme of order 1 (M1):
$$
\begin{equation}
X_{n+1} = X_n + a_x \Delta t + b_x \Delta W_x + \frac{1}{2}b_x \partial_x b_x(\Delta W_{n, x}^2 - \Delta t),
\end{equation}
$$
which can be rewritten by explicitly writing $b_x\partial_x b_x$ as $\partial_x K_x(z)$:
$$
\begin{equation}
X_{n+1} = X_n + v_x \Delta t + \frac{1}{2}\partial_x K_x(\Delta W_{n, x}^2 + \Delta t) + b\Delta W_n.
\end{equation}
$$
The extra term in the M1 scheme provides extra accuracy at negligible computational cost.
The spatial derivatives in the EM and M1 schemes can be approximated by a central difference. Higher order numerical schemes (see Gräwe et al., 2012) include higher order derivatives. Since Parcels uses bilinear interpolation, these higher order derivatives cannot be computed, meaning that higher order numerical schemes cannot be used.
An overview of numerical approximations for SDEs in a particle tracking setting can be found in Gräwe (2011).
Using Advection-Diffusion Kernels in Parcels
The EM and M1 advection-diffusion approximations are available as AdvectionDiffusionEM and AdvectionDiffusionM1, respectively. The AdvectionDiffusionM1 kernel should be the default choice, as the increased accuracy comes at negligible computational cost.
The advection component of these kernels is similar to that of the Explicit Euler advection kernel (AdvectionEE). In the special case where diffusivity is constant over the entire domain, the diffusion-only kernel DiffusionUniformKh can be used in combination with an advection kernel of choice. Since the diffusivity here is space-independent, gradients are not calculated, increasing efficiency. The diffusion-step can in this case be computed after or before advection, thus allowing you to chain kernels using the + operator.
Just like velocities, diffusivities are passed to Parcels in the form of Field objects. When using DiffusionUniformKh, they should be added to the FieldSet object as constant fields, e.g. fieldset.add_constant_field("Kh_zonal", 1, mesh="flat").
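For instance, a constant-diffusivity setup could look roughly like this (a sketch only; the field values, runtime, and the existence of a fieldset and pset are assumed for illustration):
```python
# Space-independent diffusivities as constant fields, chained with RK4 advection
fieldset.add_constant_field("Kh_zonal", 1, mesh="flat")
fieldset.add_constant_field("Kh_meridional", 1, mesh="flat")
kernels = pset.Kernel(AdvectionRK4) + pset.Kernel(DiffusionUniformKh)
pset.execute(kernels, runtime=timedelta(seconds=1), dt=timedelta(seconds=0.001))
```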
To make a central difference approximation for computing the gradient in diffusivity, a resolution for this approximation dres is needed: Parcels approximates the gradients in diffusivities by using their values at the particle's location ± dres (in both $x$ and $y$). A value of dres must be specified and added to the FieldSet by the user (e.g. fieldset.add_constant("dres", 0.01)). Currently, it is unclear what the best value of dres is. From experience, the size of dres should be smaller than the spatial resolution of the data, but within reasonable limits of machine precision to avoid numerical errors. We are working on a method to compute gradients differently so that specifying dres is not necessary anymore.
Example: Impermeable Diffusivity Profile
Let's see the AdvectionDiffusionM1 in action and see why it's preferable over the AdvectionDiffusionEM kernel. To do so, we create an idealized profile with diffusivities $K_\text{zonal}$ uniform everywhere ($K_\text{zonal} = \bar{K}=0.5$) and $K_\text{meridional}$ constant in the zonal direction, while having the following profile in the meridional direction:
$$
K_\text{meridional}(y) = \bar{K}\frac{2(1+\alpha)(1+2\alpha)}{\alpha^2L^{1+1/\alpha}} \begin{cases}
y(L-2y)^{1/\alpha},\quad 0 \leq y \leq L/2,\
(L-y)(2y-L)^{1/\alpha},\quad L/2 \leq y \leq L,
\end{cases}
$$
with $L$ being the basin length scale, $\alpha$ as a parameter determining the steepness in the gradient in the profile. This profile is similar to that used by Gräwe (2011), now used in the meridional direction for illustrative purposes.
Let's plot $K_\text{meridional}(y)$:
End of explanation
xdim, ydim = (1, Ny)
data = {'U': np.zeros(ydim),
'V': np.zeros(ydim),
'Kh_zonal': K_bar*np.ones(ydim),
'Kh_meridional': Kh_meridional}
dims = {'lon': 1,
'lat': np.linspace(-0.01, 1.01, ydim, dtype=np.float32)}
fieldset = FieldSet.from_data(data, dims, mesh='flat', allow_time_extrapolation=True)
fieldset.add_constant('dres', 0.00005)
Explanation: In this profile, diffusivity drops to 0 at $y=0.5$ and at $y=0$ and $y=1$. In the absence of advection, particles starting out in one half of the domain should remain confined to that half as they are unable to cross the points where the diffusivity drops to 0. The line $y=0.5$ should therefore provide an impermeable barrier.
Now we can put this idealized profile into a flat fieldset:
End of explanation
def get_test_particles():
return ParticleSet.from_list(fieldset,
pclass=JITParticle,
lon=np.zeros(100),
lat=np.ones(100)*0.75,
time=np.zeros(100),
lonlatdepth_dtype=np.float64)
Explanation: We release 100 particles at ($x=0$, $y=0.75$).
End of explanation
dt = 0.001
testParticles = get_test_particles()
output_file = testParticles.ParticleFile(name="M1_out.nc",
outputdt=timedelta(seconds=dt))
ParcelsRandom.seed(1636) # Random seed for reproducibility
testParticles.execute(AdvectionDiffusionM1,
runtime=timedelta(seconds=0.3),
dt=timedelta(seconds=dt),
output_file=output_file,
verbose_progress=True)
output_file.close() # to write the output to a netCDF file, since `output_file` does not close automatically when using notebooks
M1_out = xr.open_dataset("M1_out.nc")
Explanation: Now we will simulate the advection and diffusion of the particles using the AdvectionDiffusionM1 kernel. We run the simulation for 0.3 seconds, with a numerical timestep $\Delta t = 0.001$s. We also write away particle locations at each timestep for plotting. Note that this will hinder a runtime comparison between kernels, since it will cause most time to be spent on I/O.
End of explanation
fig, ax = plt.subplots(1, 2)
fig.set_figwidth(12)
for data, ai, dim, ystart, ylim in zip([M1_out.lat, M1_out.lon], ax, ('y', 'x'), (0.75, 0), [(0, 1), (-1, 1)]):
ai.plot(np.arange(0, 0.3002, 0.001), data.T, alpha=0.3)
ai.scatter(0, ystart, s=20, c='r', zorder=3)
ai.set_xlabel("t")
ai.set_ylabel(dim)
ai.set_xlim(0, 0.3)
ai.set_ylim(ylim)
fig.suptitle("`AdvectionDiffusionM1` Simulation: Particle trajectories in the x- and y-directions against time")
plt.show()
Explanation: We can plot the individual coordinates of the particle trajectories against time ($x$ against $t$ and $y$ against $t$) to investigate how diffusion works along each axis.
End of explanation
dt = 0.001
testParticles = get_test_particles()
output_file = testParticles.ParticleFile(name="EM_out.nc",
outputdt=timedelta(seconds=dt))
ParcelsRandom.seed(1636) # Random seed for reproducibility
testParticles.execute(AdvectionDiffusionEM,
runtime=timedelta(seconds=0.3),
dt=timedelta(seconds=dt),
output_file=output_file,
verbose_progress=True)
output_file.close() # to write the output to a netCDF file, since `output_file` does not close automatically when using notebooks
EM_out = xr.open_dataset("EM_out.nc")
fig, ax = plt.subplots(1, 2)
fig.set_figwidth(12)
for data, ai, dim, ystart, ylim in zip([EM_out.lat, EM_out.lon], ax, ('y', 'x'), (0.75, 0), [(0, 1), (-1, 1)]):
ai.plot(np.arange(0, 0.3002, 0.001), data.T, alpha=0.3)
ai.scatter(0, ystart, s=20, c='r', zorder=3)
ai.set_xlabel("t")
ai.set_ylabel(dim)
ai.set_xlim(0, 0.3)
ai.set_ylim(ylim)
fig.suptitle("`AdvectionDiffusionEM` Simulation: Particle trajectories in the x- and y-directions against time")
plt.show()
Explanation: We see that along the meridional direction, particles remain confined to the ‘upper’ part of the domain, not crossing the impermeable barrier where the diffusivity drops to zero. In the zonal direction, particles follow random walks, since all terms involving gradients of the diffusivity are zero.
Now let's execute the simulation with the AdvectionDiffusionEM kernel instead.
End of explanation
def smagdiff(particle, fieldset, time):
dx = 0.01
# gradients are computed by using a local central difference.
dudx = (fieldset.U[time, particle.depth, particle.lat, particle.lon+dx]-fieldset.U[time, particle.depth, particle.lat, particle.lon-dx]) / (2*dx)
dudy = (fieldset.U[time, particle.depth, particle.lat+dx, particle.lon]-fieldset.U[time, particle.depth, particle.lat-dx, particle.lon]) / (2*dx)
dvdx = (fieldset.V[time, particle.depth, particle.lat, particle.lon+dx]-fieldset.V[time, particle.depth, particle.lat, particle.lon-dx]) / (2*dx)
dvdy = (fieldset.V[time, particle.depth, particle.lat+dx, particle.lon]-fieldset.V[time, particle.depth, particle.lat-dx, particle.lon]) / (2*dx)
A = fieldset.cell_areas[time, 0, particle.lat, particle.lon]
sq_deg_to_sq_m = (1852*60)**2*math.cos(particle.lat*math.pi/180)
A = A / sq_deg_to_sq_m
Kh = fieldset.Cs * A * math.sqrt(dudx**2 + 0.5*(dudy + dvdx)**2 + dvdy**2)
dlat = ParcelsRandom.normalvariate(0., 1.) * math.sqrt(2*math.fabs(particle.dt)* Kh)
dlon = ParcelsRandom.normalvariate(0., 1.) * math.sqrt(2*math.fabs(particle.dt)* Kh)
particle.lat += dlat
particle.lon += dlon
Explanation: The Wiener increments for both simulations are equal, as they are fixed through a random seed. As we can see, the Euler-Maruyama scheme performs worse than the Milstein scheme, letting particles cross the impermeable barrier at $y=0.5$. In contrast, along the zonal direction, particles follow the same random walk as in the Milstein scheme, which is expected since the extra terms in the Milstein scheme are zero in this case.
Example: Using horizontal diffusion calculated from velocity fields
When velocity fields are available, diffusion coefficients can be calculated from closure parameterizations. The Smagorinsky method (Smagorinsky, 1963), which was originally proposed as a parameterization for horizontal eddy viscosity, is often used to parameterize horizontal eddy diffusivity as well. It computes the eddy diffusivity as
$$
K = C_s \Delta x \Delta y \sqrt{\left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 + \frac{1}{2}\left(\frac{\partial u}{\partial y} +\frac{\partial v}{\partial x}\right)^2},
$$
where $C_s$, the Smagorinsky constant, is a dimensionless tuning parameter. It uses the grid area $\Delta x \Delta y$ as its spatial scale, and the norm of the strain rate tensor as its time scale, given as the square-rooted term.
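As a quick numerical illustration of this formula (an added example with made-up gradient values, not taken from the data):
```python
import math
Cs, cell_area = 0.1, 1.0e8                           # dimensionless, m^2 (illustrative)
dudx, dudy, dvdx, dvdy = 1e-5, 2e-6, -3e-6, -1e-5    # velocity gradients in 1/s (illustrative)
K = Cs * cell_area * math.sqrt(dudx**2 + 0.5 * (dudy + dvdx)**2 + dvdy**2)
print(K, "m^2/s")
```
This is the same expression the smagdiff kernel above evaluates at each particle location, using central differences for the gradients and the local cell area.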
Let's look at an example that applies the Smagorinsky method to the GlobCurrent files for the region around South Africa. For simplicity, we are not taking gradients in the Smagorinsky-computed diffusivity field into account here.
First, create a new kernel for the Smagorinsky diffusion method:
End of explanation
filenames = {'U': 'GlobCurrent_example_data/20*.nc', 'V': 'GlobCurrent_example_data/20*.nc'}
variables = {'U': 'eastward_eulerian_current_velocity', 'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat', 'lon': 'lon', 'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
Explanation: Reading velocity fields from netcdf files
End of explanation
x = fieldset.U.grid.lon
y = fieldset.U.grid.lat
cell_areas = Field(name='cell_areas', data=fieldset.U.cell_areas(), lon=x, lat=y)
fieldset.add_field(cell_areas)
fieldset.add_constant('Cs', 0.1)
Explanation: Adding parameters (cell_areas – areas of computational cells, and Cs – Smagorinsky constant) to fieldset that are needed for the smagdiff kernel
End of explanation
lon = 29
lat = -33
repeatdt = timedelta(hours=12)
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
lon=lon, lat=lat,
repeatdt=repeatdt)
Explanation: In the example, particles are released at one location periodically (every 12 hours)
End of explanation
def DeleteParticle(particle, fieldset, time):
particle.delete()
Explanation: If particles leave the model area, they are deleted
End of explanation
kernels = pset.Kernel(AdvectionRK4) + pset.Kernel(smagdiff)
output_file = pset.ParticleFile(name='Global_smagdiff.nc', outputdt=timedelta(hours=6))
pset.execute(kernels, runtime=timedelta(days=5), dt=timedelta(minutes=5), output_file=output_file, recovery={ErrorCode.ErrorOutOfBounds: DeleteParticle})
pset.show(field=fieldset.U)
Explanation: Model the particle movement for 5 days using the advection (AdvectionRK4) and diffusion (smagdiff) kernels.
End of explanation
pset.repeatdt = None
pset.execute(kernels, runtime=timedelta(days=25), dt=timedelta(minutes=5), output_file=output_file, recovery={ErrorCode.ErrorOutOfBounds: DeleteParticle})
pset.show(field=fieldset.U)
Explanation: Stop new particles appearing and continue the particleset execution for another 25 days
End of explanation
output_file.export()
plotTrajectoriesFile('Global_smagdiff.nc',
tracerfile='GlobCurrent_example_data/20020120000000-GLOBCURRENT-L4-CUReul_hs-ALT_SUM-v02.0-fv01.0.nc',
tracerlon='lon', tracerlat='lat', tracerfield='eastward_eulerian_current_velocity');
Explanation: Save the output file and visualise the trajectories
End of explanation |
680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Human Pose Classification with MoveNet and TensorFlow Lite
This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. It consists of 3 parts
Step9: Code to run pose estimation using MoveNet
Step10: Part 1
Step11: Preprocess the TRAIN dataset
Step12: Preprocess the TEST dataset
Step13: Part 2
Step15: Load the preprocessed CSVs into TRAIN and TEST datasets.
Step16: Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%).
Step21: Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification
Next, convert the landmark coordinates to a feature vector by
Step22: Define a Keras model for pose classification
Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class.
Step24: Draw the confusion matrix to better understand the model performance
Step25: (Optional) Investigate incorrect predictions
You can look at the poses from the TEST dataset that were incorrectly predicted to see whether the model accuracy can be improved.
Step26: Part 3
Step27: Then you'll write the label file which contains mapping from the class indexes to the human readable class names.
Step29: As you've applied quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable.
Step30: Now you can download the TFLite model (model.tflite) and the label file (labels.txt) to classify custom poses. See the Android and Python/Raspberry Pi sample app for an end-to-end example of how to use the TFLite pose classification model. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip install -q opencv-python
import csv
import cv2
import itertools
import numpy as np
import pandas as pd
import os
import sys
import tempfile
import tqdm
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
from matplotlib import patches
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
Explanation: Human Pose Classification with MoveNet and TensorFlow Lite
This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. It consists of 3 parts:
* Part 1: Preprocess the pose classification training images into a CSV containing the landmarks detected from the images, and the ground truth labels.
* Part 2: Train a pose classification model that takes the landmark coordinates as input, and output the predicted labels.
* Part 3: Convert the pose classification model to TFLite.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/pose_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/s?q=movenet"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Preparation
In this step, you'll import the necessary libraries and define several functions to preprocess the training images into a CSV that contains the landmark coordinates and ground truth labels.
The input images needs to be put into a folder structure as below:
* Training images of each class should be stored in its own folder.
* The folder name should be the class name.
For example:
yoga_poses
|__ downdog
|______ 00000128.jpg
|______ 00000181.bmp
|______ ...
|__ goddess
|______ 00000243.jpg
|______ 00000306.jpg
|______ ...
...
The preprocessing logic outputs a CSV file that contains the pose landmarks detected by MoveNet from each image and the labels that can be used for training a pose classification model later.
If you only want to create the CSV file without knowing all the details, you can just run this section and proceed.
End of explanation
#@title Functions to run pose estimation with MoveNet
#@markdown You'll download the MoveNet Thunder model from [TensorFlow Hub](https://www.google.com/url?sa=D&q=https%3A%2F%2Ftfhub.dev%2Fs%3Fq%3Dmovenet), and reuse some inference and visualization logic from the [MoveNet Raspberry Pi (Python)](https://github.com/tensorflow/examples/tree/master/lite/examples/pose_estimation/raspberry_pi) sample app to detect landmarks (ear, nose, wrist etc.) from the input images.
#@markdown *Note: You should use the most accurate pose estimation model (i.e. MoveNet Thunder) to detect the keypoints and use them to train the pose classification model to achieve the best accuracy. When running inference, you can use a pose estimation model of your choice (e.g. either MoveNet Lightning or Thunder).*
# Download model from TF Hub and check out inference code from GitHub
!wget -q -O movenet_thunder.tflite https://tfhub.dev/google/lite-model/movenet/singlepose/thunder/tflite/float16/4?lite-format=tflite
!git clone https://github.com/tensorflow/examples.git
pose_sample_rpi_path = os.path.join(os.getcwd(), 'examples/lite/examples/pose_estimation/raspberry_pi')
sys.path.append(pose_sample_rpi_path)
# Load MoveNet Thunder model
import utils
from movenet import Movenet
movenet = Movenet('movenet_thunder')
# Define function to run pose estimation using MoveNet Thunder.
# You'll apply MoveNet's cropping algorithm and run inference multiple times on
# the input image to improve pose estimation accuracy.
def detect(input_tensor, inference_count=3):
Runs detection on an input image.
Args:
input_tensor: A [1, height, width, 3] Tensor of type tf.float32.
Note that height and width can be anything since the image will be
immediately resized according to the needs of the model within this
function.
inference_count: Number of times the model should run repeatedly on the
same input image to improve detection accuracy.
Returns:
A dict containing 1 Tensor of shape [1, 1, 17, 3] representing the
keypoint coordinates and scores.
image_height, image_width, channel = input_tensor.shape
# Detect pose using the full input image
movenet.detect(input_tensor.numpy(), reset_crop_region=True)
# Repeatedly using previous detection result to identify the region of
# interest and only croping that region to improve detection accuracy
for _ in range(inference_count - 1):
keypoint_with_scores = movenet.detect(input_tensor.numpy(),
reset_crop_region=False)
return keypoint_with_scores
#@title Functions to visualize the pose estimation results.
def draw_prediction_on_image(
image, keypoints_with_scores, crop_region=None, close_figure=True,
keep_input_size=False):
Draws the keypoint predictions on image.
Args:
image: A numpy array with shape [height, width, channel] representing the
pixel values of the input image.
keypoints_with_scores: A numpy array with shape [1, 1, 17, 3] representing
the keypoint coordinates and scores returned from the MoveNet model.
crop_region: Set the region to crop the output image.
close_figure: Whether to close the plt figure after the function returns.
keep_input_size: Whether to keep the size of the input image.
Returns:
A numpy array with shape [out_height, out_width, channel] representing the
image overlaid with keypoint predictions.
height, width, channel = image.shape
aspect_ratio = float(width) / height
fig, ax = plt.subplots(figsize=(12 * aspect_ratio, 12))
# To remove the huge white borders
fig.tight_layout(pad=0)
ax.margins(0)
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.axis('off')
im = ax.imshow(image)
line_segments = LineCollection([], linewidths=(2), linestyle='solid')
ax.add_collection(line_segments)
# Turn off tick labels
scat = ax.scatter([], [], s=60, color='#FF1493', zorder=2)
# Calculate visualization items from pose estimation result
(keypoint_locs, keypoint_edges,
edge_colors) = utils.keypoints_and_edges_for_display(
keypoints_with_scores, height, width)
edge_colors = [(r/255.0, g/255.0, b/255.0) for (r ,g , b) in edge_colors]
if keypoint_edges.shape[0]:
line_segments.set_segments(keypoint_edges)
line_segments.set_color(edge_colors)
if keypoint_locs.shape[0]:
scat.set_offsets(keypoint_locs)
if crop_region is not None:
xmin = max(crop_region['x_min'] * width, 0.0)
ymin = max(crop_region['y_min'] * height, 0.0)
rec_width = min(crop_region['x_max'], 0.99) * width - xmin
rec_height = min(crop_region['y_max'], 0.99) * height - ymin
rect = patches.Rectangle(
(xmin,ymin),rec_width,rec_height,
linewidth=1,edgecolor='b',facecolor='none')
ax.add_patch(rect)
fig.canvas.draw()
image_from_plot = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
image_from_plot = image_from_plot.reshape(
fig.canvas.get_width_height()[::-1] + (3,))
if close_figure:
plt.close(fig)
if keep_input_size:
image_from_plot = cv2.resize(image_from_plot, dsize=(width, height),
interpolation=cv2.INTER_CUBIC)
return image_from_plot
#@title Code to load the images, detect pose landmarks and save them into a CSV file
class MoveNetPreprocessor(object):
Helper class to preprocess pose sample images for classification.
def __init__(self,
images_in_folder,
images_out_folder,
csvs_out_path):
Creates a preprocessor to detect poses from images and save them as a CSV.
Args:
images_in_folder: Path to the folder with the input images. It should
follow this structure:
yoga_poses
|__ downdog
|______ 00000128.jpg
|______ 00000181.bmp
|______ ...
|__ goddess
|______ 00000243.jpg
|______ 00000306.jpg
|______ ...
...
images_out_folder: Path to write the images overlaid with detected
landmarks. These images are useful when you need to debug accuracy
issues.
csvs_out_path: Path to write the CSV containing the detected landmark
coordinates and label of each image that can be used to train a pose
classification model.
self._images_in_folder = images_in_folder
self._images_out_folder = images_out_folder
self._csvs_out_path = csvs_out_path
self._messages = []
# Create a temp dir to store the pose CSVs per class
self._csvs_out_folder_per_class = tempfile.mkdtemp()
# Get list of pose classes and print image statistics
self._pose_class_names = sorted(
[n for n in os.listdir(self._images_in_folder) if not n.startswith('.')]
)
def process(self, per_pose_class_limit=None, detection_threshold=0.1):
Preprocesses images in the given folder.
Args:
per_pose_class_limit: Number of images to load. As preprocessing usually
takes time, this parameter can be specified to reduce the
dataset for testing.
detection_threshold: Only keep images with all landmark confidence score
above this threshold.
# Loop through the classes and preprocess its images
for pose_class_name in self._pose_class_names:
print('Preprocessing', pose_class_name, file=sys.stderr)
# Paths for the pose class.
images_in_folder = os.path.join(self._images_in_folder, pose_class_name)
images_out_folder = os.path.join(self._images_out_folder, pose_class_name)
csv_out_path = os.path.join(self._csvs_out_folder_per_class,
pose_class_name + '.csv')
if not os.path.exists(images_out_folder):
os.makedirs(images_out_folder)
# Detect landmarks in each image and write it to a CSV file
with open(csv_out_path, 'w') as csv_out_file:
csv_out_writer = csv.writer(csv_out_file,
delimiter=',',
quoting=csv.QUOTE_MINIMAL)
# Get list of images
image_names = sorted(
[n for n in os.listdir(images_in_folder) if not n.startswith('.')])
if per_pose_class_limit is not None:
image_names = image_names[:per_pose_class_limit]
# Detect pose landmarks from each image
for image_name in tqdm.tqdm(image_names):
image_path = os.path.join(images_in_folder, image_name)
try:
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image)
except:
self._messages.append('Skipped ' + image_path + '. Invalid image.')
continue
else:
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image)
image_height, image_width, channel = image.shape
# Skip images that aren't RGB because MoveNet requires RGB images
if channel != 3:
self._messages.append('Skipped ' + image_path +
'. Image isn\'t in RGB format.')
continue
keypoint_with_scores = detect(image)
# Save landmarks if all landmarks were detected
min_landmark_score = np.amin(keypoint_with_scores[:, 2])
should_keep_image = min_landmark_score >= detection_threshold
if not should_keep_image:
self._messages.append('Skipped ' + image_path +
'. No pose was confidently detected.')
continue
# Draw the prediction result on top of the image for debugging later
output_overlay = draw_prediction_on_image(
image.numpy().astype(np.uint8), keypoint_with_scores,
crop_region=None, close_figure=True, keep_input_size=True)
# Write detection result to into an image file
output_frame = cv2.cvtColor(output_overlay, cv2.COLOR_RGB2BGR)
cv2.imwrite(os.path.join(images_out_folder, image_name), output_frame)
# Get landmarks and scale it to the same size as the input image
pose_landmarks = np.array(
[[lmk[0] * image_width, lmk[1] * image_height, lmk[2]]
for lmk in keypoint_with_scores],
dtype=np.float32)
# Write the landmark coordinates to its per-class CSV file
coordinates = pose_landmarks.flatten().astype(str).tolist()
csv_out_writer.writerow([image_name] + coordinates)
# Print the error message collected during preprocessing.
print('\n'.join(self._messages))
# Combine all per-class CSVs into a single output file
all_landmarks_df = self._all_landmarks_as_dataframe()
all_landmarks_df.to_csv(self._csvs_out_path, index=False)
def class_names(self):
List of classes found in the training dataset.
return self._pose_class_names
def _all_landmarks_as_dataframe(self):
Merge all per-class CSVs into a single dataframe.
total_df = None
for class_index, class_name in enumerate(self._pose_class_names):
csv_out_path = os.path.join(self._csvs_out_folder_per_class,
class_name + '.csv')
per_class_df = pd.read_csv(csv_out_path, header=None)
# Add the labels
per_class_df['class_no'] = [class_index]*len(per_class_df)
per_class_df['class_name'] = [class_name]*len(per_class_df)
# Append the folder name to the filename column (first column)
per_class_df[per_class_df.columns[0]] = (os.path.join(class_name, '')
+ per_class_df[per_class_df.columns[0]].astype(str))
if total_df is None:
# For the first class, assign its data to the total dataframe
total_df = per_class_df
else:
# Concatenate each class's data into the total dataframe
total_df = pd.concat([total_df, per_class_df], axis=0)
list_name = [[key + '_x', key + '_y', key + '_score']
for key in utils.KEYPOINT_DICT.keys()]
header_name = []
for columns_name in list_name:
header_name += columns_name
header_name = ['file_name'] + header_name
header_map = {total_df.columns[i]: header_name[i]
for i in range(len(header_name))}
total_df.rename(header_map, axis=1, inplace=True)
return total_df
#@title (Optional) Code snippet to try out the Movenet pose estimation logic
#@markdown You can download an image from the internet, run the pose estimation logic on it and plot the detected landmarks on top of the input image.
#@markdown *Note: This code snippet is also useful for debugging when you encounter an image with bad pose classification accuracy. You can run pose estimation on the image and see if the detected landmarks look correct or not before investigating the pose classification logic.*
test_image_url = "https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg" #@param {type:"string"}
!wget -O /tmp/image.jpeg {test_image_url}
if len(test_image_url):
image = tf.io.read_file('/tmp/image.jpeg')
image = tf.io.decode_jpeg(image)
keypoint_with_scores = detect(image)
_ = draw_prediction_on_image(image, keypoint_with_scores, crop_region=None,
close_figure=False, keep_input_size=True)
Explanation: Code to run pose estimation using MoveNet
End of explanation
!wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
!unzip -q yoga_poses.zip -d yoga_cg
Explanation: Part 1: Preprocess the input images
You'll use the functions defined earlier to preprocess the input images into a CSV file that contains the detected landmarks and ground truth labels. You'll also plot the pose landmarks onto the input image to make debugging easier later on.
The code snippet used to preprocess the input image is as below.
```python
images_in_folder = 'yoga_poses'
images_out_folder = 'yoga_poses_with_landmarks'
csvs_out_path = 'landmarks.csv'
Create a preprocessor object
preprocessor = MoveNetPreprocessor(
images_in_folder=images_in_folder,
images_out_folder=images_out_folder,
csvs_out_path=csvs_out_path,
)
Start preprocessing the input images.
You can set per_pose_class_limit to a small number for debugging.
preprocessor.process(per_pose_class_limit=None)
```
In this tutorial, you'll use a CG-generated yoga pose dataset. It contains images of multiple CG-generated models doing 5 different yoga poses. There's a TRAIN dataset and a TEST dataset. You start by downloading the dataset to Colab and preprocess it.
Note: It takes about 15 minutes to finish this preprocessing step. If you don't want to wait and just want to run through this tutorial, you can skip to part 2 and uncomment the code in the (Optional) Download the preprocessed dataset if you didn't run part 1 section to download the CSV files, which are the same as those that would be created in this preprocessing step.
End of explanation
images_in_train_folder = 'yoga_cg/train'
images_out_train_folder = 'poses_images_out_train'
csvs_out_train_path = 'train_data.csv'
preprocessor = MoveNetPreprocessor(
images_in_folder=images_in_train_folder,
images_out_folder=images_out_train_folder,
csvs_out_path=csvs_out_train_path,
)
preprocessor.process(per_pose_class_limit=None)
Explanation: Preprocess the TRAIN dataset
End of explanation
images_in_test_folder = 'yoga_cg/test'
images_out_test_folder = 'poses_images_out_test'
csvs_out_test_path = 'test_data.csv'
preprocessor = MoveNetPreprocessor(
images_in_folder=images_in_test_folder,
images_out_folder=images_out_test_folder,
csvs_out_path=csvs_out_test_path,
)
preprocessor.process(per_pose_class_limit=None)
Explanation: Preprocess the TEST dataset
End of explanation
# # Download the preprocessed CSV so that you don't need to run the part 1 again.
# !wget -O train_data.csv http://download.tensorflow.org/data/pose_classification/yoga_train_data.csv
# !wget -O test_data.csv http://download.tensorflow.org/data/pose_classification/yoga_test_data.csv
# csvs_out_train_path = 'train_data.csv'
# csvs_out_test_path = 'test_data.csv'
Explanation: Part 2: Train a pose classification model that takes the landmark coordinates as input and outputs the predicted labels.
You'll build a TensorFlow model that takes the landmark coordinates and predicts the pose class that the person in the input image performs. The model consists of two submodels:
Submodel 1 calculates a pose embedding (a.k.a feature vector) from the detected landmark coordinates.
Submodel 2 feeds the pose embedding through several Dense layers to predict the pose class.
You'll then train the model based on the dataset that was preprocessed in part 1.
(Optional) Download the preprocessed dataset if you didn't run part 1
End of explanation
def load_pose_landmarks(csv_path):
Loads a CSV created by the MoveNetPreprocessor.
Returns:
X: Detected landmark coordinates and scores of shape (N, 17 * 3)
y: Ground truth labels of shape (N, label_count)
classes: The list of all class names found in the dataset
dataframe: The CSV loaded as a Pandas dataframe with the features (X) and ground
truth labels (y) to use later to train a pose classification model.
# Load the CSV file
dataframe = pd.read_csv(csv_path)
df_to_process = dataframe.copy()
# Drop the file_name columns as you don't need it during training.
df_to_process.drop(columns=['file_name'], inplace=True)
# Extract the list of class names
classes = df_to_process.pop('class_name').unique()
# Extract the labels
y = df_to_process.pop('class_no')
# Convert the input features and labels into the correct format for training.
X = df_to_process.astype('float64')
y = keras.utils.to_categorical(y)
return X, y, classes, dataframe
Explanation: Load the preprocessed CSVs into TRAIN and TEST datasets.
End of explanation
# Load the train data
X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path)
# Split training data (X, y) into (X_train, y_train) and (X_val, y_val)
X_train, X_val, y_train, y_val = train_test_split(X, y,
test_size=0.15)
# Load the test data
X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path)
Explanation: Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%).
End of explanation
def get_center_point(landmarks, left_name, right_name):
Calculates the center point of the two given landmarks.
left = tf.gather(landmarks, utils.KEYPOINT_DICT[left_name], axis=1)
right = tf.gather(landmarks, utils.KEYPOINT_DICT[right_name], axis=1)
center = left * 0.5 + right * 0.5
return center
def get_pose_size(landmarks, torso_size_multiplier=2.5):
Calculates pose size.
It is the maximum of two values:
* Torso size multiplied by `torso_size_multiplier`
* Maximum distance from pose center to any pose landmark
# Hips center
hips_center = get_center_point(landmarks, "left_hip", "right_hip")
# Shoulders center
shoulders_center = get_center_point(landmarks,
"left_shoulder", "right_shoulder")
# Torso size as the minimum body size
torso_size = tf.linalg.norm(shoulders_center - hips_center)
# Pose center
pose_center_new = get_center_point(landmarks, "left_hip", "right_hip")
pose_center_new = tf.expand_dims(pose_center_new, axis=1)
# Broadcast the pose center to the same size as the landmark vector to
# perform subtraction
pose_center_new = tf.broadcast_to(pose_center_new,
[tf.size(landmarks) // (17*2), 17, 2])
# Dist to pose center
d = tf.gather(landmarks - pose_center_new, 0, axis=0,
name="dist_to_pose_center")
# Max dist to pose center
max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0))
# Normalize scale
pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist)
return pose_size
def normalize_pose_landmarks(landmarks):
Normalizes the landmarks translation by moving the pose center to (0,0) and
scaling it to a constant pose size.
# Move landmarks so that the pose center becomes (0,0)
pose_center = get_center_point(landmarks, "left_hip", "right_hip")
pose_center = tf.expand_dims(pose_center, axis=1)
# Broadcast the pose center to the same size as the landmark vector to perform
# subtraction
pose_center = tf.broadcast_to(pose_center,
[tf.size(landmarks) // (17*2), 17, 2])
landmarks = landmarks - pose_center
# Scale the landmarks to a constant pose size
pose_size = get_pose_size(landmarks)
landmarks /= pose_size
return landmarks
def landmarks_to_embedding(landmarks_and_scores):
Converts the input landmarks into a pose embedding.
# Reshape the flat input into a matrix with shape=(17, 3)
reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores)
# Normalize landmarks 2D
landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2])
# Flatten the normalized landmark coordinates into a vector
embedding = keras.layers.Flatten()(landmarks)
return embedding
Explanation: Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification
Next, convert the landmark coordinates to a feature vector by:
1. Moving the pose center to the origin.
2. Scaling the pose so that the pose size becomes 1
3. Flattening these coordinates into a feature vector
Then use this feature vector to train a neural-network based pose classifier.
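As a quick, added sanity check of these helpers (not part of the original notebook), a dummy 51-value input (17 landmarks x 3 values) should come out as 34 embedding values (17 keypoints x 2 normalized coordinates), since the confidence scores are dropped:
```python
dummy_landmarks = tf.random.uniform((1, 51))
print(landmarks_to_embedding(dummy_landmarks).shape)  # expected: (1, 34)
```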
End of explanation
# Define the model
inputs = tf.keras.Input(shape=(51))
embedding = landmarks_to_embedding(inputs)
layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(5, activation="softmax")(layer)
model = keras.Model(inputs, outputs)
model.summary()
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
# Add a checkpoint callback to store the checkpoint that has the highest
# validation accuracy.
checkpoint_path = "weights.best.hdf5"
checkpoint = keras.callbacks.ModelCheckpoint(checkpoint_path,
monitor='val_accuracy',
verbose=1,
save_best_only=True,
mode='max')
earlystopping = keras.callbacks.EarlyStopping(monitor='val_accuracy',
patience=20)
# Start training
history = model.fit(X_train, y_train,
epochs=200,
batch_size=16,
validation_data=(X_val, y_val),
callbacks=[checkpoint, earlystopping])
# Visualize the training history to see whether you're overfitting.
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='lower right')
plt.show()
# Evaluate the model using the TEST dataset
loss, accuracy = model.evaluate(X_test, y_test)
Explanation: Define a Keras model for pose classification
Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class.
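With the model trained in the cell above, a single prediction is a one-liner (an added illustration):
```python
# Predict the pose class of the first TEST sample
probs = model.predict(X_test[:1])
print(class_names[np.argmax(probs[0])])
```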
End of explanation
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
Plots the confusion matrix.
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=55)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
# Classify pose in the TEST dataset using the trained model
y_pred = model.predict(X_test)
# Convert the prediction result to class name
y_pred_label = [class_names[i] for i in np.argmax(y_pred, axis=1)]
y_true_label = [class_names[i] for i in np.argmax(y_test, axis=1)]
# Plot the confusion matrix
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))
plot_confusion_matrix(cm,
class_names,
title ='Confusion Matrix of Pose Classification Model')
# Print the classification report
print('\nClassification Report:\n', classification_report(y_true_label,
y_pred_label))
Explanation: Draw the confusion matrix to better understand the model performance
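If you prefer not to maintain a plotting helper yourself, recent scikit-learn versions also ship one that works directly with the confusion matrix computed above (an added aside, assuming your installed version provides it):
```python
from sklearn.metrics import ConfusionMatrixDisplay
ConfusionMatrixDisplay(cm, display_labels=class_names).plot(xticks_rotation=55)
```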
End of explanation
IMAGE_PER_ROW = 3
MAX_NO_OF_IMAGE_TO_PLOT = 30
# Extract the list of incorrectly predicted poses
false_predict = [id_in_df for id_in_df in range(len(y_test)) \
if y_pred_label[id_in_df] != y_true_label[id_in_df]]
if len(false_predict) > MAX_NO_OF_IMAGE_TO_PLOT:
false_predict = false_predict[:MAX_NO_OF_IMAGE_TO_PLOT]
# Plot the incorrectly predicted images
row_count = len(false_predict) // IMAGE_PER_ROW + 1
fig = plt.figure(figsize=(10 * IMAGE_PER_ROW, 10 * row_count))
for i, id_in_df in enumerate(false_predict):
ax = fig.add_subplot(row_count, IMAGE_PER_ROW, i + 1)
image_path = os.path.join(images_out_test_folder,
df_test.iloc[id_in_df]['file_name'])
image = cv2.imread(image_path)
plt.title("Predict: %s; Actual: %s"
% (y_pred_label[id_in_df], y_true_label[id_in_df]))
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
Explanation: (Optional) Investigate incorrect predictions
You can look at the poses from the TEST dataset that were incorrectly predicted to see whether the model accuracy can be improved.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
print('Model size: %dKB' % (len(tflite_model) / 1024))
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
Explanation: Part 3: Convert the pose classification model to TensorFlow Lite
You'll convert the Keras pose classification model to the TensorFlow Lite format so that you can deploy it to mobile apps, web browsers and IoT devices. When converting the model, you'll apply dynamic range quantization to reduce the pose classification TensorFlow Lite model size by about 4 times with insignificant accuracy loss.
Note: TensorFlow Lite supports multiple quantization schemes. See the documentation if you are interested to learn more.
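If you are curious how much the dynamic range quantization saves, you can convert the same model once more without optimizations and compare the sizes (an optional, added snippet):
```python
baseline_tflite = tf.lite.TFLiteConverter.from_keras_model(model).convert()
print('Unquantized: %dKB, quantized: %dKB'
      % (len(baseline_tflite) / 1024, len(tflite_model) / 1024))
```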
End of explanation
with open('labels.txt', 'w') as f:
f.write('\n'.join(class_names))
Explanation: Then you'll write the label file which contains mapping from the class indexes to the human readable class names.
End of explanation
def evaluate_model(interpreter, X, y_true):
  """Evaluates the given TFLite model and returns its accuracy."""
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on all given poses.
y_pred = []
for i in range(len(y_true)):
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = X[i: i + 1].astype('float32')
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the class with highest
# probability.
output = interpreter.tensor(output_index)
predicted_label = np.argmax(output()[0])
y_pred.append(predicted_label)
# Compare prediction results with ground truth labels to calculate accuracy.
y_pred = keras.utils.to_categorical(y_pred)
return accuracy_score(y_true, y_pred)
# Evaluate the accuracy of the converted TFLite model
classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model)
classifier_interpreter.allocate_tensors()
print('Accuracy of TFLite model: %s' %
evaluate_model(classifier_interpreter, X_test, y_test))
Explanation: As you've applied quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable.
End of explanation
!zip classifier.zip labels.txt model.tflite
# Download the zip archive if running on Colab.
try:
from google.colab import files
files.download('classifier.zip')
except:
pass
Explanation: Now you can download the TFLite model (model.tflite) and the label file (labels.txt) to classify custom poses. See the Android and Python/Raspberry Pi sample app for an end-to-end example of how to use the TFLite pose classification model.
End of explanation |
681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wing modelling example
In this example, we demonstrate, how to build up a wing surface by starting with a list of curves. These curves are then interpolated using a B-spline suface interpolation
Importing modules
Again, all low-level geometry functions can be found in the tigl3.geometry module. For a more convenient use,
the module tigl3.surface_factories offers functions to create surfaces. Let's use this!
Step1: Create profile points
Now, we want to create 3 profiles that are the input for the profile curves. The wing should have one curve at its root, one at its outer end and one at the tip of a winglet.
Step2: Build profiles curves
Now, let's build the profile curves using tigl3.curve_factories.interpolate_points as done in the Airfoil example.
Step3: Create the surface
The final surface is created with the B-spline interpolation from the tigl3.surface_factories package.
If you want, comment out the second line and play around with the curve parameters, especially the second value. What influence do they have on the final shape?
Step4: The function tigl3.surface_factories.interpolate_curves has many more parameters that influence the resulting shape. Lets have a look
Step5: Visualize the result
Now, let's draw our wing. What does it look like? What can be improved?
Note | Python Code:
import tigl3.curve_factories
import tigl3.surface_factories
from OCC.gp import gp_Pnt
from OCC.Display.SimpleGui import init_display
import numpy as np
Explanation: Wing modelling example
In this example, we demonstrate, how to build up a wing surface by starting with a list of curves. These curves are then interpolated using a B-spline suface interpolation
Importing modules
Again, all low-level geometry functions can be found in the tigl3.geometry module. For a more convenient use,
the module tigl3.surface_factories offers functions to create surfaces. Let's use this!
End of explanation
# list of points on NACA2412 profile
px = [1.000084, 0.975825, 0.905287, 0.795069, 0.655665, 0.500588, 0.34468, 0.203313, 0.091996, 0.022051, 0.0, 0.026892, 0.098987, 0.208902, 0.346303, 0.499412, 0.653352, 0.792716, 0.90373, 0.975232, 0.999916]
py = [0.001257, 0.006231, 0.019752, 0.03826, 0.057302, 0.072381, 0.079198, 0.072947, 0.054325, 0.028152, 0.0, -0.023408, -0.037507, -0.042346, -0.039941, -0.033493, -0.0245, -0.015499, -0.008033, -0.003035, -0.001257]
points_c1 = np.array([pnt for pnt in zip(px, [0.]*len(px), py)]) * 2.
points_c2 = np.array([pnt for pnt in zip(px, [0]*len(px), py)])
points_c3 = np.array([pnt for pnt in zip(px, py, [0.]*len(px))]) * 0.2
# shift sections to their correct position
# second curve at y = 7
points_c2 += np.array([1.0, 7, 0])
# third curve at y = 7.5
points_c3[:, 1] *= -1
points_c3 += np.array([1.7, 7.8, 1.0])
Explanation: Create profile points
Now, we want to create 3 profiles that are the input for the profile curves. The wing should have one curve at its root, one at its outer end and one at the tip of a winglet.
End of explanation
curve1 = tigl3.curve_factories.interpolate_points(points_c1)
curve2 = tigl3.curve_factories.interpolate_points(points_c2)
curve3 = tigl3.curve_factories.interpolate_points(points_c3)
Explanation: Build profiles curves
Now, let's build the profile curves using tigl3.curve_factories.interpolate_points as done in the Airfoil example.
End of explanation
surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3])
# surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3], [0., 0.7, 1.])
# surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3], degree=1)
Explanation: Create the surface
The final surface is created with the B-spline interpolation from the tigl3.surface_factories package.
If you want, comment out the second line and play around with the curve parameters, especially the second value. What influence do they have on the final shape?
End of explanation
tigl3.surface_factories.interpolate_curves?
Explanation: The function tigl3.surface_factories.interpolate_curves has many more parameters that influence the resulting shape. Lets have a look:
End of explanation
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()
# make tesselation more accurate
display.Context.SetDeviationCoefficient(0.0001)
# draw the curve
display.DisplayShape(curve1)
display.DisplayShape(curve2)
display.DisplayShape(curve3)
display.DisplayShape(surface)
# match content to screen and start the event loop
display.FitAll()
start_display()
Explanation: Visualize the result
Now, let's draw our wing. What does it look like? What can be improved?
Note: a separate window with the 3D Viewer is opening!
End of explanation |
682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: Import raw data
The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
Step2: We'll generate a list of pairs of stypes and channels for ease of use.
Step3: We can now read in all datafiles specified by the data dictionary above.
Step4: Calculate landmark bins
Step5: Calculate landmark bins based on user input parameters and the previously specified control sample.
Step6: Calculate landmarks | Python Code:
import deltascope as ds
import deltascope.alignment as ut
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
from scipy.optimize import minimize
import os
import tqdm
import json
import time
Explanation: Introduction: Landmarks
End of explanation
# --------------------------------
# -------- User input ------------
# --------------------------------
data = {
# Specify sample type key
'wt': {
# Specify path to data directory
        'path': r'path\to\data\directory\sample1',  # raw string so the backslashes are kept literally
# Specify which channels are in the directory and are of interest
'channels': ['AT','ZRF']
},
'stype2': {
        'path': r'path\to\data\directory\sample2',
'channels': ['AT','ZRF']
}
}
Explanation: Import raw data
The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
End of explanation
data_pairs = []
for s in data.keys():
for c in data[s]['channels']:
data_pairs.append((s,c))
Explanation: We'll generate a list of pairs of stypes and channels for ease of use.
End of explanation
D = {}
for s in data.keys():
D[s] = {}
for c in data[s]['channels']:
D[s][c] = ds.read_psi_to_dict(data[s]['path'],c)
Explanation: We can now read in all datafiles specified by the data dictionary above.
End of explanation
# --------------------------------
# -------- User input ------------
# --------------------------------
# Pick an integer value for bin number based on results above
anum = 25
# Specify the percentiles which will be used to calculate landmarks
percbins = [50]
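# The control sample key, channel, and angular step below are referenced by lm.calc_bins()
# but are not defined in this excerpt; these are assumed placeholder values -- adjust them
# to match your own dataset.
s_ctrl = 'wt'
c_ctrl = 'AT'
theta_step = np.pi / 4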
Explanation: Calculate landmark bins
End of explanation
lm = ds.landmarks(percbins=percbins, rnull=np.nan)
lm.calc_bins(D[s_ctrl][c_ctrl], anum, theta_step)
print('Alpha bins')
print(lm.acbins)
print('Theta bins')
print(lm.tbins)
Explanation: Calculate landmark bins based on user input parameters and the previously specified control sample.
End of explanation
lmdf = pd.DataFrame()
# Loop through each pair of stype and channels
for s,c in tqdm.tqdm(data_pairs):
print(s,c)
# Calculate landmarks for each sample with this data pair
for k,df in tqdm.tqdm(D[s][c].items()):
lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf)
# Set timestamp for saving data
tstamp = time.strftime("%m-%d-%H-%M",time.localtime())
# Save completed landmarks to a csv file
lmdf.to_csv(tstamp+'_landmarks.csv')
# Save landmark bins to json file
bins = {
'acbins':list(lm.acbins),
'tbins':list(lm.tbins)
}
with open(tstamp+'_landmarks_bins.json', 'w') as outfile:
json.dump(bins, outfile)
Explanation: Calculate landmarks
End of explanation |
683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment - part 2
Now that we have a better understanding of how to set up a basic neural network in Tensorflow, let's see if we can convert our dataset to a classificiation problem, and then rework our neural network to solve it. I will replicate most of our code from the previous assignment below, but leave blank spots where you should implement changes to convert our regression model into a classification one. Look for text descriptions above code blocks explaining the changes that need to be made, and #UPPERCASE COMMENTS where the new code should be written.
Step1: 1. Target data format
The first step is to change the target of the dataset from a continuous variable (the value of the house) to a categorical one. In this case we will change it to have two categories, specifying whether the value of the house is higher or lower than the average.
In the code block below, write code to change the ‘target’ column to a categorical variable instead of a continuous one. This variable should be 1 if the target is higher than the average value, and 0 if it is lower. You can use np.mean() to calculate the average value. Then, you can iterate over all entries in the column, and compare each value to the average to decide if it is higher or lower. Finally, you can use the int() function to convert the True/False values to 0 and 1.
Step2: 2. Target data encoding
Since we are now dealing with a classification problem, our target values need to be encoded using one-hot encoding (OHE) (see Lab 3 for a description of what this is and why it's necessary). In the code block below, use scikit-learn's OneHotEncoder() module to ocnvert the y target array to OHE.
hint
Step3: 3. Perfomance measure
Instead of measuring the average error in the prediction of a continuous variable, we now want our performance measure to be the number of samples for which we guess the right category.
As before, this function takes in an array of predictions and an array of targets. This time, however, each prediction or target is represented by a two-piece array. With the predictions, the two values represent the confidence of the system for choosing either value as the category. Because these predictions are generated through the softmax function, they are guaranteed to add up to 1.0, so they can be interpreted as the percentage of confidence behind each category. In our two category example,
A prediction of [1,0] means complete confidence that the sample belongs in the first category
A prediction of [0,1] means complete confidence that the sample belongs in the second category
A prediction of [0.5,0.5] means the system is split, and cannot clearly decide which category the sample belongs to.
With the targets, the two values are the one-hot encodings generated previously. You can now see how the one-hot encoding actually represents the target values in the same format as the predictions coming from the model. This is helpful because while the model is training all it has to do is try to match the prediction arrays to the encoded targets. Infact, this is exactly what our modified cost function will do.
For our accuracy measure, we want to take these two arrays of predictions and targets, see how many of them match (correct classification), then devide by the total number of predictions to get the ratio of accurate guesses, and multiply by 100.0 to convert it to a percentage.
hints
Step4: 4. Model definition
For the most part, our model definition will stay roughtly the same. The major difference is that the final layer in our network now contains two values, which are interpreted as the confidence that the network has in classifying each input set of data as belonging to either the first or second category.
However, as the raw output of the network, these outputs can take on any value. In order to interpret them for categorization it is typical to use the softmax function, which converts a range of values to a probability distribution along a number of categories. For example, if the outputs from the network from a given input are [1,000,000 and 10], we would like to interpret that as [0.99 and 0.01], or almost full confidence that the sample belongs in the first category. Similarly, if the outputs are closer together, such as 10 and 5, we would like to interpret it as something like [0.7 and 0.3], which shows that the first category is still more likely, but it is not as confident as before. This is exactly what the softmax function does. The exact formulation of the softmax function is not so important, as long as you know that the goal is to take the raw outputs from the neural network, and convert them to a set of values that preserve the relationship between the outputs while summing up to 1.0.
To adapt our code for classification, we simply have to wrap all of our outputs in a tf.nn.softmax() function, which will convert the raw outputs to confidence measures. We will also replace the MSE error function with a cross-entropy function which performs better with classification tasks. Look for comments below for implementation details.
Step5: Now that we have replaced the relevant accuracy measures and loss function, our training process is exactly the same, meaning we can run the same training process and plotting code to visualize the results. The only difference is that with classificiation we are using an accuracy rather than an error measure, so the better our model is performing, the higher the graph should be (higher accuracy is better, while lower error is better). | Python Code:
%matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
'''Since this is a classification problem, we will need to
represent our targets as one-hot encoding vectors (see previous lab).
To do this we will use scikit-learn's OneHotEncoder module
which we import here'''
from sklearn.preprocessing import OneHotEncoder
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
Explanation: Assignment - part 2
Now that we have a better understanding of how to set up a basic neural network in Tensorflow, let's see if we can convert our dataset to a classificiation problem, and then rework our neural network to solve it. I will replicate most of our code from the previous assignment below, but leave blank spots where you should implement changes to convert our regression model into a classification one. Look for text descriptions above code blocks explaining the changes that need to be made, and #UPPERCASE COMMENTS where the new code should be written.
End of explanation
dataset = load_boston()
houses = pd.DataFrame(dataset.data, columns=dataset.feature_names)
houses['target'] = dataset.target
averageValue=np.mean(houses.target)
houses['target'] = (dataset.target > averageValue).astype(int)
# WRITE CODE TO CONVERT
#'TARGET' COLUMN FROM CONTINUOUS TO CATEGORICAL
'''check your work'''
print np.max(houses['target']), "<-- should be 1"
print np.min(houses['target']), "<-- should be 0"
Explanation: 1. Target data format
The first step is to change the target of the dataset from a continuous variable (the value of the house) to a categorical one. In this case we will change it to have two categories, specifying whether the value of the house is higher or lower than the average.
In the code block below, write code to change the ‘target’ column to a categorical variable instead of a continuous one. This variable should be 1 if the target is higher than the average value, and 0 if it is lower. You can use np.mean() to calculate the average value. Then, you can iterate over all entries in the column, and compare each value to the average to decide if it is higher or lower. Finally, you can use the int() function to convert the True/False values to 0 and 1.
End of explanation
houses_array = houses.as_matrix().astype(float)
np.random.shuffle(houses_array)
X = houses_array[:, :-1]
y = houses_array[:, -1]
print(y.shape)
y = y.reshape(-1,1)
print(y[0])
enc = OneHotEncoder(sparse=False)
y = enc.fit_transform(y)
print(y[0])
# USE SCIKIT-LEARN'S ONE-HOT ENCODING MODULE TO
# CONVERT THE y ARRAY OF TARGETS TO ONE-HOT ENCODING.
X = X / X.max(axis=0)
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
'''check your work'''
print y_train.shape[1], "<-- should be 2"
print y_test.shape[1], "<-- should be 2"
print y_train[0], "<-- should be either [0. 1.] or [1. 0.]"
# helper variables
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = y_train.shape[1]
# Hyper-parameters
batch_size = 15
num_hidden_1 = 15
num_hidden_2 = 15
learning_rate = 0.25
training_epochs = 500
dropout_keep_prob = 1 # 0.5 # set to no dropout by default
# variable to control the resolution at which the training results are stored
display_step = 1
Explanation: 2. Target data encoding
Since we are now dealing with a classification problem, our target values need to be encoded using one-hot encoding (OHE) (see Lab 3 for a description of what this is and why it's necessary). In the code block below, use scikit-learn's OneHotEncoder() module to convert the y target array to OHE.
hint: when you create the OneHotEncoder object, pass in the parameter sparse=False to give the resulting data the proper formatting. Each value in y should be a two-part array, either [0,1] or [1,0], depending on the target value.
End of explanation
def accuracy(predictions, targets):
# IMPLEMENT THE NEW ACCURACY MEASURE HERE
max_pred = np.argmax(predictions,1)
max_target = np.argmax(targets,1)
accuracy = np.sum((max_pred==max_target))/float(max_pred.shape[0])*100.0
return accuracy
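# A quick illustrative sanity check of the accuracy measure on hand-made values:
# two of the three predictions below pick the correct (first) category, so the
# expected result is roughly 66.7.
_sanity_preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
_sanity_targets = np.array([[1., 0.], [1., 0.], [1., 0.]])
print accuracy(_sanity_preds, _sanity_targets), "<-- should be ~66.7"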
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
Explanation: 3. Performance measure
Instead of measuring the average error in the prediction of a continuous variable, we now want our performance measure to be the number of samples for which we guess the right category.
As before, this function takes in an array of predictions and an array of targets. This time, however, each prediction or target is represented by a two-piece array. With the predictions, the two values represent the confidence of the system for choosing either value as the category. Because these predictions are generated through the softmax function, they are guaranteed to add up to 1.0, so they can be interpreted as the percentage of confidence behind each category. In our two category example,
A prediction of [1,0] means complete confidence that the sample belongs in the first category
A prediction of [0,1] means complete confidence that the sample belongs in the second category
A prediction of [0.5,0.5] means the system is split, and cannot clearly decide which category the sample belongs to.
With the targets, the two values are the one-hot encodings generated previously. You can now see how the one-hot encoding actually represents the target values in the same format as the predictions coming from the model. This is helpful because while the model is training all it has to do is try to match the prediction arrays to the encoded targets. In fact, this is exactly what our modified cost function will do.
For our accuracy measure, we want to take these two arrays of predictions and targets, see how many of them match (correct classification), then divide by the total number of predictions to get the ratio of accurate guesses, and multiply by 100.0 to convert it to a percentage.
hints:
numpy's np.argmax() function will give you the position of the largest value in the array along an axis, so executing np.argmax(predictions, 1) will convert the confidence measures to the single most likely category.
once you have a list of single-value predictions, you can compare them using the '==' operator to see how many match (matches result in a 'True' and mismatches result in a 'False')
you can use numpy's np.sum() function to find out the total number of 'True' statements, and divide them by the total number of predictions to get the ratio of accurate predictions.
End of explanation
graph = tf.Graph()
with graph.as_default():
x = tf.placeholder(tf.float32, shape=(None, num_features))
_y = tf.placeholder(tf.float32, shape=(None))
keep_prob = tf.placeholder(tf.float32)
tf_X_test = tf.constant(X_test, dtype=tf.float32)
tf_X_train = tf.constant(X_train, dtype=tf.float32)
W_fc1 = weight_variable([num_features, num_hidden_1])
b_fc1 = bias_variable([num_hidden_1])
W_fc2 = weight_variable([num_hidden_1, num_hidden_2])
b_fc2 = bias_variable([num_hidden_2])
W_fc3 = weight_variable([num_hidden_2, num_outputs])
b_fc3 = bias_variable([num_outputs])
def model(data, keep):
fc1 = tf.nn.sigmoid(tf.matmul(data, W_fc1) + b_fc1)
fc1_drop = tf.nn.dropout(fc1, keep)
fc2 = tf.nn.sigmoid(tf.matmul(fc1_drop, W_fc2) + b_fc2)
fc2_drop = tf.nn.dropout(fc2, keep)
fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3
return fc3
'''for our loss function we still want to get the raw outputs
of the model, but since it no longer represents the actual prediction
we rename the variable to ‘output’'''
output = model(x, keep_prob)
# WHEN WE CALCULATE THE PREDICTIONS, WE NEED TO WRAP EACH OUTPUT IN A
# tf.nn.softmax() FUNCTION. THE FIRST ONE HAS BEEN DONE FOR YOU:
prediction = tf.nn.softmax(output)
test_prediction = tf.nn.softmax(model(tf_X_test, 1.0))
train_prediction = tf.nn.softmax(model(tf_X_train, 1.0))
'''finally, we replace our previous MSE cost function with the
cross-entropy function included in Tensorflow. This function takes in the
raw output of the network and calculates the average loss with the target'''
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(output, _y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
saver = tf.train.Saver()
Explanation: 4. Model definition
For the most part, our model definition will stay roughly the same. The major difference is that the final layer in our network now contains two values, which are interpreted as the confidence that the network has in classifying each input set of data as belonging to either the first or second category.
However, as the raw output of the network, these outputs can take on any value. In order to interpret them for categorization it is typical to use the softmax function, which converts a range of values to a probability distribution along a number of categories. For example, if the outputs from the network for a given input are [1,000,000 and 10], we would like to interpret that as [0.99 and 0.01], or almost full confidence that the sample belongs in the first category. Similarly, if the outputs are closer together, such as 2 and 1, we would like to interpret it as something like [0.73 and 0.27], which shows that the first category is still more likely, but it is not as confident as before. This is exactly what the softmax function does. The exact formulation of the softmax function is not so important, as long as you know that the goal is to take the raw outputs from the neural network, and convert them to a set of values that preserve the relationship between the outputs while summing up to 1.0.
To adapt our code for classification, we simply have to wrap all of our outputs in a tf.nn.softmax() function, which will convert the raw outputs to confidence measures. We will also replace the MSE error function with a cross-entropy function which performs better with classification tasks. Look for comments below for implementation details.
End of explanation
results = []
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
for epoch in range(training_epochs):
indexes = range(num_samples)
random.shuffle(indexes)
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
save_path = saver.save(session, "model_houses_classification.ckpt")
print("Model saved in file: %s" % save_path)
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Maximum test accuracy: %.2f%%" % np.max(df["test_acc"])
Explanation: Now that we have replaced the relevant accuracy measures and loss function, our training process is exactly the same, meaning we can run the same training process and plotting code to visualize the results. The only difference is that with classification we are using an accuracy rather than an error measure, so the better our model is performing, the higher the graph should be (higher accuracy is better, while lower error is better).
End of explanation |
684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here
Step1: Problems
Step2: Problems
Step3: Problems
Step4: If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit in python we use the module lifelines
Step5: Once we've fit the data, we need to do something useful with it. Try to do the following things
Step6: Model selection
Difficult to do with classic tools (here)
Problem | Python Code:
# Imports required by the code below (they were not included in this excerpt)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pymc as pm            # PyMC2-style API (pm.rweibull, pm.MCMC, pm.Matplot, ...)
from datetime import datetime
from numpy import log        # used by the toy Weibull example and weibull_median below
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
Explanation: The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here:
http://jakecoltman.com in the pyData post
It is designed to be roughly similar to the output from DCM's path to conversion
Download the file and transform it into something with the columns:
id,lifetime,age,male,event,search,brand
where lifetime is the total time during which we observed someone without a conversion, and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
End of explanation
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
Explanation: Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$\beta(\log 2)^{1/\alpha}$$
End of explanation
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
Explanation: Problems:
4 - Try adjusting the number of samples used for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
End of explanation
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 14.9
number_of_greater_samples = sum([x >= testing_value for x in medians])
100.0 * (number_of_greater_samples / float(len(medians)))
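# A small generalisation for problem 7: the posterior probability (in %) that the median
# exceeds any chosen threshold. The 16.0 below is just an arbitrary example value.
def prob_median_greater_than(threshold):
    return 100.0 * sum([m >= threshold for m in medians]) / float(len(medians))
print(prob_median_greater_than(16.0))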
Explanation: Problems:
7 - Try testing whether the median is greater than a different value
End of explanation
### Fit a Cox proportional hazards model
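# A minimal sketch of one way to do this with lifelines' CoxPHFitter, using the
# dataframe `df` built above ('lifetime' as duration, 'event' as the event indicator).
from lifelines import CoxPHFitter

cf = CoxPHFitter()
cf.fit(df, duration_col='lifetime', event_col='event')
cf.print_summary()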
Explanation: If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit in python we use the module lifelines:
http://lifelines.readthedocs.io/en/latest/
End of explanation
#### Plot baseline hazard function
#### Predict
#### Plot survival functions for different covariates
#### Plot some odds
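# A possible sketch for the four problems above, assuming the fitted `cf` from the previous
# cell; exact attribute names can vary slightly between lifelines versions, and the two
# hypothetical users below are arbitrary illustrative covariate values.
# 1 - baseline hazard / survival
cf.baseline_hazard_.plot(title='Baseline hazard')
# 2/3 - predicted survival functions for two hypothetical users (columns must match the fit)
young = pd.DataFrame([[1, 20, 1, 1]], columns=['male', 'age', 'search', 'brand'])
old = pd.DataFrame([[1, 60, 1, 1]], columns=['male', 'age', 'search', 'brand'])
surv_young = cf.predict_survival_function(young)
surv_old = cf.predict_survival_function(old)
ax = surv_young.plot()
surv_old.plot(ax=ax)
# 4 - how much more likely the event is for one user than the other by a given time t
t = 10.0
p_young = 1.0 - surv_young[surv_young.index <= t].iloc[-1, 0]
p_old = 1.0 - surv_old[surv_old.index <= t].iloc[-1, 0]
print(p_old / p_young)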
Explanation: Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different set of features
4 - For your results in part 3 calculate how much more likely a death event is for one than the other for a given period of time
End of explanation
#### BMA Coefficient values
#### Different priors
Explanation: Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
End of explanation |
685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
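# Example (illustrative only, not an actual model entry), using one of the valid choices above:
# DOC.set_value("OMIP protocol")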
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
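# Example (illustrative only, not an actual model entry) for a free-text STRING property:
# DOC.set_value("One-paragraph overview of the tracers carried by the ocean biogeochemistry component")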
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
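# Example (illustrative only, not an actual model entry); for a 1.N property the call
# can be repeated once per selected choice, e.g.:
# DOC.set_value("Nitrogen (N)")
# DOC.set_value("Iron (Fe)")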
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Convolutional VAE
Step1: Import TensorFlow and enable Eager execution
Step2: Load the MNIST dataset
Each MNIST image is originally a vector of 784 integers, each of which is between 0-255 and represents the intensity of a pixel. We model each pixel with a Bernoulli distribution in our model, and we statically binarize the dataset.
Step3: Use tf.data to create batches and shuffle the dataset
Step4: Wire up the generative and inference network with tf.keras.Sequential
In our VAE example, we use two small ConvNets for the generative and inference network. Since these neural nets are small, we use tf.keras.Sequential to simplify our code. Let $x$ and $z$ denote the observation and latent variable respectively in the following descriptions.
Generative Network
This defines the generative model which takes a latent encoding as input, and outputs the parameters for a conditional distribution of the observation, i.e. $p(x|z)$. Additionally, we use a unit Gaussian prior $p(z)$ for the latent variable.
Inference Network
This defines an approximate posterior distribution $q(z|x)$, which takes as input an observation and outputs a set of parameters for the conditional distribution of the latent representation. In this example, we simply model this distribution as a diagonal Gaussian. In this case, the inference network outputs the mean and log-variance parameters of a factorized Gaussian (log-variance instead of the variance directly is for numerical stability).
Reparameterization Trick
During optimization, we can sample from $q(z|x)$ by first sampling from a unit Gaussian, and then multiplying by the standard deviation and adding the mean. This ensures the gradients could pass through the sample to the inference network parameters.
Network architecture
For the inference network, we use two convolutional layers followed by a fully-connected layer. In the generative network, we mirror this architecture by using a fully-connected layer followed by three convolution transpose layers (a.k.a. deconvolutional layers in some contexts). Note, it's common practice to avoid using batch normalization when training VAEs, since the additional stochasticity due to using mini-batches may aggravate instability on top of the stochasticity from sampling.
Step5: Define the loss function and the optimizer
VAEs train by maximizing the evidence lower bound (ELBO) on the marginal log-likelihood
Step6: Training
We start by iterating over the dataset
During each iteration, we pass the image to the encoder to obtain a set of mean and log-variance parameters of the approximate posterior $q(z|x)$
We then apply the reparameterization trick to sample from $q(z|x)$
Finally, we pass the reparameterized samples to the decoder to obtain the logits of the generative distribution $p(x|z)$
Note
Step7: Display an image using the epoch number
Step8: Generate a GIF of all the saved images.
Step9: To downlod the animation from Colab uncomment the code below | Python Code:
# to generate gifs
!pip install imageio
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Convolutional VAE: An example with tf.keras and eager
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/cvae.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/cvae.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook demonstrates how to generate images of handwritten digits using tf.keras and eager execution by training a Variational Autoencoder. (VAE, [1], [2]).
End of explanation
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
tfe = tf.contrib.eager
tf.enable_eager_execution()
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
Explanation: Import TensorFlow and enable Eager execution
End of explanation
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')
# Normalizing the images to the range of [0., 1.]
train_images /= 255.
test_images /= 255.
# Binarization
train_images[train_images >= .5] = 1.
train_images[train_images < .5] = 0.
test_images[test_images >= .5] = 1.
test_images[test_images < .5] = 0.
TRAIN_BUF = 60000
BATCH_SIZE = 100
TEST_BUF = 10000
Explanation: Load the MNIST dataset
Each MNIST image is originally a vector of 784 integers, each of which is between 0-255 and represents the intensity of a pixel. We model each pixel with a Bernoulli distribution in our model, and we statically binarize the dataset.
End of explanation
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(TRAIN_BUF).batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices(test_images).shuffle(TEST_BUF).batch(BATCH_SIZE)
Explanation: Use tf.data to create batches and shuffle the dataset
End of explanation
class CVAE(tf.keras.Model):
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.inference_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation=tf.nn.relu),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation=tf.nn.relu),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.generative_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation=tf.nn.relu),
tf.keras.layers.Conv2DTranspose(
filters=32,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation=tf.nn.relu),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME"),
]
)
def sample(self, eps=None):
if eps is None:
eps = tf.random_normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.inference_net(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random_normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.generative_net(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
Explanation: Wire up the generative and inference network with tf.keras.Sequential
In our VAE example, we use two small ConvNets for the generative and inference network. Since these neural nets are small, we use tf.keras.Sequential to simplify our code. Let $x$ and $z$ denote the observation and latent variable respectively in the following descriptions.
Generative Network
This defines the generative model which takes a latent encoding as input, and outputs the parameters for a conditional distribution of the observation, i.e. $p(x|z)$. Additionally, we use a unit Gaussian prior $p(z)$ for the latent variable.
Inference Network
This defines an approximate posterior distribution $q(z|x)$, which takes as input an observation and outputs a set of parameters for the conditional distribution of the latent representation. In this example, we simply model this distribution as a diagonal Gaussian. In this case, the inference network outputs the mean and log-variance parameters of a factorized Gaussian (log-variance instead of the variance directly is for numerical stability).
Reparameterization Trick
During optimization, we can sample from $q(z|x)$ by first sampling from a unit Gaussian, and then multiplying by the standard deviation and adding the mean. This ensures the gradients could pass through the sample to the inference network parameters.
Network architecture
For the inference network, we use two convolutional layers followed by a fully-connected layer. In the generative network, we mirror this architecture by using a fully-connected layer followed by three convolution transpose layers (a.k.a. deconvolutional layers in some contexts). Note, it's common practice to avoid using batch normalization when training VAEs, since the additional stochasticity due to using mini-batches may aggravate instability on top of the stochasticity from sampling.
End of explanation
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
def compute_loss(model, x):
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
def compute_gradients(model, x):
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
return tape.gradient(loss, model.trainable_variables), loss
optimizer = tf.train.AdamOptimizer(1e-4)
def apply_gradients(optimizer, gradients, variables, global_step=None):
optimizer.apply_gradients(zip(gradients, variables), global_step=global_step)
Explanation: Define the loss function and the optimizer
VAEs train by maximizing the evidence lower bound (ELBO) on the marginal log-likelihood:
$$\log p(x) \ge \text{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right].$$
In practice, we optimize the single sample Monte Carlo estimate of this expectation:
$$\log p(x| z) + \log p(z) - \log q(z|x),$$
where $z$ is sampled from $q(z|x)$.
Note: we could also analytically compute the KL term, but here we incorporate all three terms in the Monte Carlo estimator for simplicity.
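For reference, a minimal sketch (an editorial addition, not part of the original notebook) of that closed-form KL term for a diagonal Gaussian posterior against the unit Gaussian prior, which could stand in for the logpz - logqz_x Monte Carlo terms:
def analytic_kl(mean, logvar):
  # KL( N(mean, diag(exp(logvar))) || N(0, I) ), summed over the latent dimensions
  return -0.5 * tf.reduce_sum(1. + logvar - tf.square(mean) - tf.exp(logvar), axis=1)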
End of explanation
epochs = 100
latent_dim = 50
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random_normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_input):
predictions = model.sample(test_input)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
generate_and_save_images(model, 0, random_vector_for_generation)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
gradients, loss = compute_gradients(model, train_x)
apply_gradients(optimizer, gradients, model.trainable_variables)
end_time = time.time()
if epoch % 1 == 0:
loss = tfe.metrics.Mean()
for test_x in test_dataset.make_one_shot_iterator():
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, '
'time elapse for current epoch {}'.format(epoch,
elbo,
end_time - start_time))
generate_and_save_images(
model, epoch, random_vector_for_generation)
Explanation: Training
We start by iterating over the dataset
During each iteration, we pass the image to the encoder to obtain a set of mean and log-variance parameters of the approximate posterior $q(z|x)$
We then apply the reparameterization trick to sample from $q(z|x)$
Finally, we pass the reparameterized samples to the decoder to obtain the logits of the generative distribution $p(x|z)$
Note: Since we use the dataset loaded by keras with 60k datapoints in the training set and 10k datapoints in the test set, our resulting ELBO on the test set is slightly higher than reported results in the literature which uses dynamic binarization of Larochelle's MNIST.
Generate Images
After training, it is time to generate some images
We start by sampling a set of latent vectors from the unit Gaussian prior distribution $p(z)$
The generator will then convert the latent sample $z$ to logits of the observation, giving a distribution $p(x|z)$
Here we plot the probabilities of Bernoulli distributions
End of explanation
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(epochs) # Display images
Explanation: Display an image using the epoch number
End of explanation
with imageio.get_writer('cvae.gif', mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
# this is a hack to display the gif inside the notebook
os.system('cp cvae.gif cvae.gif.png')
display.Image(filename="cvae.gif.png")
Explanation: Generate a GIF of all the saved images.
End of explanation
#from google.colab import files
#files.download('cvae.gif')
Explanation: To download the animation from Colab uncomment the code below:
End of explanation |
687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name
Step1: Part One
Step2: plot xkcd style
Step3: Part Two
Step4: Calculating the letter frequency for male names.
Step5: Calculating the letter frequency for female names.
Step6: Calculating the last letter frequency for male names.
Step7: Calculating the last letter frequency for female names.
Step8: Plot for each letter showing the frequency of that letter as the last letter for both male and female names.
<br>I use the OrderedDict function from collections here to arrange the letters present in counter in ascending order for plotting.
Step9: Part Three
Step10: Preparing data for the 1880s.
<br>A counter for last letter frequencies.
Step11: Preparing data for the 1940s.
<br>A counter for last letter frequencies.
Step12: Preparing data for the 1990s.
<br>A counter for last letter frequencies.
Step13: Converting the frequency data from counter to dataframes after sorting the letters alphabetically.
Step14: Aggregating all required decades (1880s, 1940s, 1990s) into a single dataframe and then into a pivot table for ease in plotting graphs.
Step15: Plot of last letter of females for 1880s , 1940s, 1990s, and for all years (from part 2).
Step16: The graph has extreme variations in highs and lows.
<br>Plotting the logarithmic scale of frequencies takes care of this and makes it easier for comparison.
Step17: Evaluate how stable this statistic is. Speculate on why it is stable, if it is, or on what demographic facts might explain any changes, if there are any.
We can normalize the table by total births in each particular decade to compute a new table containing the proportion
of total births for each decade ending in each letter. | Python Code:
#import required libraries
import pandas as pd
import numpy as np
#for counter operations
from collections import Counter
#for plotting graphs
import matplotlib.pyplot as plt
# Make the graphs a bit prettier, and bigger
pd.set_option('display.mpl_style', 'default')
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
%matplotlib inline
Explanation: Name: Vinit Nalawade
End of explanation
#informing python that ',' indicates thousands
df = pd.read_clipboard(thousands = ',')
df
#plot male and female births for the years covered in the data
plt.plot(df['Year of birth'], df['Male'], c = 'b', label = 'Male')
plt.plot(df['Year of birth'], df['Female'],c = 'r', label = 'Female')
plt.legend(loc = 'upper left')
#plt.axis([1880, 2015, 0, 2500000])
plt.xlabel('Year of birth')
plt.ylabel('No. of births')
plt.title('Total births by Sex and Year')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
Explanation: Part One: Go to the <a href = "https://www.ssa.gov/oact/babynames/numberUSbirths.html">Social Security Administration US births website</a> and select the births table there and copy it to your clipboard. Use the pandas read_clipboard function to read the table into Python, and use matplotlib to plot male and female births for the years covered in the data.
End of explanation
years = range(1881,2011)
pieces = []
columns = ['name','sex','births']
for year in years:
path = 'names/yob{0:d}.txt'.format(year)
frame = pd.read_csv(path,names=columns)
frame['year'] = year
pieces.append(frame)
names = pd.concat(pieces, ignore_index=True)
names.head()
names.tail()
Explanation: plot xkcd style :)
with plt.xkcd():
#plt.plot(df['Year of birth'], df['Male'], c = 'b', label = 'Male')
#plt.plot(df['Year of birth'], df['Female'],c = 'r', label = 'Female')
#plt.legend(loc = 'upper left')
#plt.xlim(xmax = 2015)
#plt.xlabel('Year of birth')
#plt.ylabel('No. of births')
#plt.title('Male and Female births from 1880 to 2015')
#plt.show()
In the same notebook, use Python to get a list of male and female names from these files. This data is broken down by year of birth.
<br>The files contain names data of the years from 1881 to 2010.
<br>Aggregating this data in "names" dataframe below.
End of explanation
female_names = names[names.sex == 'F']
male_names = names[names.sex == 'M']
print "For Female names"
print female_names.head()
print "\nFor Male names"
print male_names.tail()
female_list = list(female_names['name'])
male_list = list(male_names['name'])
Explanation: Part Two: Aggregate the data for all years (see the examples in the Pandas notebooks). Use Python Counters to get letter frequencies for male and female names. Use matplotlib to draw a plot that for each letter (x-axis) shows the frequency of that letter (y-axis) as the last letter for both for male and female names.
The data is already aggregated in "names" dataframe.
<br>Getting separate dataframes for Males and Females.
<br>Defining a List for male and female names.
End of explanation
male_letter_freq = Counter()
#converting every letter to lowercase
for name in map(lambda x:x.lower(),male_names['name']):
for i in name:
male_letter_freq[i] += 1
male_letter_freq
Explanation: Calculating the letter frequency for male names.
End of explanation
female_letter_freq = Counter()
#converting every letter to lowercase
for name in map(lambda x:x.lower(),female_names['name']):
for i in name:
female_letter_freq[i] += 1
female_letter_freq
Explanation: Calculating the letter frequency for female names.
End of explanation
male_last_letter_freq = Counter()
for name in male_names['name']:
male_last_letter_freq[name[-1]] += 1
male_last_letter_freq
Explanation: Calculating the last letter frequency for male names.
End of explanation
female_last_letter_freq = Counter()
for name in female_names['name']:
female_last_letter_freq[name[-1]] += 1
female_last_letter_freq
Explanation: Calculating the last letter frequency for female names.
End of explanation
#for ordering items of counter in ascending order
from collections import OrderedDict
#plot of last letter frequency of male names in ascending order of letters
male_last_letter_freq_asc = OrderedDict(sorted(male_last_letter_freq.items()))
plt.bar(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.values(), align='center')
plt.xticks(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.keys())
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.title('Frequency of last letter for Male names')
plt.show()
#plot of last letter frequency of female names in ascending order of letters
female_last_letter_freq_asc = OrderedDict(sorted(female_last_letter_freq.items()))
plt.bar(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), align='center')
plt.xticks(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.keys())
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.title('Frequency of last letter for Female names')
plt.show()
female_last_letter_freq_asc = OrderedDict(sorted(female_last_letter_freq.items()))
plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'Female')
plt.plot(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.values(), c = 'b', label = 'Male')
plt.xticks(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.keys())
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.legend(loc = 'upper right')
plt.title('Frequency of last letter in names by Sex')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
Explanation: Plot for each letter showing the frequency of that letter as the last letter for both male and female names.
<br>I use the OrderedDict function from collections here to arrange the letters present in counter in ascending order for plotting.
End of explanation
#to get the decade lists
#female_1880 = female_names[female_names['year'] < 1890]
#female_1890 = female_names[(female_names['year'] >= 1890) & (female_names['year'] < 1900)]
#female_1900 = female_names[(female_names['year'] >= 1900) & (female_names['year'] < 1910)]
#female_1910 = female_names[(female_names['year'] >= 1910) & (female_names['year'] < 1920)]
#female_1920 = female_names[(female_names['year'] >= 1920) & (female_names['year'] < 1930)]
#female_1930 = female_names[(female_names['year'] >= 1930) & (female_names['year'] < 1940)]
#female_1940 = female_names[(female_names['year'] >= 1940) & (female_names['year'] < 1950)]
#female_1950 = female_names[(female_names['year'] >= 1950) & (female_names['year'] < 1960)]
#female_1960 = female_names[(female_names['year'] >= 1960) & (female_names['year'] < 1970)]
#female_1970 = female_names[(female_names['year'] >= 1970) & (female_names['year'] < 1980)]
#female_1980 = female_names[(female_names['year'] >= 1980) & (female_names['year'] < 1990)]
#female_1990 = female_names[(female_names['year'] >= 1990) & (female_names['year'] < 2000)]
#female_2000 = female_names[(female_names['year'] >= 2000) & (female_names['year'] < 2010)]
#female_2010 = female_names[female_names['year'] >= 2010]
#another easier way to get the decade lists for females
female_1880 = female_names[female_names.year.isin(range(1880,1890))]
female_1890 = female_names[female_names.year.isin(range(1890,1900))]
female_1900 = female_names[female_names.year.isin(range(1900,1910))]
female_1910 = female_names[female_names.year.isin(range(1910,1920))]
female_1920 = female_names[female_names.year.isin(range(1920,1930))]
female_1930 = female_names[female_names.year.isin(range(1930,1940))]
female_1940 = female_names[female_names.year.isin(range(1940,1950))]
female_1950 = female_names[female_names.year.isin(range(1950,1960))]
female_1960 = female_names[female_names.year.isin(range(1960,1970))]
female_1970 = female_names[female_names.year.isin(range(1970,1980))]
female_1980 = female_names[female_names.year.isin(range(1980,1990))]
female_1990 = female_names[female_names.year.isin(range(1990,2000))]
female_2000 = female_names[female_names.year.isin(range(2000,2010))]
female_2010 = female_names[female_names.year.isin(range(2010,2011))] #just the year 2010 present
#to verify sorting of data
print female_1880.head()
print female_1880.tail()
Explanation: Part Three: Now do just female names, but aggregate your data in decades (10 year) increments. Produce a plot that contains the 1880s line, the 1940s line, and the 1990s line, as well as the female line for all years aggregated together from Part Two. Evaluate how stable this statistic is. Speculate on why it is is stable, if it is, or on what demographic facts might explain any changes, if there are any. Turn in your ipython notebook file, showing the code you used to complete parts One, Two, an Three.
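As an aside, a hedged sketch (not the author's approach) of bucketing years into decades with floor division instead of slicing each decade by hand, assuming the female_names dataframe defined above:
female_names_decades = female_names.copy()
female_names_decades['decade'] = (female_names_decades['year'] // 10) * 10
female_names_decades.groupby('decade')['births'].sum().head()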
End of explanation
female_1880_freq = Counter()
for name in female_1880['name']:
female_1880_freq[name[-1]] += 1
female_1880_freq
Explanation: Preparing data for the 1880s.
<br>A counter for last letter frequencies.
End of explanation
female_1940_freq = Counter()
for name in female_1940['name']:
female_1940_freq[name[-1]] += 1
female_1940_freq
Explanation: Preparing data for the 1940s.
<br>A counter for last letter frequencies.
End of explanation
female_1990_freq = Counter()
for name in female_1990['name']:
female_1990_freq[name[-1]] += 1
female_1990_freq
Explanation: Preparing data for the 1990s.
<br>A counter for last letter frequencies.
End of explanation
#for 1880s
first = pd.DataFrame.from_dict((OrderedDict(sorted(female_1880_freq.items()))), orient = 'index').reset_index()
first.columns = ['letter','frequency']
first['decade'] = '1880s'
print first.head()
#for 1940s
second = pd.DataFrame.from_dict((OrderedDict(sorted(female_1940_freq.items()))), orient = 'index').reset_index()
second.columns = ['letter','frequency']
second['decade'] = '1940s'
print second.head()
#for 1990s
third = pd.DataFrame.from_dict((OrderedDict(sorted(female_1990_freq.items()))), orient = 'index').reset_index()
third.columns = ['letter','frequency']
third['decade'] = '1990s'
print third.head()
Explanation: Converting the frequency data from counter to dataframes after sorting the letters alphabetically.
End of explanation
#Aggregate 1880s, 1940s and 1990s frequencies
frames = [first, second, third]
columns = ["letter","frequency", "decade"]
req_decades = pd.DataFrame(pd.concat(frames))
req_decades.columns = columns
print req_decades.head()
print req_decades.tail()
#Get data into a pivot table for ease in plotting
decades_table = pd.pivot_table(req_decades, index=['letter'], values=['frequency'], columns=['decade'])
decades_table.head()
Explanation: Aggregating all required decades (1880s, 1940s, 1990s) into a single dataframe and then into a pivot table for ease in plotting graphs.
End of explanation
#plot the decades as bars and the female line for all years as a line
c = ['m','g','c']
decades_table['frequency'].plot(kind = 'bar', rot = 0,color = c, title = 'Frequency of Last letter of Female names by Female Births')
#the female line for all years taken from part 2
plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births')
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.legend(loc = 'best')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
Explanation: Plot of last letter of females for 1880s , 1940s, 1990s, and for all years (from part 2).
End of explanation
#plot the decades as bars and the female line for all years as a line
c = ['m','g','c']
decades_table['frequency'].plot(kind = 'bar', rot = 0, logy = 'True',color = c, title = 'Log(Frequency) of Last letter of Female names by Female Births')
#the female line for all years taken from part 2
plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births')
plt.xlabel('Letters')
plt.ylabel('Log(Frequency)')
plt.legend(loc = 'best')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
Explanation: The graph has extreme variations in highs and lows.
<br>Plotting the logarithmic scale of frequencies takes care of this and makes it easier for comparison.
End of explanation
decades_table.sum()
#plot the decades as bars and the female line for all years as a line
c = ['m','g','c']
decades_table_prop = decades_table/decades_table.sum().astype(float)
decades_table_prop['frequency'].plot(kind = 'bar', rot = 0,color = c, title = 'Normalized Frequency of Last letter of Female names by Female Births')
#the female line for all years taken from part 2
#plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births')
plt.xlabel('Letters')
plt.ylabel('Normalized Frequency')
plt.legend(loc = 'best')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
Explanation: Evaluate how stable this statistic is. Speculate on why it is stable, if it is, or on what demographic facts might explain any changes, if there are any.
We can normalize the table by total births in each particular decade to compute a new table containing the proportion
of total births for each decade ending in each letter.
End of explanation |
688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure 4 csv data generation
Figure data consolidation for Figure 4, which shows patterns entropy for taxa and across the phylogeny
Figure 4a
Step1: Figure 4b
Step2: Figure 4c
Step3: Figure 4bc | Python Code:
# read in exported table for genus
fig4a_genus = pd.read_csv('../../../data/07-entropy-and-covariation/genus-level-distribution.csv', header=0)
# read in exported table for otu
fig4a_otu = pd.read_csv('../../../data/07-entropy-and-covariation/otu-level-distribution-400.csv', header=0)
Explanation: Figure 4 csv data generation
Figure data consolidation for Figure 4, which shows patterns entropy for taxa and across the phylogeny
Figure 4a: entropy for genera and entropy for representative tag sequences
End of explanation
# read in exported table
fig4b = pd.read_csv('../../../data/07-entropy-and-covariation/entropy_by_phylogeny_c20.csv', header=0)
Explanation: Figure 4b: Entropy as a function of phylogenetic width per clade
End of explanation
# read in exported table
fig4c = pd.read_csv('../../../data/07-entropy-and-covariation/entropy_by_taxonomy_c20.csv', header=0)
Explanation: Figure 4c: Entropy as a function of taxonomic level
End of explanation
# read in exported table
fig4bc = pd.read_csv('../../../data/07-entropy-and-covariation/entropy_per_tag_sequence_s10.csv', header=0)
fig4 = pd.ExcelWriter('Figure4_data.xlsx')
fig4a_genus.to_excel(fig4,'Fig-4a_genus')
fig4a_otu.to_excel(fig4,'Fig-4a_sequence')
fig4b.to_excel(fig4,'Fig-4b')
fig4c.to_excel(fig4,'Fig-4c')
fig4bc.to_excel(fig4,'Fig-4bc_violin')
fig4.save()
Explanation: Figure 4bc: Entropy values per OTU for violin plot
End of explanation |
689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graded = 9/10
Homework 11
Step1: 4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
Step2: 5. "Violation time" is... not a time. Make it a time.
Step3: 6, There sure are a lot of colors of cars, too bad so many of them are the same. Make "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice.
Step4: 7. Join the data with the Parking Violations Code dataset from the NYC Open Data site.
Step5: 8. How much money did NYC make off of parking violations?
Step6: Now, we know the locations of the violation and from the merged dataset we know that there the fine rates vary upon the location of the violation, with certain rates for everything under 96th street in Manhattan and other rates for elsewhere. So using the Violation County numbers, and the assumption that NY county is everything under 96th - we calculate.
Step7: 9. What's the most lucrative kind of parking violation? The most frequent?
Step8: 10. New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles?
Step9: 11. Make a chart of the top few.
Step10: We can then inspect what 21,38,14 violation codes are.
12. What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am.
Step11: From the graph above, it is very evident that most people get tickets in the 6 AM to 12 PM ie the morning slot. More than any other segments of the day.
13. What's the average ticket cost in NYC?
Step12: 14. Make a graph of the number of tickets per day.
Step13: Looks like, the month of August is when the Police seem to have gone on a fining spree.
15. Make a graph of the amount of revenue collected per day.
<< Not too sure on how to get this done>>
16. Manually construct a dataframe out of https
Step14: 17. What's the parking-ticket in dollars-per-licensed-driver in each borough of NYC? Do this with pandas and the dataframe you just made, not with your head! | Python Code:
import pandas as pd
dates=['Issue Date', 'Vehicle Expiration Date'] #Importing dates as datetime
col_types={'Plate ID': 'str','Date First Observed':'str'} #Importing Plate ID and the Date First Observed as a string, because it has to be made into a time by a function.
df=pd.read_csv("small-violations.csv",dtype=col_types,parse_dates=dates,na_values={'Date First Observed'==0,'Vehicle Expiration Date'==88888888,88888888.0,88880088},infer_datetime_format=True)
df.dtypes #Finding out if our datatype import has worked or not.
df.columns #Identifying all the columns of the dataframe.
Explanation: Graded = 9/10
Homework 11: PARKING TICKETS
1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes!
2. I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.
End of explanation
## WRITING A FUNCTION TO CONVERT STRING INTO A DATE
def string_to_date(string):
from dateutil import parser
if pd.isnull(string):
return None
else:
dt = parser.parse(string)
return dt.date()
string_to_date('20160808') #Testing it out.
df['Date First Observed']= df['Date First Observed'].apply(string_to_date) #Applying it to the dataframe column
Explanation: 4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
End of explanation
def string_to_time(string): #CONVERTING THE VIOLATION TIME STRING INTO TIME
from dateutil import parser
import re
if pd.isnull(string):
return None
if string =='Nan' or string =='nan' or string =='0':
return None
if string[0:4].isnumeric()==True:
if int(string[0:2]) <=12 and int(string[2:4])<=59:
regex=re.search(r"\d\d\d\d[AaPp]",string)
if regex:
time=string[0:2]+":"+string[2:4]+" "+string[4]+"M"
return parser.parse(time).time()
else:
return None
def float_to_int(float): #THE VIOLATION CODE COLUMN IS ACTUALLY A FLOAT, SO WE'RE CONVERTING INTO AN INTEGER FIRST.
if pd.isnull(float):
return None
else:
return int(float)
df['Violation Code']=df['Violation Code'].apply(float_to_int)
df['Violation Time']=df['Violation Time'].apply(string_to_time)
df['Violation Time'].head() #TESTING IF IT WORKED
Explanation: 5. "Violation time" is... not a time. Make it a time.
End of explanation
df['Vehicle Color'].value_counts() #LOOKING AT COLORS IN THE ORIGINAL DATASET
df['Vehicle Color'] = df['Vehicle Color'].replace(['WHT', 'WH','WT','WHI'], 'WHITE')
df['Vehicle Color'] = df['Vehicle Color'].replace(['GRAY', 'GY','GRY'], 'GREY')
df['Vehicle Color'] = df['Vehicle Color'].replace(['BLK', 'BK'], 'BLACK')
df['Vehicle Color'] = df['Vehicle Color'].replace('BL', 'BLUE')
df['Vehicle Color'] = df['Vehicle Color'].replace(['BR', 'BN','BRWN'], 'BROWN')
df['Vehicle Color'] = df['Vehicle Color'].replace('RD', 'RED')
df['Vehicle Color'] = df['Vehicle Color'].replace(['GR', 'GN','GRN'], 'GREEN')
df['Vehicle Color'] = df['Vehicle Color'].replace('TN', 'TAN')
df['Vehicle Color'] = df['Vehicle Color'].replace('GL', 'GOLD')
df['Vehicle Color'] = df['Vehicle Color'].replace('BRN', 'BROWN')
df['Vehicle Color'] = df['Vehicle Color'].replace(['YW', 'YELLO','YELL'], 'YELLOW')
df['Vehicle Color'] = df['Vehicle Color'].replace(['BL', 'BLU'], 'BLUE')
df['Vehicle Color'] = df['Vehicle Color'].replace('MR', 'MAROON')
df['Vehicle Color'] = df['Vehicle Color'].replace(['SIL', 'SILVR','SILVE','SILV'], 'SILVER')
df['Vehicle Color'] = df['Vehicle Color'].replace(['OR', 'ORANG'], 'ORANGE')
df['Vehicle Color'].value_counts() #LOOKING AT THE COLORS AFTER CLEANING UP, MUCH BETTER!
Explanation: 6, There sure are a lot of colors of cars, too bad so many of them are the same. Make "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice.
End of explanation
#READING PARKING VIOLATION CODES FROM NYC DATASET
codesdf=pd.read_csv("DOF_Parking_Violation_Codes.csv",dtype={'Manhattan\xa0 96th St. & below':int, 'All Other Areas': int})
codesdf.columns
codesdf['Manhattan\xa0 96th St. & below'].value_counts() #CHECKING IF THE IMPORT WORKED WELL
newdf=df.join(codesdf, on='Violation Code', how='left') #MERGING THE TWO DATAFRAMES INTO A NEW DATAFRAME Unimaginatively CALLED NEWDF
newdf.head() #DID IT WORK? SEEMS TO HAVE, THE NANS IN THE END OF THIS FRAME ARE THERE IN THE ORIGINAL ONE TOO.
newdf.columns #THE NEW COLUMNS SEEM TO HAVE BEEN ADDED
Explanation: 7. Join the data with the Parking Violations Code dataset from the NYC Open Data site.
End of explanation
newdf['Violation County'].value_counts() #LET US SEE THE VIOLATION COUNTY DATA, THIS NEEDS TO BE CLEANED UP A BIT
newdf['Violation County'] = newdf['Violation County'].replace('BX', 'BRONX')
newdf['Violation County'] = newdf['Violation County'].replace(['R', 'RICH'], 'RICHMOND')
newdf['Violation County'] = newdf['Violation County'].replace('K', 'KINGS')
newdf['Violation County'].value_counts() #ALL CLEANED
Explanation: 8. How much money did NYC make off of parking violations?
End of explanation
manhattanviolations=newdf[newdf['Violation County']=='NY']
#FINES COLLECTED IN MANHATTAN
manhattanviolations['Manhattan\xa0 96th St. & below'].sum()
allotherviolations=newdf[newdf['Violation County']!= 'NY']
#FINES NOT COLLECTED IN MANHATTAN
allotherviolations['All Other Areas'].sum()
#FINES COLLECTED IN ALL OF NYC
manhattanviolations['Manhattan\xa0 96th St. & below'].sum()+allotherviolations['All Other Areas'].sum()
Explanation: Now, we know the locations of the violation and from the merged dataset we know that there the fine rates vary upon the location of the violation, with certain rates for everything under 96th street in Manhattan and other rates for elsewhere. So using the Violation County numbers, and the assumption that NY county is everything under 96th - we calculate.
End of explanation
newdf['Manhattan\xa0 96th St. & below'].value_counts()
#IT IS EVIDENT THAT THE 265 DOLLARS IS THE MOST LUCRATIVE FINE in terms of monetary amount, but 115 DOLLARS IS THE MOST COMMON
lucrative=newdf[newdf['Manhattan\xa0 96th St. & below']==265]
lucrative['DEFINITION'].value_counts()
#TRACTOR TRAILER PARKING SEEMS TO MAKE MONIES
frequent=newdf['DEFINITION'].value_counts()
#IDENTIFYING THE MOST FREQUENT SETS OF VIOLATIONS
frequent.head() #THE TOP FIVE SETS OF VIOLATIONS
Explanation: 9. What's the most lucrative kind of parking violation? The most frequent?
End of explanation
nonnyviolations=newdf[newdf['Registration State']!= 'NY']
nonnymanhattanviolations=nonnyviolations[nonnyviolations['Violation County']=='NY']
nonnymanhattanviolations['Manhattan\xa0 96th St. & below'].sum()
#MONEY MADE BY NON NY VEHICLES IN MANHATTAN
nonnymanhattanviolations=nonnyviolations[nonnyviolations['Violation County']!='NY']
nonnymanhattanviolations['All Other Areas'].sum()
#MONEY MADE BY NON NY VEHICLES OUTSIDE MANHATTAN
nonnymanhattanviolations['Manhattan\xa0 96th St. & below'].sum()+nonnymanhattanviolations['All Other Areas'].sum()
#TOTAL MONEY MADE BY NON NY VEHICLES
Explanation: 10. New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
toptenviolations=nonnyviolations['Violation Code'].value_counts()
toptenviolations.head(10).plot(kind='bar')
#I USE THE VIOLATION CODE INSTEAD OF ACTUAL VIOLATION NAME BECAUSE THE NAMES OF SOME OF THEM ARE TOO LONG.
Explanation: 11. Make a chart of the top few.
End of explanation
newdf['Violation Time'].value_counts().plot(figsize=(16, 8))
Explanation: We can then inspect what 21,38,14 violation codes are.
12. What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am.
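A hedged sketch (one possible way, assuming the Violation Time column holds datetime.time objects as produced in question 5) of actually binning the tickets into the four blocks mentioned above:
hours = newdf['Violation Time'].dropna().apply(lambda t: t.hour)
blocks = pd.cut(hours, bins=[0, 6, 12, 18, 24], right=False,
                labels=['12am-6am', '6am-12pm', '12pm-6pm', '6pm-12am'])
blocks.value_counts().plot(kind='bar')  #counts of tickets per block of the day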
End of explanation
newdf['Manhattan\xa0 96th St. & below'].describe()
#AVERAGE COST OF TICKETS IN MANHATTAN is 95 DOLALRS
newdf['All Other Areas'].describe()
#AVERAGE COST OF TICKET ELSEWHERE IS 85 DOLLARS
import numpy
a=[84.716996,94.962425]
numpy.mean(a)
#SO THE AVERAGE TICKET COST IN NEW YORK CITY IS ABOUT 90 DOLLARS
Explanation: From the graph above, it is very evident that most people get tickets in the 6 AM to 12 PM ie the morning slot. More than any other segments of the day.
13. What's the average ticket cost in NYC?
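A hedged alternative (an assumption, not the author's method): weight every ticket by its own fine, using the same under-96th-Street rule as in question 8, instead of averaging the two area means:
import numpy as np
fine = np.where(newdf['Violation County'] == 'NY',
                newdf['Manhattan\xa0 96th St. & below'],
                newdf['All Other Areas'])
pd.Series(fine).mean()  #overall average fine per ticket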
End of explanation
ticketsperday=newdf['Issue Date'].value_counts()
ticketsperday.head(100).plot(figsize=(16, 8))
Explanation: 14. Make a graph of the number of tickets per day.
End of explanation
driversdf=pd.read_csv("drivers.csv")
driversdf.head(6)
Explanation: Looks like the month of August is when the Police seem to have gone on a fining spree.
15. Make a graph of the amount of revenue collected per day.
<< Not too sure on how to get this done - one possible approach is sketched right after this explanation >>
16. Manually construct a dataframe out of https://dmv.ny.gov/statistic/2015licinforce-web.pdf (only NYC boroughts - bronx, queens, manhattan, staten island, brooklyn), having columns for borough name, abbreviation, and number of licensed drivers.
End of explanation
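A hedged sketch for question 15 above (one possible approach, not the author's): attach a per-ticket fine with the same NY-county assumption as in question 8, then sum by Issue Date and plot.
import numpy as np
newdf['fine'] = np.where(newdf['Violation County'] == 'NY',
                         newdf['Manhattan\xa0 96th St. & below'],
                         newdf['All Other Areas'])
revenueperday = newdf.groupby('Issue Date')['fine'].sum()
revenueperday.plot(figsize=(16, 8))  #revenue collected per day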
#MAKING DATAFRAMES FOR EACH OF THESE BOROUGHS/COUNTIES
bronxviolations=newdf[newdf['Violation County']=='BRONX']
brooklynviolations=newdf[newdf['Violation County']=='KINGS']
manhattanviolations=newdf[newdf['Violation County']=='NY']
queensviolations=newdf[newdf['Violation County']=='Q']
statenislandviolations=newdf[newdf['Violation County']=='RICHMOND']
#GETTING THE TOTAL NUMBER OF DRIVERS FROM THE NEW DATA FRAME
numberofdriversinbronx=driversdf.iloc[0]['Total']
numberofdriversinbrooklyn=driversdf.iloc[1]['Total']
numberofdriversinmanhattan=driversdf.iloc[2]['Total']
numberofdriversinqueens=driversdf.iloc[3]['Total']
numberofdriversinstatenisland=driversdf.iloc[4]['Total']
bronxviolations['All Other Areas'].sum()/numberofdriversinbronx #AVERAGE PARKING TICKET PER DRIVER IN BRONX
brooklynviolations['All Other Areas'].sum()/numberofdriversinbrooklyn #AVERAGE PARKING TICKET PER DRIVER IN BROOKLYN
manhattanviolations['Manhattan\xa0 96th St. & below'].sum()/numberofdriversinmanhattan #AVERAGE PARKING TICKET PER DRIVER IN MANHATTAN
queensviolations['All Other Areas'].sum()/numberofdriversinqueens #AVERAGE PARKING TICKET PER DRIVER IN QUEENS
statenislandviolations['All Other Areas'].sum()/numberofdriversinstatenisland #AVERAGE PARKING TICKET PER DRIVER IN STATEN ISLAND
Explanation: 17. What's the parking-ticket in dollars-per-licensed-driver in each borough of NYC? Do this with pandas and the dataframe you just made, not with your head!
End of explanation |
690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
INTRODUCTION TO COMPUTING WITH TENSOR FLOW
Tensor flow can be visualised as computations in a graph with nodes that has definite operations. In simple terms, tensor flow carries out computations in the form of graphs. Nodes of the graph are Ops (Operations)
Step1: Tensor dimensionality is denoted using shape, rank and dimension number. Tensor with rank 0 is a scalar. A rank 1 tensor is a vector. A rank 2 tensor is a matrix, ie you can access element using 2 variables like tensor[i,j]; here the dimension is 2D. In the case of rank 3, access it using tensor[i,j,k];here dimension is 3D.
The tensor elements can be subjected to slice, tile operations. The below example deals with slice command.
Example 2
Step2: Example 3
Step3: Tensor Board provides a visualization of the model that you create. It deals with 2 types of connections
Step4: INTRODUCING PLACEHOLDERS
They are used to provide input whenever tensor flow is made to run computation. To mention a dimension of any length, None can be specified in the shape argument. The below example which deals with matrix multiplication of tensors-generated using the random function, provides an understanding of working with placeholder values.
Example 6
Step5: ACTIVATION FUNCTIONS
In the biological world, Neurons does the decision making process. The below figure shows a 3 layer neural network | Python Code:
#```python
# Objective of the program is to do a simple multiplication on an input tensor of
# constant values
# To use tensor flow; import it
import tensorflow as tf
# tf.constant creates constant values. The below command creates a tensor;shape (2,2)
# with constant values. Be sure that each element are separated by comma delimiter and also
# that the two operands with which operation is performed are of the same data type!
val = tf.constant([[2,4],[7,7]],name = "constantValue")
# tf.constant creates a constant value with no shape
singleValue = tf.constant(7,name = "SingleValueConstant")
# tf.Variable creates a storage variable, where values can be updated.
# In a graph, each variable corresponds to a node
storageVal = tf.Variable(val * singleValue, name = "StorageVariable")
# The below command takes care of the Initialization part.
# In the absence of this command, you would receive a
# 'Failed precondition' error, which is to do with uninitialised variables
initialization = tf.initialize_all_variables()
# USING CONTEXT MANAGER SESSION FOR EXECUTION
with tf.Session() as session:
session.run(initialization)
print "Product is: ",session.run(storageVal)
print "The shape of the product tensor: ",(storageVal.get_shape())
# ```
Explanation: INTRODUCTION TO COMPUTING WITH TENSOR FLOW
TensorFlow can be visualised as computations in a graph whose nodes have definite operations. In simple terms, TensorFlow carries out computations in the form of graphs. Nodes of the graph are Ops (Operations): an Op handles zero or more tensors and outputs zero or more tensors. Tensors are multidimensional arrays / lists. Data is represented using the tensor data structure, which is described by a rank, a shape and a type. It is the tensors that are passed between the nodes in the graph.
TensorFlow computations are mainly carried out using tensors such as variable tensors, constant tensors and placeholder tensors.
Every Op execution and tensor evaluation happens within a session. A session owns all the results generated and the related resources used within that session. Launch a session using the following command:
<center>sess = tf.Session()</center>
If the above command is used, it is important that these resources are released at the end of the session using the close() command. For simplicity, it is advised to use a context manager session so that resource release happens automatically.
To execute an Op (node) in TensorFlow, use the session.run() command as:
<center>print(sess.run(input))</center>
Example 1: Let's consider a simple tensor computation:
End of explanation
import tensorflow as tf
import numpy as np
val1 = tf.placeholder(tf.float32,shape = (8,8))
val2 = tf.placeholder(tf.float32, shape = (8,8))
randomNos2 = np.random.rand(8,8)
product = tf.matmul(val1,val2)
inpSlice = tf.constant([[[1,11,111],[2,22,222]],
[[3,33,333],[4,44,444]],
[[5,55,555],[6,66,666]]])
init = tf.initialize_all_variables()
# Softplus activation function computes log(exp(featurePoints)+1)
softPlus = tf.nn.softplus(product)
with tf.Session() as sess:
sess.run(init)
print (sess.run(product,feed_dict={val1:randomNos2, val2:randomNos2}))
result = sess.run(softPlus,feed_dict={val1:randomNos2, val2:randomNos2})
# The slicing function tf.slice takes the arguments input, begin and size
print (result)
print "SLICING ON TENSORS: "
print "Applying slicing on softplus output",(sess.run(tf.slice(result, [1, 1], [1, 1])))
print "Access1: ",(sess.run(tf.slice(inpSlice,[0,0,0],[1,1,1])))
print "Access2: ",(sess.run(tf.slice(inpSlice,[0,0,0],[1,1,2])))
print "Access3: ",(sess.run(tf.slice(inpSlice,[0,0,0],[1,1,3])))
# Interpretation of slice arguments:
# begin[i,j,k] gives the starting index along each dimension, and
# size[i,j,k] gives how many elements to take along each dimension
# The call below takes 2 slices along the first dimension and 1 element along each of the other two, starting at [0,0,0]
print "Access7: ",(sess.run(tf.slice(inpSlice,[0,0,0],[2,1,1])))
print "Access8: ",(sess.run(tf.slice(inpSlice,[0,0,0],[3,2,1])))
Explanation: Tensor dimensionality is denoted using shape, rank and dimension number. A tensor with rank 0 is a scalar. A rank 1 tensor is a vector. A rank 2 tensor is a matrix, i.e. you can access an element using 2 indices like tensor[i,j]; here the dimension is 2D. In the case of rank 3, access it using tensor[i,j,k]; here the dimension is 3D.
The tensor elements can be subjected to slice and tile operations. The example below deals with the slice command.
Example 2:
End of explanation
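To make the rank/shape terminology above concrete, here is a small illustrative sketch (using the same old-style TensorFlow session API as the rest of these examples; the variable names are made up for the example) that builds rank-0, rank-1 and rank-2 constants and prints their shapes and rank.
```python
import tensorflow as tf

# Rank 0 (scalar), rank 1 (vector) and rank 2 (matrix) constants
scalarTensor = tf.constant(7)                  # shape ()
vectorTensor = tf.constant([1, 2, 3])          # shape (3,)
matrixTensor = tf.constant([[1, 2], [3, 4]])   # shape (2, 2)

with tf.Session() as sess:
    print "Scalar shape: ", scalarTensor.get_shape()
    print "Vector shape: ", vectorTensor.get_shape()
    print "Matrix shape: ", matrixTensor.get_shape()
    print "Matrix rank: ", sess.run(tf.rank(matrixTensor))
```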
import tensorflow as tf
import numpy as np
# The below command creates Op nodes.
# A constant tensor is denoted using a circle.
with tf.name_scope('Values') as scope:
x = tf.Variable(0, name='x')
y = tf.constant(5)
z = x+y
model = tf.initialize_all_variables()
with tf.Session() as session:
for i in range(5):
session.run(model)
print "At Loop: ",i
data = np.random.randint(10, size=10)
for j in range(len(data)-1):
sumVal = tf.add(z,data[0:j+1])
print "Sum is:",
result = session.run(sumVal)
print result
print "Last sumval: ",result
with tf.name_scope('Mean') as scope:
mean = tf.reduce_mean(tf.to_float(sumVal))
print "Reduced Mean is : ",session.run(tf.reduce_mean(tf.to_float((sumVal))))
_=tf.scalar_summary('Mean', mean)
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter("/tmp/mnist_logs", session.graph_def)
model = tf.initialize_all_variables()
session.run(model)
Explanation: Example 3:
TensorFlow offers a variety of basic arithmetic functions to do computations.
A few of them are implemented below for ease.
``` python
The following program shows the basic arithmetic operations using TensorFlow and the # arguments required to be passed.
Takes arguments: x input, y input, name=None (optional; without space)
The x input and y input should be tensors; must be one of the types: float32, float64, uint8, int16, int32, int64, complex64
y should be of the same type as x input; variables are case sensitive
import tensorflow as tf
Input constant float values.
a = tf.constant([[4.0, 8.0,10.0],[20.0,40.0,80.0]])
b = tf.constant([[8.0, 16.0,15.0],[25.0,40.0,50.0]])
The common arithemtic operators can be implemented as below:
sumVal = tf.add (a,b, name = "AdditiveOp")
TrueDiv function produces floating point quotients; Here integer operands are converted to floating point values
If integer values of type int8 or int16: Casted to float32 / If integer values of type int32 or int64: Casted to float64
If a and b are of different types, it gives error
QuoValtrue = tf.truediv(a,b)
The cross function provides the cross product of 2 tensors whose innermost dimension is 3
crossVal = tf.cross(a,b)
Provides remainder with respect to each element in the tensor
modVal = tf.mod(a,b)
Provides absolute value of input tensor
negTensor = tf.constant([-2,-4,-6])
absVal = tf.abs(negTensor)
Produces -1 if negative, 0 if value is 0 & 1 if positive value
The input tensor should be either float32, float64, int32, int64
signVal = tf.sign(negTensor, name = "direction")
Provides the reciprocal of a value
The input tensor should be of type float32,float64,int32,complex64,int64
invVal = tf.inv(a, name = "Inverse")
The below function finds the reciprocal of the square root a number; tensor accepts :float32,float64,int32,complex64,int64
Mathematical functions: tf.sqrt(a) finds the square root; tf.round(a) rounds values to the nearest integer; the tensor should be either float or double,
tf.square(a): finds the square
revSqVal = tf.rsqrt(a, name = "ReverseSqrt")
The below function computes to the power of the tensor value;If output exceeds the limit of datatype allowed, it gives 0 as output for that tensor element
powVal = tf.pow(absVal,absVal)
The exponential value is obtained for tensors of types: float32,float64,complex64,int64
Similarly, natural log is found using tf.log(<input tensor>) function
expVal = tf.exp(a, name = "Exponential")
Other functions include:
tf.ceil(a, name = "Ceil") produces the smallest integer >= x & tf.floor(a, name = "floor") produces the largest integer <= x;
both accept values of type float32 or float64
The tf.maximum & tf.minimum functions produce the element-wise max and min of two tensors
tf.maximum(a,b,name = "Max")
tf.minimum(a,b,name = "Min")
init = tf.initialize_all_variables()
With the below commands, the Op gets executed
with tf.Session() as sess:
sess.run(init)
print "Additive: "
print (sess.run(sumVal))
sub, div, mul results can be obtained in the same manner.
print "Quotient from Truediv function: "
print (sess.run(QuoValtrue))
print "Cross Product: ",(sess.run(crossVal))
print "Element by element Mod value: ",(sess.run(modVal))
print "Absolute: ",(sess.run(absVal))
print "Element by element Sign value: ",(sess.run(signVal))
print "Element to the Power a: ",(sess.run(powVal))
print "Function to generate exponential values: ",(sess.run(expVal))
```
Example 4: The objective of this program is to generate a simple TensorBoard graph that describes the main events in the program below. The following TensorFlow program produces a tensor using random numbers, and reducing functions that calculate the mean, sum or product of the tensor elements can then be applied:
End of explanation
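Example 4 above mentions reductions for the sum and product as well as the mean, but the code only calls tf.reduce_mean; the short hedged sketch below (illustrative values, same old-style session API) shows the three reductions side by side.
```python
import tensorflow as tf

values = tf.constant([1.0, 2.0, 3.0, 4.0])

# Reductions collapse a tensor along its dimensions into a single value
meanVal = tf.reduce_mean(values)   # (1+2+3+4)/4 = 2.5
sumVal = tf.reduce_sum(values)     # 10.0
prodVal = tf.reduce_prod(values)   # 24.0

with tf.Session() as sess:
    print "Mean: ", sess.run(meanVal)
    print "Sum: ", sess.run(sumVal)
    print "Product: ", sess.run(prodVal)
```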
#```python
import tensorflow as tf
# Variables: Memory Buffers, with tensors that update and hold values
# Needs explicit initializations
# tf.Variables creates a variable with value 1.
currState = tf.Variable(1)
constValue = tf.constant(5)
cProd = currState *5
#Subtraction operation: cProd - currState
interValue = tf.sub(cProd,currState)
#Addition
newState = tf.add(currState,interValue)
#The below operation is performed once the run command is executed
update = tf.assign(currState,newState)
# Whenever a variable is created, it is an empty node. Only by initializing it does the
# variable get filled with content, i.e. a value
init_op = tf.initialize_all_variables()
# Within the Session class, the operations are executed and data in the form of
# tensors is evaluated
with tf.Session() as sess:
sess.run(init_op)
#The below statement will execute to generate the currState value
print(sess.run(currState))
for _ in range(5):
sess.run(update)
#print(sess.run(currState))
print (sess.run([currState, interValue]))
#```
Explanation: Tensor Board provides a visualization of the model that you create. It deals with 2 types of connections: Control dependency(denoted using dotted lines) and Data dependency(denoted using solid lines).
To record the variations produced by a particular function, provide them as inputs to scalar_summary ops with a tag name. Histogram_summary provides the distribution of an output variable from a layer.
tf.merge_all_summaries will merge the summaries created within the program, which is then directed to a summary_writer. Specify a logdir for summary writer to write the events.
To get the tensor board launched as below:
- run the command :
tensorboard --logdir = /tmp/mnist_logs
- Open a browser and navigate to
http:<server>:6006
Example 5:
Consider a simple model which calculates a new state using the error obtained from the previous state. The functions add and sub each form a node in the graph with their own definite operations. They produce zero or more tensors as inputs for the next set of nodes:
End of explanation
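The TensorBoard notes above mention histogram_summary for inspecting the distribution of a layer's output, but only scalar_summary appears in the code; below is a small hedged sketch of how a histogram summary could be attached using the same old-style summary API as Example 4 (the tag names and log directory are illustrative).
```python
import tensorflow as tf

weights = tf.Variable(tf.random_normal([8, 8]), name='Weights')

# Scalar summaries track a single value; histogram summaries track a whole distribution
_ = tf.scalar_summary('weights_mean', tf.reduce_mean(weights))
_ = tf.histogram_summary('weights_distribution', weights)

merged = tf.merge_all_summaries()
with tf.Session() as sess:
    writer = tf.train.SummaryWriter("/tmp/histogram_logs", sess.graph_def)
    sess.run(tf.initialize_all_variables())
    summary_str = sess.run(merged)
    writer.add_summary(summary_str, 0)
```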
# The objective of this program is to familiarise with the use of placeholders and
# a few matrix arithmetic functions
# Using a placeholder value indicates directing the tensor
# value into Operations in the graph.
# Placeholders should always fed with feed_dict argument while using Session.run()/tensor.eval()/operation.run() command
# Takes arguments: dtype, shape(optional),name(optional)
# the dtype elements indicate the type of tensor. For eg: float32, complex64, int8, qint8, bool, string etc
import tensorflow as tf
import numpy as np
val1 = tf.placeholder(tf.float32,shape = (8,8))
val2 = tf.placeholder(tf.float32, shape = (8,8))
randomNos2 = np.random.rand(8,8)
# tf.random_normal : Generates random numbers following normal distribution with mean =0.0, stddev = 1.0, dtype = tf.float32
randomNormal = tf.Variable(tf.random_normal([8,8]),name = 'NormalDistribution')
# Below command creates an OP (Operation) of type MatMul;
product = tf.matmul(val1,val2)
matInv = tf.matrix_inverse(product, name="MatrixInverse")
matDeter = tf.matrix_determinant(val1,name ="MatrixDeterminant")
eigenVec = tf.self_adjoint_eig(randomNormal, name = "EigenVectors")
# Unless specified, both val1 and val2 have to be of placeholder type input or they both
# have to be of Variable type, which doesn't require initialization using feed_dict
# addition = tf.add(randomNormal,val1) # This command will give an invalid argument error as val1
#randomNormal are variable and placeholder values respectively.
init = tf.initialize_all_variables()
# With the below commands, the Op gets executed
with tf.Session() as sess:
sess.run(init)
print (sess.run(product,feed_dict={val1:randomNos2, val2:randomNos2}))
print "INVERSE: ",(sess.run(matInv,feed_dict={val1:randomNos2, val2:randomNos2}))
# get_shape(): Returns the shape of a tensor; which would be either Fully-known shape(size and number of dimensions are known) or
# Partially-known shape(number of dimensions known but size is unknown) or Unknown Shape(as the name says, number of
# dimensions and size are unknown)
print "SHAPE OF PRODUCT TENSOR: ",(product.get_shape())
print "MATRIX DETERMINANT: ",(sess.run(matDeter,feed_dict={val1:randomNos2}))
print "EIGEN VECTORS: ",sess.run(eigenVec)
#print "Addition",sess.run(addition)
Explanation: INTRODUCING PLACEHOLDERS
They are used to provide input whenever TensorFlow is made to run a computation. To allow a dimension of any length, None can be specified in the shape argument. The example below, which deals with matrix multiplication of tensors generated using the random function, provides an understanding of working with placeholder values.
Example 6:
End of explanation
import tensorflow as tf
import numpy as np
input = tf.placeholder("float",[1,3])
weights = tf.Variable(tf.random_normal([3,3]),name = "Weights")
y = tf.matmul(input,weights)
relu_output = tf.nn.relu(y)
softmax = tf.nn.softmax(relu_output)
init = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init)
print "Matrix Product: ",sess.run(y,feed_dict={input:np.array([[1.0,2.0,3.0]])})
print "Weights: ",sess.run(weights)
print "Applying Activation function Softmax: ",sess.run(softmax,feed_dict={input:np.array([[1.0,2.0,3.0]])})
Explanation: ACTIVATION FUNCTIONS
In the biological world, neurons do the decision making. The figure below shows a 3 layer neural network: with 2 hidden layers and one output layer. Each element in the input layer (blue layer), hidden layer (yellow layer) and output layer (green layer) is considered a neuron that communicates between the layers.
<center> 3 Layer Neural Network</center>
A Perceptron neuron is an artificial neuron that takes many binary inputs and gives a binary output. To compute the result, weights (real numbers indicating the role of the inputs in the corresponding output) are used. The weighted sum of the products of inputs and weights, added to a bias, is compared with a threshold value to decide the neuron output as 0 or 1.
The output from one layer is fed as input to the next layer. Such a network is called a feed forward network. Each element in the hidden and output layers is called a neuron and it just does some simple tasks: it reads input data, processes it and gives an output. By building a network of such models we make it 'learn', producing intelligent results that can converge towards a decision.
A Perceptron is a basic network model which gets a weighted input from the previous layer. The results from a perceptron are found to flip as the weights / bias values change.
Sigmoid neurons are considered to be better than perceptrons. Though similar to a perceptron, a sigmoid neuron maps its input through an exponential to a value between 0 and 1. The output is a real number between 0 and 1. A sigmoid neuron is a smoothed version of a perceptron.
There are many other learning algorithms and they will make our model learn and adapt its weights and biases so as to provide us with the decision made.
Activation functions produce a tensor of the same shape as their inputs.
The most common activation function is called the Sigmoid.
$$\sigma(x) = 1/(1+e^{-x})$$
Other activation functions include Softplus, Sigmoid, tanh and elu, and TensorFlow has a command for each of these activation functions.
Softmax regression is used for classification. It is a supervised learning algorithm. It has the added feature of being able to classify the input data into multiple groups, rather than just 2 groups. In the case of the MNIST digit recognition task, the number of groups is 10.
Softmax regression requires evidence information, which is calculated using the bias, the weights and the input image. A weighted sum of the inputs tallies up the evidence that the image is in a specific class. A positive weight indicates that the evidence is in favor of the image belonging to that class. If the weight is negative then the pixel is evidence against the image belonging to that class.
Let's try a simple implementation in TensorFlow using the ReLU activation function
Example 5:
End of explanation |
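To make the sigmoid and softmax formulas above concrete, here is a tiny hedged sketch in plain NumPy (independent of the TensorFlow code; the scores are made-up numbers) that evaluates both on a small vector.
```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Turns a vector of scores into probabilities that sum to 1
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([1.0, 2.0, 3.0])
print "Sigmoid: ", sigmoid(scores)
print "Softmax: ", softmax(scores), "sum =", softmax(scores).sum()
```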
691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression
Resources
Step1: The logistic regression equation has a very similar representation to linear regression. The difference is that the output value being modelled is binary in nature.
$$\hat{y}=\frac{e^{\beta_0+\beta_1x_1}}{1+e^{\beta_0+\beta_1x_1}}$$
or
$$\hat{y}=\frac{1.0}{1.0+e^{-\beta_0-\beta_1x_1}}$$
$\beta_0$ is the intercept term
$\beta_1$ is the coefficient for $x_1$
$\hat{y}$ is the predicted output with real value between 0 and 1. To convert this to binary output of 0 or 1, this would either need to be rounded to an integer value or a cutoff point be provided to specify the class segregation point.
Step2: Making Predictions with Logistic Regression
$$\hat{y}=\frac{1.0}{1.0+e^{-\beta_0-\beta_1x_i}}$$
$\beta_0$ is the intercept term
$\beta_1$ is the coefficient for $x_i$
$\hat{y}$ is the predicted output with real value between 0 and 1. To convert this to binary output of 0 or 1, this would either need to be rounded to an integer value or a cutoff point be provided to specify the class segregation point.
Step3: Let's say you have been provided with the coefficient
Step4: Learning the Logistic Regression Model
The coefficients (Beta values b) of the logistic regression algorithm must be estimated from your training data. This is done using maximum-likelihood estimation.
Maximum-likelihood estimation is a common learning algorithm used by a variety of machine learning algorithms, although it does make assumptions about the distribution of your data (more on this when we talk about preparing your data).
The best coefficients would result in a model that would predict a value very close to 1 (e.g. male) for the default class and a value very close to 0 (e.g. female) for the other class. The intuition for maximum-likelihood for logistic regression is that a search procedure seeks values for the coefficients (Beta values) that minimize the error in the probabilities predicted by the model to those in the data (e.g. probability of 1 if the data is the primary class).
We are not going to go into the math of maximum likelihood. It is enough to say that a minimization algorithm is used to optimize the best values for the coefficients for your training data. This is often implemented in practice using efficient numerical optimization algorithm (like the Quasi-newton method).
When you are learning logistic, you can implement it yourself from scratch using the much simpler gradient descent algorithm.
Learning with Stochastic Gradient Descent
Logistic Regression uses gradient descent to update the coefficients.
Each gradient descent iteration, the coefficients are updated using the equation | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
x = np.linspace(-6, 6, num = 1000)
plt.figure(figsize = (12,8))
plt.plot(x, 1 / (1 + np.exp(-x))); # Sigmoid Function
plt.title("Sigmoid Function");
Explanation: Logistic Regression
Resources:
Logistic Regression Tutorial for Machine Learning
Logistic Regression for Machine Learning
How To Implement Logistic Regression With Stochastic Gradient Descent From Scratch With Python
Logistic regression is the go-to linear classification algorithm for two-class problems. It is easy to implement, easy to understand and gets great results on a wide variety of problems, even when the expectations the method has for your data are violated.
Description
Logistic Regression
Logistic regression is named for the function used at the core of the method, the logistic function.
The logistic function, also called the Sigmoid function, was developed by statisticians to describe properties of population growth in ecology, rising quickly and maxing out at the carrying capacity of the environment. It’s an S-shaped curve that can take any real-valued number and map it into a value between 0 and 1, but never exactly at those limits.
$$\frac{1}{1 + e^{-x}}$$
$e$ is the base of the natural logarithms and $x$ is value that you want to transform via the logistic function.
End of explanation
tmp = [0, 0.4, 0.6, 0.8, 1.0]
tmp
np.round(tmp)
np.array(tmp) > 0.7
Explanation: The logistic regression equation has a very similar representation to linear regression. The difference is that the output value being modelled is binary in nature.
$$\hat{y}=\frac{e^{\beta_0+\beta_1x_1}}{1+e^{\beta_0+\beta_1x_1}}$$
or
$$\hat{y}=\frac{1.0}{1.0+e^{-\beta_0-\beta_1x_1}}$$
$\beta_0$ is the intercept term
$\beta_1$ is the coefficient for $x_1$
$\hat{y}$ is the predicted output with real value between 0 and 1. To convert this to binary output of 0 or 1, this would either need to be rounded to an integer value or a cutoff point be provided to specify the class segregation point.
End of explanation
dataset = [[-2.0011, 0],
[-1.4654, 0],
[0.0965, 0],
[1.3881, 0],
[3.0641, 0],
[7.6275, 1],
[5.3324, 1],
[6.9225, 1],
[8.6754, 1],
[7.6737, 1]]
Explanation: Making Predictions with Logistic Regression
$$\hat{y}=\frac{1.0}{1.0+e^{-\beta_0-\beta_1x_i}}$$
$\beta_0$ is the intercept term
$\beta_1$ is the coefficient for $x_i$
$\hat{y}$ is the predicted output with real value between 0 and 1. To convert this to binary output of 0 or 1, this would either need to be rounded to an integer value or a cutoff point be provided to specify the class segregation point.
End of explanation
coef = [-0.806605464, 0.2573316]
for row in dataset:
yhat = 1.0 / (1.0 + np.exp(- coef[0] - coef[1] * row[0]))
print("yhat {0:.4f}, yhat {1}".format(yhat, round(yhat)))
Explanation: Let's say you have been provided with the coefficient
End of explanation
from sklearn.linear_model import LogisticRegression
dataset
X = np.array(dataset)[:, 0:1]
y = np.array(dataset)[:, 1]
X
y
clf_LR = LogisticRegression(C=1.0, penalty='l2', tol=0.0001)
clf_LR.fit(X,y)
clf_LR.predict(X)
clf_LR.predict_proba(X)
Explanation: Learning the Logistic Regression Model
The coefficients (Beta values b) of the logistic regression algorithm must be estimated from your training data. This is done using maximum-likelihood estimation.
Maximum-likelihood estimation is a common learning algorithm used by a variety of machine learning algorithms, although it does make assumptions about the distribution of your data (more on this when we talk about preparing your data).
The best coefficients would result in a model that would predict a value very close to 1 (e.g. male) for the default class and a value very close to 0 (e.g. female) for the other class. The intuition for maximum-likelihood for logistic regression is that a search procedure seeks values for the coefficients (Beta values) that minimize the error in the probabilities predicted by the model to those in the data (e.g. probability of 1 if the data is the primary class).
We are not going to go into the math of maximum likelihood. It is enough to say that a minimization algorithm is used to optimize the best values for the coefficients for your training data. This is often implemented in practice using efficient numerical optimization algorithm (like the Quasi-newton method).
When you are learning logistic, you can implement it yourself from scratch using the much simpler gradient descent algorithm.
Learning with Stochastic Gradient Descent
Logistic Regression uses gradient descent to update the coefficients.
Each gradient descent iteration, the coefficients are updated using the equation:
$$\beta=\beta+\textrm{learning rate}\times (y-\hat{y}) \times \hat{y} \times (1-\hat{y}) \times x $$
Using Scikit Learn to Estimate Coefficients
End of explanation |
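Since the explanation above suggests implementing logistic regression from scratch with the gradient descent update rule, here is a small hedged sketch that applies that update to the toy dataset defined earlier; the learning rate and epoch count are illustrative choices, not tuned values.
```python
import numpy as np

# Toy dataset from above: [x, label]
dataset = [[-2.0011, 0], [-1.4654, 0], [0.0965, 0], [1.3881, 0], [3.0641, 0],
           [7.6275, 1], [5.3324, 1], [6.9225, 1], [8.6754, 1], [7.6737, 1]]

b0, b1 = 0.0, 0.0     # start with zero coefficients
learning_rate = 0.3   # illustrative choice
n_epochs = 100

for epoch in range(n_epochs):
    for x, y in dataset:
        yhat = 1.0 / (1.0 + np.exp(-b0 - b1 * x))
        error = y - yhat
        # Stochastic gradient descent update for each coefficient
        b0 = b0 + learning_rate * error * yhat * (1.0 - yhat) * 1.0
        b1 = b1 + learning_rate * error * yhat * (1.0 - yhat) * x

print("b0 = {0:.4f}, b1 = {1:.4f}".format(b0, b1))
for x, y in dataset:
    yhat = 1.0 / (1.0 + np.exp(-b0 - b1 * x))
    print("true {0}, predicted {1:.4f} -> {2}".format(y, yhat, round(yhat)))
```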
692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating human faces with Adversarial Networks (5 points)
<img src="https
Step1: Generative adversarial nets 101
<img src="https
Step2: Discriminator
Discriminator is your usual convolutional network with interleaving convolution and pooling layers
The network does not include dropout/batchnorm to avoid learning complications.
We also regularize the pre-output layer to prevent discriminator from being too certain.
Step5: Training
We train the two networks concurrently
Step6: Auxiliary functions
Here we define a few helper functions that draw current data distributions and sample training batches.
Step7: Training
Main loop.
We just train generator and discriminator in a loop and draw results once every N iterations.
Step8: Evaluation
The code below dumps a batch of images so that you could use them for precision/recall evaluation.
Please generate the same number of images as for autoencoders for a fair comparison. | Python Code:
from torchvision import utils
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import torch, torch.nn as nn
import torch.nn.functional as F
from itertools import count
from IPython import display
import warnings
import time
plt.rcParams.update({'axes.titlesize': 'small'})
from sklearn.datasets import load_digits
#The following line fetches you two datasets: images, usable for autoencoder training and attributes.
#Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
from lfw_dataset import fetch_lfw_dataset
data,attrs = fetch_lfw_dataset(dimx=36, dimy=36)
#preprocess faces
data = np.float32(data).transpose([0,3,1,2]) / 255.
IMG_SHAPE = data.shape[1:]
#print random image
plt.imshow(data[np.random.randint(data.shape[0])].transpose([1,2,0]),
cmap="gray", interpolation="none")
Explanation: Generating human faces with Adversarial Networks (5 points)
<img src="https://www.strangerdimensions.com/wp-content/uploads/2013/11/reception-robot.jpg" width=320>
This time we'll train a neural net to generate plausible human faces in all their subtlety: appearance, expression, accessories, etc. 'Cuz when us machines gonna take over Earth, there won't be any more faces left. We want to preserve this data for future iterations. Yikes...
Based on https://github.com/Lasagne/Recipes/pull/94.
End of explanation
use_cuda = torch.cuda.is_available()
print("Torch version:", torch.__version__)
if use_cuda:
print("Using GPU")
else:
print("Not using GPU")
def sample_noise_batch(batch_size):
noise = torch.randn(batch_size, CODE_SIZE)
#print(noise.shape)
return noise.cuda() if use_cuda else noise.cpu()
class Reshape(nn.Module):
def __init__(self, shape):
nn.Module.__init__(self)
self.shape=shape
def forward(self,input):
return input.view(self.shape)
def save_checkpoint(state, filename):
torch.save(state, filename)
CODE_SIZE = 256
# automatic layer name maker. Don't do this in production :)
ix = ('layer_%i'%i for i in count())
generator = nn.Sequential()
generator.add_module(next(ix), nn.Linear(CODE_SIZE, 10*8*8)) #output 10*8*8
generator.add_module(next(ix), nn.ELU())
generator.add_module(next(ix), Reshape([-1, 10, 8, 8])) #output 10x8x8
generator.add_module(next(ix), nn.ConvTranspose2d(10, 64, kernel_size=(5,5))) #output 64x12x12
generator.add_module(next(ix), nn.ELU())
generator.add_module(next(ix), nn.ConvTranspose2d(64, 64, kernel_size=(5,5))) #output 64x16x16
generator.add_module(next(ix), nn.ELU())
generator.add_module(next(ix), nn.Upsample(scale_factor=2)) #output 64x32x32
generator.add_module(next(ix), nn.ConvTranspose2d(64, 32, kernel_size=(5,5))) #output 32x36x36
generator.add_module(next(ix), nn.ELU())
generator.add_module(next(ix), nn.ConvTranspose2d(32, 32, kernel_size=(5,5))) #output 32x40x40
generator.add_module(next(ix), nn.ELU())
generator.add_module(next(ix), nn.Conv2d(32, 3, kernel_size=(5,5))) #output 3x36x36
#generator.add_module(next(ix), nn.Sigmoid())
if use_cuda: generator.cuda()
generated_data = generator(sample_noise_batch(5))
assert tuple(generated_data.shape)[1:] == IMG_SHAPE, \
"generator must output an image of shape %s, but instead it produces %s"%(IMG_SHAPE,generated_data.shape)
plt.figure(figsize=(16,10))
plt.axis('off')
plt.imshow(utils.make_grid(generated_data).cpu().detach().numpy().transpose((1,2,0)).clip(0,1)*10)
plt.show()
Explanation: Generative adversarial nets 101
<img src="https://raw.githubusercontent.com/torch/torch.github.io/master/blog/_posts/images/model.png" width=320px height=240px>
Deep learning is simple, isn't it?
* build some network that generates the face (small image)
* make up a measure of how good that face is
* optimize with gradient descent :)
The only problem is: how can we engineers tell well-generated faces from bad ones? And I bet you we won't ask a designer for help.
If we can't tell good faces from bad, we delegate it to yet another neural network!
That makes the two of them:
* Generator - takes random noise for inspiration and tries to generate a face sample.
* Let's call him G(z), where z is Gaussian noise.
* Discriminator - takes a face sample and tries to tell if it's great or fake.
* Predicts the probability of the input image being a real face
* Let's call him D(x), x being an image.
* D(x) is a prediction for a real image and D(G(z)) is a prediction for the face made by the generator.
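For reference, the game the two networks play can be written as the usual minimax objective (this is the standard formulation from the original GAN paper, not something spelled out in this notebook):
$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]$$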
Before we dive into training them, let's construct the two networks.
End of explanation
def sample_data_batch(batch_size):
idxs = np.random.choice(np.arange(data.shape[0]), size=batch_size)
batch = torch.tensor(data[idxs], dtype=torch.float32)
return batch.cuda() if use_cuda else batch.cpu()
# a special module that converts [batch, channel, w, h] to [batch, units]
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.shape[0], -1)
discriminator = nn.Sequential()
## Got mediocre result
### YOUR CODE - create convolutional architecture for discriminator
### Note: please start simple. A few convolutions & poolings would do, inception/resnet is an overkill
discriminator.add_module("conv1", nn.Conv2d(3, 32, 5)) #output 32x32x32
discriminator.add_module("elu1", nn.ELU())
#discriminator.add_module("pool2d", nn.MaxPool2d(2, stride=2)) #output 32x16x16
discriminator.add_module('avgpool1', nn.AdaptiveAvgPool2d((16,16)))
discriminator.add_module("conv2", nn.Conv2d(32, 64, 5)) #output 64x12x12
discriminator.add_module("elu2", nn.ELU())
discriminator.add_module("conv3", nn.Conv2d(64, 10, 5)) #output 10x8x8
discriminator.add_module("elu3", nn.ELU())
discriminator.add_module("reshape", Reshape([-1, 10*8*8]))
discriminator.add_module("linear1", nn.Linear(10*8*8, CODE_SIZE)) #output 256
discriminator.add_module("elu4", nn.ELU())
discriminator.add_module("linear1", nn.Linear(CODE_SIZE, 1))
if use_cuda: discriminator.cuda()
discriminator = nn.Sequential()
# Got bad results
### YOUR CODE - create convolutional architecture for discriminator
### Note: please start simple. A few convolutions & poolings would do, inception/resnet is an overkill
discriminator.add_module("conv1", nn.Conv2d(3, 32, 5)) #output 32x32x32
discriminator.add_module("lrelu1", nn.LeakyReLU(0.2))
discriminator.add_module("conv2", nn.Conv2d(32, 64, 3)) #output 64x30x30
discriminator.add_module("bn1", nn.BatchNorm2d(64))
discriminator.add_module("lrelu2", nn.LeakyReLU(0.2))
discriminator.add_module('avgpool1', nn.AdaptiveAvgPool2d((15,15)))
discriminator.add_module("conv3", nn.Conv2d(64, 128, 4)) #output 128x12x12
discriminator.add_module("bn2", nn.BatchNorm2d(128))
discriminator.add_module("lrelu3", nn.LeakyReLU(0.2))
discriminator.add_module('avgpool2', nn.AdaptiveAvgPool2d((6,6))) #output 128x6x6
discriminator.add_module("conv4", nn.Conv2d(128, 256, 4)) #output 256x3x3
discriminator.add_module("bn3", nn.BatchNorm2d(256))
discriminator.add_module("lrelu4", nn.LeakyReLU(0.2))
discriminator.add_module("reshape", Reshape([-1, 256*3*3]))
discriminator.add_module("linear1", nn.Linear(256*3*3, 1)) #output 256
if use_cuda: discriminator.cuda()
discriminator = nn.Sequential()
# More or less fine
### YOUR CODE - create convolutional architecture for discriminator
### Note: please start simple. A few convolutions & poolings would do, inception/resnet is an overkill
discriminator.add_module("conv1", nn.Conv2d(3, 32, 5)) #output 32x32x32
discriminator.add_module("lrelu1", nn.LeakyReLU(0.2))
discriminator.add_module('avgpool1', nn.AdaptiveAvgPool2d((16,16))) #output 32x16x16
discriminator.add_module("conv2", nn.Conv2d(32, 64, 5, 1, 2)) #output 64x16x16
discriminator.add_module("bn1", nn.BatchNorm2d(64))
discriminator.add_module("lrelu2", nn.LeakyReLU(0.2))
discriminator.add_module('avgpool2', nn.AdaptiveAvgPool2d((8,8))) #output 64x8x8
discriminator.add_module("conv3", nn.Conv2d(64, 128, 5, 1, 2)) #output 128x8x8
discriminator.add_module("bn2", nn.BatchNorm2d(128))
discriminator.add_module("lrelu3", nn.LeakyReLU(0.2))
discriminator.add_module('avgpool2', nn.AdaptiveAvgPool2d((4,4))) #output 128x4x4
discriminator.add_module("conv4", nn.Dropout(0.5))
discriminator.add_module("reshape", Reshape([-1, 128*4*4]))
discriminator.add_module("linear1", nn.Linear(128*4*4, 1)) #output 1
if use_cuda: discriminator.cuda()
sample = sample_data_batch(5)
plt.figure(figsize=(16,10))
plt.axis('off')
plt.imshow(utils.make_grid(sample).cpu().detach().numpy().transpose((1,2,0)).clip(0,1))
plt.show()
discriminator(sample).shape
Explanation: Discriminator
Discriminator is your usual convolutional network with interleaving convolution and pooling layers
The network does not include dropout/batchnorm to avoid learning complications.
We also regularize the pre-output layer to prevent discriminator from being too certain.
End of explanation
def generator_loss(noise):
# 1. generate data given noise
# 2. compute log P(real | gen noise)
# 3. return generator loss (should be scalar)
generated_data = generator(noise)
disc_on_generated_data = discriminator(generated_data)
logp_gen_is_real = F.logsigmoid(disc_on_generated_data)
loss = -1 * torch.mean(logp_gen_is_real)
return loss
loss = generator_loss(sample_noise_batch(32))
print(loss)
assert len(loss.shape) == 0, "loss must be scalar"
def discriminator_loss(real_data, generated_data):
# 1. compute discriminator's output on real & generated data
# 2. compute log-probabilities of real data being real, generated data being fake
# 3. return discriminator loss (scalar)
disc_on_real_data = discriminator(real_data)
disc_on_fake_data = discriminator(generated_data)
logp_real_is_real = F.logsigmoid(disc_on_real_data)
logp_gen_is_fake = F.logsigmoid(1 - disc_on_fake_data)
loss = -1 * torch.mean(logp_real_is_real + logp_gen_is_fake)
return loss
loss = discriminator_loss(sample_data_batch(32),
generator(sample_noise_batch(32)))
print(loss)
assert len(loss.shape) == 0, "loss must be scalar"
Explanation: Training
We train the two networks concurrently:
* Train discriminator to better distinguish real data from current generator
* Train generator to make discriminator think generator is real
* Since discriminator is a differentiable neural network, we train both with gradient descent.
Training is done iteratively until discriminator is no longer able to find the difference (or until you run out of patience).
Tricks:
Regularize discriminator output weights to prevent explosion
Train generator with adam to speed up training. Discriminator trains with SGD to avoid problems with momentum.
More: https://github.com/soumith/ganhacks
End of explanation
def sample_images(nrow, ncol, sharp=False):
with torch.no_grad():
images = generator(sample_noise_batch(batch_size=nrow*ncol))
images = images.data.cpu().numpy().transpose([0, 2, 3, 1])
if np.var(images)!=0:
images = images.clip(np.min(data),np.max(data))
for i in range(nrow*ncol):
plt.subplot(nrow,ncol,i+1)
plt.axis('off')
if sharp:
plt.imshow(images[i], cmap="gray", interpolation="none")
else:
plt.imshow(images[i], cmap="gray")
plt.show()
def sample_probas(batch_size):
plt.title('Generated vs real data')
D_real = F.sigmoid(discriminator(sample_data_batch(batch_size)))
generated_data_batch = generator(sample_noise_batch(batch_size))
D_fake = F.sigmoid(discriminator(generated_data_batch))
plt.hist(D_real.data.cpu().numpy(),
label='D(x)', alpha=0.5, range=[0,1])
plt.hist(D_fake.data.cpu().numpy(),
label='D(G(z))', alpha=0.5, range=[0,1])
plt.legend(loc='best')
plt.show()
Explanation: Auxiliary functions
Here we define a few helper functions that draw current data distributions and sample training batches.
End of explanation
#optimizers
disc_opt = torch.optim.SGD(discriminator.parameters(), weight_decay=1e-4, lr=5e-3)
gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
last_epoch = 0
WEIGHTS_PATH = './weights/dcgan.pth.tar'
if (torch.cuda.is_available()):
checkpoint = torch.load(f=WEIGHTS_PATH)
else:
# net = nn.DataParallel(net)  # note: 'net' is not defined in this notebook; leftover from another script
checkpoint = torch.load(map_location='cpu', f=WEIGHTS_PATH)
generator.load_state_dict(checkpoint['gen_weights'])
discriminator.load_state_dict(checkpoint['disc_weights'])
last_epoch = checkpoint['last_epoch']
disc_opt.load_state_dict(checkpoint['disc_optim'])
gen_opt.load_state_dict(checkpoint['gen_optim'])
def gaussian(ins, mean=0, stddev=0.05):
noise = torch.autograd.Variable(ins.data.new(ins.size()).normal_(mean, stddev))
return ins + noise
warnings.simplefilter('ignore')
batch_size = 100
disc_loss = 0
gen_loss = 0
start = time.time()
for epoch in range(last_epoch, 50000):
# Train discriminator
for i in range(5):
real_data = sample_data_batch(batch_size)
fake_data = generator(sample_noise_batch(batch_size))
loss = discriminator_loss(gaussian(real_data), gaussian(fake_data))
disc_opt.zero_grad()
loss.backward()
disc_opt.step()
disc_loss = loss.item()
# Train generator
for j in range(1):
noise = sample_noise_batch(batch_size)
loss = generator_loss(noise)
gen_opt.zero_grad()
loss.backward()
gen_opt.step()
gen_loss = loss.item()
if epoch %100==0:
end = time.time()
display.clear_output(wait=True)
print("epoch %d, Generator loss %.7f, discriminator loss %.7f" % (epoch, gen_loss, disc_loss))
print("time taken (100 epochs) %.0f sec" % (end - start))
sample_images(2,3,True)
sample_probas(1000)
start = time.time()
last_epoch = epoch
print(epoch)
save_checkpoint({
'gen_weights': generator.state_dict(),
'disc_weights' : discriminator.state_dict(),
'gen_optim' : gen_opt.state_dict(),
'disc_optim' : disc_opt.state_dict(),
'last_epoch' : last_epoch
}, "./weights/dcgan.pth.tar")
plt.figure(figsize=[16, 24])
sample_images(16, 8)
# Note: a no-nonsense neural network should be able to produce reasonably good images after 15k iterations
# By "reasonably good" we mean "resembling a car crash victim" or better
Explanation: Training
Main loop.
We just train generator and discriminator in a loop and draw results once every N iterations.
End of explanation
num_images = len(data)
batch_size = 100
all_images = []
for batch_i in range(int((num_images - 1) / batch_size + 1)):
with torch.no_grad():
images = generator(sample_noise_batch(batch_size=batch_size))
images = images.data.cpu().numpy().transpose([0, 2, 3, 1])
if np.var(images)!=0:
images = images.clip(np.min(data), np.max(data))
all_images.append(images)
all_images = np.concatenate(all_images, axis=0)[:num_images]
np.savez("./gan.npz", Pictures=all_images)
Explanation: Evaluation
The code below dumps a batch of images so that you could use them for precision/recall evaluation.
Please generate the same number of images as for autoencoders for a fair comparison.
End of explanation |
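As a small follow-up (assuming the gan.npz file written above), the dumped batch can be reloaded later for the precision/recall evaluation like this:
```python
import numpy as np

# Reload the generated images that were dumped above
generated = np.load("./gan.npz")["Pictures"]
print(generated.shape)               # (num_images, height, width, channels)
print(generated.min(), generated.max())
```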
693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Retraining an Image Classifier
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Select the TF2 SavedModel module to use
For starters, use https
Step3: Set up the Flowers dataset
Inputs are suitably resized for the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, esp. when fine-tuning.
Step4: Defining the model
All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.
For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
Step5: Training the model
Step6: Try out the model on an image from the validation data
Step7: Finally, the trained model can be saved for deployment to TF Serving or TFLite (on mobile) as follows.
Step8: Optional | Python Code:
# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2021 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import itertools
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("TF version:", tf.__version__)
print("Hub version:", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Explanation: Retraining an Image Classifier
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_image_retraining"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/collections/image/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
Introduction
Image classification models have millions of parameters. Training them from
scratch requires a lot of labeled training data and a lot of computing power. Transfer learning is a technique that shortcuts much of this by taking a piece of a model that has already been trained on a related task and reusing it in a new model.
This Colab demonstrates how to build a Keras model for classifying five species of flowers by using a pre-trained TF2 SavedModel from TensorFlow Hub for image feature extraction, trained on the much larger and more general ImageNet dataset. Optionally, the feature extractor can be trained ("fine-tuned") alongside the newly added classifier.
Looking for a tool instead?
This is a TensorFlow coding tutorial. If you want a tool that just builds the TensorFlow or TFLite model for you, take a look at the make_image_classifier command-line tool that gets installed by the PIP package tensorflow-hub[make_image_classifier], or at this TFLite colab.
Setup
End of explanation
model_name = "efficientnetv2-xl-21k" # @param ['efficientnetv2-s', 'efficientnetv2-m', 'efficientnetv2-l', 'efficientnetv2-s-21k', 'efficientnetv2-m-21k', 'efficientnetv2-l-21k', 'efficientnetv2-xl-21k', 'efficientnetv2-b0-21k', 'efficientnetv2-b1-21k', 'efficientnetv2-b2-21k', 'efficientnetv2-b3-21k', 'efficientnetv2-s-21k-ft1k', 'efficientnetv2-m-21k-ft1k', 'efficientnetv2-l-21k-ft1k', 'efficientnetv2-xl-21k-ft1k', 'efficientnetv2-b0-21k-ft1k', 'efficientnetv2-b1-21k-ft1k', 'efficientnetv2-b2-21k-ft1k', 'efficientnetv2-b3-21k-ft1k', 'efficientnetv2-b0', 'efficientnetv2-b1', 'efficientnetv2-b2', 'efficientnetv2-b3', 'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6', 'efficientnet_b7', 'bit_s-r50x1', 'inception_v3', 'inception_resnet_v2', 'resnet_v1_50', 'resnet_v1_101', 'resnet_v1_152', 'resnet_v2_50', 'resnet_v2_101', 'resnet_v2_152', 'nasnet_large', 'nasnet_mobile', 'pnasnet_large', 'mobilenet_v2_100_224', 'mobilenet_v2_130_224', 'mobilenet_v2_140_224', 'mobilenet_v3_small_100_224', 'mobilenet_v3_small_075_224', 'mobilenet_v3_large_100_224', 'mobilenet_v3_large_075_224']
model_handle_map = {
"efficientnetv2-s": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_s/feature_vector/2",
"efficientnetv2-m": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_m/feature_vector/2",
"efficientnetv2-l": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_l/feature_vector/2",
"efficientnetv2-s-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_s/feature_vector/2",
"efficientnetv2-m-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_m/feature_vector/2",
"efficientnetv2-l-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_l/feature_vector/2",
"efficientnetv2-xl-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_xl/feature_vector/2",
"efficientnetv2-b0-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b0/feature_vector/2",
"efficientnetv2-b1-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b1/feature_vector/2",
"efficientnetv2-b2-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b2/feature_vector/2",
"efficientnetv2-b3-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b3/feature_vector/2",
"efficientnetv2-s-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2",
"efficientnetv2-m-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_m/feature_vector/2",
"efficientnetv2-l-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_l/feature_vector/2",
"efficientnetv2-xl-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_xl/feature_vector/2",
"efficientnetv2-b0-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b0/feature_vector/2",
"efficientnetv2-b1-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b1/feature_vector/2",
"efficientnetv2-b2-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b2/feature_vector/2",
"efficientnetv2-b3-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b3/feature_vector/2",
"efficientnetv2-b0": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b0/feature_vector/2",
"efficientnetv2-b1": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b1/feature_vector/2",
"efficientnetv2-b2": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b2/feature_vector/2",
"efficientnetv2-b3": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b3/feature_vector/2",
"efficientnet_b0": "https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1",
"efficientnet_b1": "https://tfhub.dev/tensorflow/efficientnet/b1/feature-vector/1",
"efficientnet_b2": "https://tfhub.dev/tensorflow/efficientnet/b2/feature-vector/1",
"efficientnet_b3": "https://tfhub.dev/tensorflow/efficientnet/b3/feature-vector/1",
"efficientnet_b4": "https://tfhub.dev/tensorflow/efficientnet/b4/feature-vector/1",
"efficientnet_b5": "https://tfhub.dev/tensorflow/efficientnet/b5/feature-vector/1",
"efficientnet_b6": "https://tfhub.dev/tensorflow/efficientnet/b6/feature-vector/1",
"efficientnet_b7": "https://tfhub.dev/tensorflow/efficientnet/b7/feature-vector/1",
"bit_s-r50x1": "https://tfhub.dev/google/bit/s-r50x1/1",
"inception_v3": "https://tfhub.dev/google/imagenet/inception_v3/feature-vector/4",
"inception_resnet_v2": "https://tfhub.dev/google/imagenet/inception_resnet_v2/feature-vector/4",
"resnet_v1_50": "https://tfhub.dev/google/imagenet/resnet_v1_50/feature-vector/4",
"resnet_v1_101": "https://tfhub.dev/google/imagenet/resnet_v1_101/feature-vector/4",
"resnet_v1_152": "https://tfhub.dev/google/imagenet/resnet_v1_152/feature-vector/4",
"resnet_v2_50": "https://tfhub.dev/google/imagenet/resnet_v2_50/feature-vector/4",
"resnet_v2_101": "https://tfhub.dev/google/imagenet/resnet_v2_101/feature-vector/4",
"resnet_v2_152": "https://tfhub.dev/google/imagenet/resnet_v2_152/feature-vector/4",
"nasnet_large": "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/4",
"nasnet_mobile": "https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/4",
"pnasnet_large": "https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/4",
"mobilenet_v2_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
"mobilenet_v2_130_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/feature_vector/4",
"mobilenet_v2_140_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/feature_vector/4",
"mobilenet_v3_small_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/feature_vector/5",
"mobilenet_v3_small_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_075_224/feature_vector/5",
"mobilenet_v3_large_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/feature_vector/5",
"mobilenet_v3_large_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_075_224/feature_vector/5",
}
model_image_size_map = {
"efficientnetv2-s": 384,
"efficientnetv2-m": 480,
"efficientnetv2-l": 480,
"efficientnetv2-b0": 224,
"efficientnetv2-b1": 240,
"efficientnetv2-b2": 260,
"efficientnetv2-b3": 300,
"efficientnetv2-s-21k": 384,
"efficientnetv2-m-21k": 480,
"efficientnetv2-l-21k": 480,
"efficientnetv2-xl-21k": 512,
"efficientnetv2-b0-21k": 224,
"efficientnetv2-b1-21k": 240,
"efficientnetv2-b2-21k": 260,
"efficientnetv2-b3-21k": 300,
"efficientnetv2-s-21k-ft1k": 384,
"efficientnetv2-m-21k-ft1k": 480,
"efficientnetv2-l-21k-ft1k": 480,
"efficientnetv2-xl-21k-ft1k": 512,
"efficientnetv2-b0-21k-ft1k": 224,
"efficientnetv2-b1-21k-ft1k": 240,
"efficientnetv2-b2-21k-ft1k": 260,
"efficientnetv2-b3-21k-ft1k": 300,
"efficientnet_b0": 224,
"efficientnet_b1": 240,
"efficientnet_b2": 260,
"efficientnet_b3": 300,
"efficientnet_b4": 380,
"efficientnet_b5": 456,
"efficientnet_b6": 528,
"efficientnet_b7": 600,
"inception_v3": 299,
"inception_resnet_v2": 299,
"nasnet_large": 331,
"pnasnet_large": 331,
}
model_handle = model_handle_map.get(model_name)
pixels = model_image_size_map.get(model_name, 224)
print(f"Selected model: {model_name} : {model_handle}")
IMAGE_SIZE = (pixels, pixels)
print(f"Input size {IMAGE_SIZE}")
BATCH_SIZE = 16#@param {type:"integer"}
Explanation: Select the TF2 SavedModel module to use
For starters, use https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4. The same URL can be used in code to identify the SavedModel and in your browser to show its documentation. (Note that models in TF1 Hub format won't work here.)
You can find more TF2 models that generate image feature vectors here.
There are multiple possible models to try. All you need to do is select a different one on the cell below and follow up with the notebook.
End of explanation
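As a quick sanity check of the selected handle, the feature extractor can be loaded on its own and probed for the size of the feature vector it produces before the full classifier is built; this is a hedged sketch (it reuses the model_handle and pixels variables chosen above and needs network access to download the model).
```python
import tensorflow as tf
import tensorflow_hub as hub

# Load just the feature extractor and check its output dimensionality
feature_extractor = hub.KerasLayer(model_handle, trainable=False)
dummy_images = tf.zeros([1, pixels, pixels, 3])  # one all-black image of the expected input size
features = feature_extractor(dummy_images)
print("Feature vector shape:", features.shape)
```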
data_dir = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
def build_dataset(subset):
return tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=.20,
subset=subset,
label_mode="categorical",
# Seed needs to provided when using validation_split and shuffle = True.
# A fixed seed is used so that the validation set is stable across runs.
seed=123,
image_size=IMAGE_SIZE,
batch_size=1)
train_ds = build_dataset("training")
class_names = tuple(train_ds.class_names)
train_size = train_ds.cardinality().numpy()
train_ds = train_ds.unbatch().batch(BATCH_SIZE)
train_ds = train_ds.repeat()
normalization_layer = tf.keras.layers.Rescaling(1. / 255)
preprocessing_model = tf.keras.Sequential([normalization_layer])
do_data_augmentation = False #@param {type:"boolean"}
if do_data_augmentation:
preprocessing_model.add(
tf.keras.layers.RandomRotation(40))
preprocessing_model.add(
tf.keras.layers.RandomTranslation(0, 0.2))
preprocessing_model.add(
tf.keras.layers.RandomTranslation(0.2, 0))
# Like the old tf.keras.preprocessing.image.ImageDataGenerator(),
# image sizes are fixed when reading, and then a random zoom is applied.
# If all training inputs are larger than image_size, one could also use
# RandomCrop with a batch size of 1 and rebatch later.
preprocessing_model.add(
tf.keras.layers.RandomZoom(0.2, 0.2))
preprocessing_model.add(
tf.keras.layers.RandomFlip(mode="horizontal"))
train_ds = train_ds.map(lambda images, labels:
(preprocessing_model(images), labels))
val_ds = build_dataset("validation")
valid_size = val_ds.cardinality().numpy()
val_ds = val_ds.unbatch().batch(BATCH_SIZE)
val_ds = val_ds.map(lambda images, labels:
(normalization_layer(images), labels))
Explanation: Set up the Flowers dataset
Inputs are suitably resized for the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, esp. when fine-tuning.
End of explanation
do_fine_tuning = False #@param {type:"boolean"}
print("Building model with", model_handle)
model = tf.keras.Sequential([
# Explicitly define the input shape so the model can be properly
# loaded by the TFLiteConverter
tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
hub.KerasLayer(model_handle, trainable=do_fine_tuning),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(len(class_names),
kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,)+IMAGE_SIZE+(3,))
model.summary()
Explanation: Defining the model
All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.
For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
End of explanation
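If fine-tuning is wanted, the same architecture can be rebuilt with the feature extractor unfrozen; the sketch below is a hedged variant of the model definition above (the smaller learning rate is an illustrative choice commonly used so the pre-trained weights are not destroyed), not part of the original notebook.
```python
# Rebuild the model with the feature extractor trainable and compile with a smaller learning rate
fine_tune_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
    hub.KerasLayer(model_handle, trainable=True),   # unfreeze the feature extractor
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(len(class_names),
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
fine_tune_model.build((None,) + IMAGE_SIZE + (3,))
fine_tune_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.0005, momentum=0.9),  # illustrative lower LR
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
    metrics=['accuracy'])
```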
model.compile(
optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
metrics=['accuracy'])
steps_per_epoch = train_size // BATCH_SIZE
validation_steps = valid_size // BATCH_SIZE
hist = model.fit(
train_ds,
epochs=5, steps_per_epoch=steps_per_epoch,
validation_data=val_ds,
validation_steps=validation_steps).history
plt.figure()
plt.ylabel("Loss (training and validation)")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(hist["loss"])
plt.plot(hist["val_loss"])
plt.figure()
plt.ylabel("Accuracy (training and validation)")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(hist["accuracy"])
plt.plot(hist["val_accuracy"])
Explanation: Training the model
End of explanation
x, y = next(iter(val_ds))
image = x[0, :, :, :]
true_index = np.argmax(y[0])
plt.imshow(image)
plt.axis('off')
plt.show()
# Expand the validation image to (1, 224, 224, 3) before predicting the label
prediction_scores = model.predict(np.expand_dims(image, axis=0))
predicted_index = np.argmax(prediction_scores)
print("True label: " + class_names[true_index])
print("Predicted label: " + class_names[predicted_index])
Explanation: Try out the model on an image from the validation data:
End of explanation
saved_model_path = f"/tmp/saved_flowers_model_{model_name}"
tf.saved_model.save(model, saved_model_path)
Explanation: Finally, the trained model can be saved for deployment to TF Serving or TFLite (on mobile) as follows.
End of explanation
#@title Optimization settings
optimize_lite_model = False #@param {type:"boolean"}
#@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount.
num_calibration_examples = 60 #@param {type:"slider", min:0, max:1000, step:1}
representative_dataset = None
if optimize_lite_model and num_calibration_examples:
# Use a bounded number of training examples without labels for calibration.
# TFLiteConverter expects a list of input tensors, each with batch size 1.
representative_dataset = lambda: itertools.islice(
([image[None, ...]] for batch, _ in train_ds for image in batch),
num_calibration_examples)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
if optimize_lite_model:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
if representative_dataset: # This is optional, see above.
converter.representative_dataset = representative_dataset
lite_model_content = converter.convert()
with open(f"/tmp/lite_flowers_model_{model_name}.tflite", "wb") as f:
f.write(lite_model_content)
print("Wrote %sTFLite model of %d bytes." %
("optimized " if optimize_lite_model else "", len(lite_model_content)))
interpreter = tf.lite.Interpreter(model_content=lite_model_content)
# This little helper wraps the TFLite Interpreter as a numpy-to-numpy function.
def lite_model(images):
interpreter.allocate_tensors()
interpreter.set_tensor(interpreter.get_input_details()[0]['index'], images)
interpreter.invoke()
return interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
#@markdown For rapid experimentation, start with a moderate number of examples.
num_eval_examples = 50 #@param {type:"slider", min:0, max:700}
eval_dataset = ((image, label) # TFLite expects batch size 1.
for batch in train_ds
for (image, label) in zip(*batch))
count = 0
count_lite_tf_agree = 0
count_lite_correct = 0
for image, label in eval_dataset:
probs_lite = lite_model(image[None, ...])[0]
probs_tf = model(image[None, ...]).numpy()[0]
y_lite = np.argmax(probs_lite)
y_tf = np.argmax(probs_tf)
y_true = np.argmax(label)
count +=1
if y_lite == y_tf: count_lite_tf_agree += 1
if y_lite == y_true: count_lite_correct += 1
if count >= num_eval_examples: break
print("TFLite model agrees with original model on %d of %d examples (%g%%)." %
(count_lite_tf_agree, count, 100.0 * count_lite_tf_agree / count))
print("TFLite model is accurate on %d of %d examples (%g%%)." %
(count_lite_correct, count, 100.0 * count_lite_correct / count))
Explanation: Optional: Deployment to TensorFlow Lite
TensorFlow Lite lets you deploy TensorFlow models to mobile and IoT devices. The code below shows how to convert the trained model to TFLite and apply post-training tools from the TensorFlow Model Optimization Toolkit. Finally, it runs the converted model in the TFLite Interpreter to examine the resulting quality.
Converting without optimization provides the same results as before (up to roundoff error).
Converting with optimization without any data quantizes the model weights to 8 bits, but inference still uses floating-point computation for the neural network activations. This reduces model size almost by a factor of 4 and improves CPU latency on mobile devices.
On top, computation of the neural network activations can be quantized to 8-bit integers as well if a small reference dataset is provided to calibrate the quantization range. On a mobile device, this accelerates inference further and makes it possible to run on accelerators like Edge TPU.
End of explanation |
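The third option described above (quantizing activations with a small calibration set) can also be requested explicitly. A minimal sketch, reusing the saved_model_path and representative_dataset defined earlier; restricting the converter to int8-only kernels is an assumption about the deployment target, not something this notebook requires.
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset  # calibration data, defined above
# Only allow int8 kernels so that activations are quantized as well (assumed target: int8-only accelerators).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
int8_model_content = converter.convert()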
694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ml - Machine Learning et données cryptées - correction
Comment faire du machine learning avec des données cryptées ? Ce notebook propose d'en montrer un principe exposés CryptoNets
Step1: Principe
Voir l'énoncé.
Exercice 1
Step2: Si $a=577$, on cherche $a',k$ tel que $aa' - nk=1$.
Step3: Notes sur l'inverse de a
Si $n$ est premier alors $\mathbb{Z}/n\mathbb{Z}$ est un corps. Cela implique que tout nombre $a \neq 0$ a un inverse dans $\mathbb{Z}/n\mathbb{Z}$. Donc, $\forall a \neq 0, \exists a'$ tel que $aa'=1$. On va d'abord montrer que $\forall a \neq 0, \forall k \in \mathbb{N^*}, a^k \neq 0$. On procède par l'absurde en supposant que $\exists k > 0$ tel quel $a^k=0$. Cela signifie qu'il existe $v$ tel quel $a^k = vn$. Comme $n$ est premier, $a$ divise $v$ et on peut écrire que $a^k = wan \Rightarrow a(a^{k-1} - wn)=0$. Par récurrence, on peut montrer qu'il existe $z$ tel que $a = zn$ donc $a$ est un multiple de $n$ et c'est impossible car $a$ et $n$ sont premiers entre eux.
L'ensemble $A={a, a^2, a^3, ...}$ est à valeur dans $\mathbb{Z}/n\mathbb{Z}$ et est fini donc il existe nécessairement $i$ tel que $a^i \in A$. Il existe alors $k > 0$ tel que $a^i \equiv a^k \mod n$ et $u$ tel que $a^i = a^k + un$. On suppose d'abord que $i > k$, alors $a^k(a^{i-k} -1) = un$. Comme $n$ est premier, $a^{i-k} -1$ divise $n$ donc il existe $v$ tel que $a^{i-k}=un + 1$ donc $a^{i-k} \equiv 1 \mod n$. On note $a^{i-k-1} = a^{-1}$ l'inverse de $a$ dans $\mathbb{Z}/n\mathbb{Z}$. Si $k > i$, la même chose est vraie pour $a^{k-i}$. Si $i^=\arg\min{i \, | \, a^i \in A}$, $i^ \leqslant n-1$ car l'ensemble $A$ contient au plus $n-1$ éléments et $i^-k < n-1$. On note maintenant $j^ = \arg \min {j \, | \, a^j \equiv 1 \mod n}$. Donc ce cas, on peut montrer que $A = {1, a, ..., a^{j^-1}}$. $j^$ est l'[ordre](https
Step4: On considère seulement la fonction de décision brute car c'est une fonction qui peut-être calculée à partir d'additions et de multiplications. Pour la suite, nous aurons besoin d'un modèle qui fonctionne sur des variables normalisées avec MinMaxScaler. On supprime également le biais pour le remplacer par une colonne constante.
Step5: Exercice 3
Step6: Exercice 4
Step7: Notes
Les coefficients sont en clair mais les données sont cryptées. Pour crypter les coefficients du modèle, il faudrait pouvoir s'assurer que l'addition et la multiplication sont stables après le cryptage. Cela nécessite un cryptage différent comme Fully Homomorphic Encryption over the Integers. Les entiers cryptés sont dans l'intervalle [0, 10000], cela veut dire qu'il est préférable de crypter des entiers dans un intervalle équivalent sous peine de ne pouvoir décrypter avec certitude. Ceci implique que l'algorithme fasse des calculs qui restent dans cet intervalle. C'est pourquoi les entrées et les sorties prennent leur valeur dans l'intervalle [0, 100] afin que le produit coefficient x entrée reste dans l'intervalle considéré. Pour éviter ce problème, il faudrait décomposer chaque entier en une séquence d'entiers entre 0 et 100 et réécrire les opérations addition et multiplication en fonction.
Questions
Le cryptage choisi est moins efficace qu'un cryptage RSA qui conserve la multiplication. Il faudrait transformer l'écriture du modèle pour utiliser des multiplications plutôt que des additions. Si je vous disais qu'une des variables est l'âge d'une population, vous pourriez la retrouver. Il en est de même pour un chiffrage RSA qui change un entier en un autre. On peut crypter des éléments de ces entiers et les recomposer dans le monde crypté. C'est ce que propose d'autres type de cryptage. On peut aussi altérer les données en ajoutant un bruit aléatoire qui change peu la prédiction mais qui change la valeur cryptée. Dans ce cas, la distribution de chaque variable paraîtra uniforme.
On peut entraîner un modèle sur des données cryptées si on peut reproduire l'addition et la multiplication avec les nombres cryptés. Une option est le cryptage
Step8: Même distribution dans un ordre différent. Pour changer cette distribution, on ajoute un petit bruit peu important pour la variable numérique considérée mais qui sera cryptée de manière totalement différente. | Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.ml - Machine Learning et données cryptées - correction
Comment faire du machine learning avec des données cryptées ? Ce notebook propose d'en montrer un principe exposés CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy. Correction.
End of explanation
def compose(x, a, n):
return (a * x) % n
def crypt(x):
return compose(x, 577, 10000)
crypt(5), crypt(6)
crypt(5+6), (crypt(5) + crypt(6)) % 10000
crypt(6-5), (crypt(6) - crypt(5)) % 10000
crypt(5-6), (crypt(5) - crypt(6)) % 10000
Explanation: Principe
Voir l'énoncé.
Exercice 1 : écrire deux fonctions de cryptage, décryptage
Il faut bien choisir $n$, $a$ pour implémenter la fonction de cryptage :
$\varepsilon:\mathbb{N} \rightarrow \mathbb{Z}/n\mathbb{Z}$ et $\varepsilon(x) = (x * a) \mod n$. On vérifie ensuite qu'elle conserve l'addition au module $n$ près.
End of explanation
n = 10000
for k in range(2, n):
if (577*k) % n == 1:
ap = k
break
ap
def decrypt(x):
return compose(x, 2513, 10000)
decrypt(crypt(5)), decrypt(crypt(6))
decrypt(crypt(5)*67), decrypt(crypt(5*67))
Explanation: Si $a=577$, on cherche $a',k$ tel que $aa' - nk=1$.
End of explanation
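As a small aside on the brute-force loop above: Python 3.8 and later can compute the same modular inverse directly. A sketch, not part of the original notebook:
# Direct computation of the inverse of 577 modulo 10000 (requires Python >= 3.8).
a, n = 577, 10000
a_inv = pow(a, -1, n)          # uses the extended Euclidean algorithm internally
assert (a * a_inv) % n == 1
print(a_inv)                   # 2513, the constant hard-coded in decrypt() above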
from sklearn.datasets import load_diabetes
data = load_diabetes()
X = data.data
Y = data.target
from sklearn.linear_model import LinearRegression
clr = LinearRegression()
clr.fit(X, Y)
clr.predict(X[:1]), Y[0]
from sklearn.metrics import r2_score
r2_score(Y, clr.predict(X))
Explanation: Notes sur l'inverse de a
Si $n$ est premier alors $\mathbb{Z}/n\mathbb{Z}$ est un corps. Cela implique que tout nombre $a \neq 0$ a un inverse dans $\mathbb{Z}/n\mathbb{Z}$. Donc, $\forall a \neq 0, \exists a'$ tel que $aa'=1$. On va d'abord montrer que $\forall a \neq 0, \forall k \in \mathbb{N^*}, a^k \neq 0$. On procède par l'absurde en supposant que $\exists k > 0$ tel quel $a^k=0$. Cela signifie qu'il existe $v$ tel quel $a^k = vn$. Comme $n$ est premier, $a$ divise $v$ et on peut écrire que $a^k = wan \Rightarrow a(a^{k-1} - wn)=0$. Par récurrence, on peut montrer qu'il existe $z$ tel que $a = zn$ donc $a$ est un multiple de $n$ et c'est impossible car $a$ et $n$ sont premiers entre eux.
L'ensemble $A={a, a^2, a^3, ...}$ est à valeur dans $\mathbb{Z}/n\mathbb{Z}$ et est fini donc il existe nécessairement $i$ tel que $a^i \in A$. Il existe alors $k > 0$ tel que $a^i \equiv a^k \mod n$ et $u$ tel que $a^i = a^k + un$. On suppose d'abord que $i > k$, alors $a^k(a^{i-k} -1) = un$. Comme $n$ est premier, $a^{i-k} -1$ divise $n$ donc il existe $v$ tel que $a^{i-k}=un + 1$ donc $a^{i-k} \equiv 1 \mod n$. On note $a^{i-k-1} = a^{-1}$ l'inverse de $a$ dans $\mathbb{Z}/n\mathbb{Z}$. Si $k > i$, la même chose est vraie pour $a^{k-i}$. Si $i^=\arg\min{i \, | \, a^i \in A}$, $i^ \leqslant n-1$ car l'ensemble $A$ contient au plus $n-1$ éléments et $i^-k < n-1$. On note maintenant $j^ = \arg \min {j \, | \, a^j \equiv 1 \mod n}$. Donc ce cas, on peut montrer que $A = {1, a, ..., a^{j^-1}}$. $j^$ est l'[ordre](https://fr.wikipedia.org/wiki/Ordre_(th%C3%A9orie_des_groupes) du sous-groupe engendré par $a$.
Le théorème de Lagrange nous dit que cet ordre divise $n-1$ qui est l'ordre du groupe multiplicatif $\mathbb{Z}/n\mathbb{Z} \backslash {0}$. On peut donc écrire $n-1=kj^*$ avec $k \in \mathbb{N}$. Par conséquent, $a^{n-1} \equiv 1 \mod n$. Ce théorème en considérant les classes d'équivalence qui forment une partition de l'ensemble du groupe de départ.
Exercice 2 : Entraîner une régression linéaire
End of explanation
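Before moving on to Exercice 2, the inverse discussion above (which assumes a prime modulus, unlike the n = 10000 used in this notebook) can be illustrated numerically; the prime below is chosen only for the demo.
# Fermat's little theorem check with an illustrative prime p (p = 7919 is an assumption for this demo).
p, a = 7919, 577
assert pow(a, p - 1, p) == 1        # a^(p-1) = 1 mod p
a_inv_p = pow(a, p - 2, p)          # hence a^(p-2) is the inverse of a modulo p
assert (a * a_inv_p) % p == 1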
from sklearn.preprocessing import MinMaxScaler
import numpy
X_norm = numpy.hstack([MinMaxScaler((0, 100)).fit_transform(X),
numpy.ones((X.shape[0], 1))])
Y_norm = MinMaxScaler((0, 100)).fit_transform(Y.reshape(len(Y), 1)).ravel()
Y_norm.min(), Y_norm.max()
clr_norm = LinearRegression(fit_intercept=False)
clr_norm.fit(X_norm, Y_norm)
clr_norm.predict(X_norm[:1]), Y_norm[0]
from sklearn.metrics import r2_score
r2_score(Y_norm, clr_norm.predict(X_norm))
Explanation: On considère seulement la fonction de décision brute car c'est une fonction qui peut-être calculée à partir d'additions et de multiplications. Pour la suite, nous aurons besoin d'un modèle qui fonctionne sur des variables normalisées avec MinMaxScaler. On supprime également le biais pour le remplacer par une colonne constante.
End of explanation
def decision_linreg(xs, coef, bias):
s = bias
xs = xs.copy().ravel()
coef = coef.copy().ravel()
if xs.shape != coef.shape:
raise ValueError("Not the same dimension {0}!={1}".format(xs.shape, coef.shape))
for x, c in zip(xs, coef):
s += c * x
return s
list(X[0])[:5]
clr.predict(X[:1]), decision_linreg(X[:1], clr.coef_, clr.intercept_)
clr_norm.predict(X_norm[:1]), decision_linreg(X_norm[:1], clr_norm.coef_, clr_norm.intercept_)
Explanation: Exercice 3 : réécrire la fonction de prédiction pour une régression linéaire
La fonction est un produit scalaire.
End of explanation
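Since the explanation above notes that the decision function is just a dot product, the loop in decision_linreg can also be written as a single vectorized call; a small equivalent sketch:
import numpy
def decision_linreg_vec(xs, coef, bias):
    # same result as the loop-based decision_linreg defined above
    return float(numpy.dot(numpy.ravel(xs), numpy.ravel(coef)) + bias)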
coef_int = [int(i) for i in clr_norm.coef_ * 100]
coef_int
inter_int = int(clr_norm.intercept_ * 10000)
inter_int
import numpy
def decision_linreg_int(xs, coef):
s = 0
for x, c in zip(xs, coef):
s += c * x
return s % 10000
def decision_crypt_decrypt_linreg(xs, coef_int):
# On crypte les entrées
int_xs = [int(x) for x in xs.ravel()]
crypt_xs = [crypt(i) for i in int_xs]
# On applique la prédiction.
pred = decision_linreg_int(crypt_xs, coef_int)
# On décrypte.
dec = decrypt(pred % 10000)
return dec / 100
(decision_linreg(X_norm[:1], clr_norm.coef_, clr_norm.intercept_),
decision_crypt_decrypt_linreg(X_norm[0], coef_int))
p1s = []
p2s = []
for i in range(0, X_norm.shape[0]):
p1 = decision_linreg(X_norm[i:i+1], clr_norm.coef_, clr_norm.intercept_)
p2 = decision_crypt_decrypt_linreg(X_norm[i], coef_int)
if i < 5:
print(i, p1, p2)
p1s.append(p1)
p2s.append(p2)
import matplotlib.pyplot as plt
plt.plot(p1s, p2s, '.')
Explanation: Exercice 4 : assembler le tout
Prendre une observation, crypter, prédire, décrypter, comparer avec la version non cryptée. Il faudra sans doute un peu ruser car la fonction de cryptage s'applique à des entiers et le modèle de prédiction à des réels. On multiplie par 10000 les variables. Comme le cryptage que nous avons choisi ne conserve que l'addition, nous garderons les modèles en clair.
End of explanation
from numpy.random import poisson
X = poisson(size=10000)
mx = X.max()+1
X.min(), mx
from matplotlib import pyplot as plt
plt.hist(X, bins=mx, rwidth=0.9);
def crypt(x):
return compose(x, 5794, 10000)
import numpy
Xcrypt = numpy.array([crypt(x) for x in X])
Xcrypt[:10]
plt.hist(Xcrypt, bins=mx, rwidth=0.9);
Explanation: Notes
Les coefficients sont en clair mais les données sont cryptées. Pour crypter les coefficients du modèle, il faudrait pouvoir s'assurer que l'addition et la multiplication sont stables après le cryptage. Cela nécessite un cryptage différent comme Fully Homomorphic Encryption over the Integers. Les entiers cryptés sont dans l'intervalle [0, 10000], cela veut dire qu'il est préférable de crypter des entiers dans un intervalle équivalent sous peine de ne pouvoir décrypter avec certitude. Ceci implique que l'algorithme fasse des calculs qui restent dans cet intervalle. C'est pourquoi les entrées et les sorties prennent leur valeur dans l'intervalle [0, 100] afin que le produit coefficient x entrée reste dans l'intervalle considéré. Pour éviter ce problème, il faudrait décomposer chaque entier en une séquence d'entiers entre 0 et 100 et réécrire les opérations addition et multiplication en fonction.
Questions
Le cryptage choisi est moins efficace qu'un cryptage RSA qui conserve la multiplication. Il faudrait transformer l'écriture du modèle pour utiliser des multiplications plutôt que des additions. Si je vous disais qu'une des variables est l'âge d'une population, vous pourriez la retrouver. Il en est de même pour un chiffrage RSA qui change un entier en un autre. On peut crypter des éléments de ces entiers et les recomposer dans le monde crypté. C'est ce que propose d'autres type de cryptage. On peut aussi altérer les données en ajoutant un bruit aléatoire qui change peu la prédiction mais qui change la valeur cryptée. Dans ce cas, la distribution de chaque variable paraîtra uniforme.
On peut entraîner un modèle sur des données cryptées si on peut reproduire l'addition et la multiplication avec les nombres cryptés. Une option est le cryptage : Fully Homomorphic Encryption over the Integers. Cela implique qu'on peut approcher toute fonction par un polynôme (voir développement limité). Le gradient d'un polynôme est un polynôme également. Il est possible de calculer la norme du gradient crypté mais pas de la comparer à une autre valeur cryptées.
De ce fait les arbres de décision se prêtent mal à ce type d'apprentissage puisque chaque noeud de l'arbre consiste à comparer deux valeurs. Cependant, on peut s'en sortir en imposant à l'algorithme d'apprentissage d'un arbre de décision de ne s'appuyer sur des égalités. Cela nécessite plus de coefficients et la discrétisation des variables continues. Il reste une dernière chose à vérifier. Chaque noeud d'un arbre de décision est déterminé en maximisant une quantité. Comment trouver le maximum dans un ensemble de données cryptées qu'on ne peut comparer ? On utilise une propriété des normes :
$$\lim_{d \rightarrow \infty} (x^d + y^d)^{1/d} = \max(x, y)$$
Il existe d'autres options : Machine Learning Classification over Encrypted Data.
Ajouter du bruit sur une colonne
Les données peuvent être cryptées mais la distribution est inchangée à une permutation près. Pour éviter cela, on ajoute un peu de bruit, nous allons voir comment faire cela. On suppose que nous avons une colonne qui des entiers distribués selon une loi de Poisson.
End of explanation
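As a quick numerical aside on the norm property quoted above, $(x^d + y^d)^{1/d}$ does approach $\max(x, y)$ as $d$ grows; a tiny sketch:
# Illustration of (x**d + y**d)**(1/d) -> max(x, y) for growing d.
x, y = 3.0, 7.0
for d in (1, 2, 8, 32, 128):
    print(d, (x**d + y**d) ** (1.0 / d))   # values decrease towards max(x, y) = 7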
import random
Xbruit = numpy.array([100*x + random.randint(0,100) for x in X])
Xbruit[:10]
fix, ax = plt.subplots(1, 2, figsize=(12,4))
ax[0].hist(Xbruit, bins=mx, rwidth=0.9)
ax[1].hist(Xbruit, bins=mx*100);
Xbruitcrypt = numpy.array([crypt(x) for x in Xbruit])
fix, ax = plt.subplots(1, 2, figsize=(12,4))
ax[0].hist(Xbruitcrypt, bins=mx, rwidth=0.9)
ax[1].hist(Xbruitcrypt, bins=mx*100);
Explanation: Même distribution dans un ordre différent. Pour changer cette distribution, on ajoute un petit bruit peu important pour la variable numérique considérée mais qui sera cryptée de manière totalement différente.
End of explanation |
695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Note
Step2: Definition of the layers
So let us define the layers for the convolutional net. In general, layers are assembled in a list. Each element of the list is a tuple -- first a Lasagne layer, next a dictionary containing the arguments of the layer. We will explain the layer definitions in a moment, but in general, you should look them up in the Lasagne documentation.
Nolearn allows you to skip Lasagne's incoming keyword, which specifies how the layers are connected. Instead, nolearn will automatically assume that layers are connected in the order they appear in the list.
Note
Step3: Definition of the neural network
We now define the neural network itself. But before we do this, we want to add L2 regularization to the net (see here for more). This is achieved in the little helper function below. If you don't understand exactly what this is about, just ignore this.
Step4: Now we initialize nolearn's neural net itself. We will explain each argument shortly
Step5: Training the neural network
To train the net, we call its fit method with our X and y data, as we would with any scikit learn classifier.
Step6: As we set the verbosity to 1, nolearn will print some useful information for us
Step7: Train and validation loss progress
With nolearn's visualization tools, it is possible to get some further insights into the working of the CNN. First of all, we will simply plot the log loss of the training and validation data over each epoch, as shown below
Step8: This kind of visualization can be helpful in determining whether we want to continue training or not. For instance, here we see that both loss functions are still decreasing and that more training will pay off. This graph can also help determine if we are overfitting
Step9: As can be seen above, in our case, the results are not too interesting. If the weights just look like noise, we might have to do something (e.g. use more filters so that each can specialize better).
Visualizing the layers' activities
To see through the "eyes" of the net, we can plot the activities produced by different layers. The plot_conv_activity function is made for that. The first argument, again, is a layer, the second argument an image in the bc01 format (which is why we use X[0
Step10: Here we can see that depending on the learned filters, the neural net represents the image in different ways, which is what we should expect. If, e.g., some images were completely black, that could indicate that the corresponding filters have not learned anything useful. When you find yourself in such a situation, training longer or initializing the weights differently might do the trick.
Plot occlusion images
A possibility to check if the net, for instance, overfits or learns important features is to occlude part of the image. Then we can check whether the net still makes correct predictions. The idea behind that is the following
Step11: Here we see which parts of the number are most important for correct classification. We see that the critical parts are all directly above the numbers, so this seems to work out. For more complex images with different objects in the scene, this function should be more useful, though.
Finding a good architecture
This section tries to help you go deep with your convolutional neural net. To do so, one cannot simply increase the number of convolutional layers at will. It is important that the layers have a sufficiently high learning capacity while they should cover approximately 100% of the incoming image (Xudong Cao, 2015).
The usual approach is to try to go deep with convolutional layers. If you chain too many convolutional layers, though, the learning capacity of the layers falls too low. At this point, you have to add a max pooling layer. Use too many max pooling layers, and your image coverage grows larger than the image, which is clearly pointless. Striking the right balance while maximizing the depth of your layer is the final goal.
It is generally a good idea to use small filter sizes for your convolutional layers, generally <b>3x3</b>. The reason for this is that it allows you to cover the same receptive field of the image while using fewer parameters than would be required with a larger filter size. Moreover, deeper stacks of convolutional layers are more expressive (see here for more).
Step12: A shallow net
Let us try out a simple architecture and see how we fare.
Step13: To see information about the capacity and coverage of each layer, we need to set the verbosity of the net to a value of 2 and then initialize the net. We next pass the initialized net to PrintLayerInfo to see some useful information. By the way, we could also just call the fit method of the net to get the same outcome, but since we don't want to fit just now, we proceed as shown below.
Step14: This net is fine. The capacity never falls below 1/6, which would be 16.7%, and the coverage of the image never exceeds 100%. However, with only 4 convolutional layers, this net is not very deep and will probably not achieve the best possible results.
What we also see is the role of max pooling. If we look at 'maxpool2d1', after this layer, the capacity of the net is increased. Max pooling thus helps to increase capacity should it dip too low. However, max pooling also significantly increases the coverage of the image. So if we use max pooling too often, the coverage will quickly exceed 100% and we cannot go sufficiently deep.
Too little maxpooling
Now let us try an architecture that uses a lot of convolutional layers but only one maxpooling layer.
Step15: Here we have a very deep net but we have a problem
Step16: This net uses too much maxpooling for too small an image. The later layers, colored in cyan, would cover more than 100% of the image. So this network is clearly also suboptimal.
A good compromise
Now let us have a look at a reasonably deep architecture that satisfies the criteria we set out to meet
Step17: With 10 convolutional layers, this network is rather deep, given the small image size. Yet the learning capacity is always sufficiently large and never is more than 100% of the image covered. This could just be a good solution. Maybe you would like to give this architecture a spin?
Note 1 | Python Code:
import os
import matplotlib.pyplot as plt
%pylab inline
import numpy as np
from lasagne.layers import DenseLayer
from lasagne.layers import InputLayer
from lasagne.layers import DropoutLayer
from lasagne.layers import Conv2DLayer
from lasagne.layers import MaxPool2DLayer
from lasagne.nonlinearities import softmax
from lasagne.updates import adam
from lasagne.layers import get_all_params
from nolearn.lasagne import NeuralNet
from nolearn.lasagne import TrainSplit
from nolearn.lasagne import objective
Explanation: Tutorial: Training convolutional neural networks with nolearn
Author: Benjamin Bossan
This tutorial's goal is to teach you how to use nolearn to train convolutional neural networks (CNNs). The nolearn documentation can be found here. We assume that you have some general knowledge about machine learning in general or neural nets specifically, but want to learn more about convolutional neural networks and nolearn.
We well cover several points in this notebook.
How to load image data such that we can use it for our purpose. For this tutorial, we will use the MNIST data set, which consists of images of the numbers from 0 to 9.
How to properly define layers of the net. A good choice of layers, i.e. a good network architecture, is most important to get nice results out of a neural net.
The definition of the neural network itself. Here we define important hyper-parameters.
Next we will see how visualizations may help us to further refine the network.
Finally, we will show you how nolearn can help us find better architectures for our neural network.
Imports
End of explanation
def load_mnist(path):
X = []
y = []
with open(path, 'rb') as f:
next(f) # skip header
for line in f:
yi, xi = line.split(',', 1)
y.append(yi)
X.append(xi.split(','))
# Theano works with fp32 precision
X = np.array(X).astype(np.float32)
y = np.array(y).astype(np.int32)
# apply some very simple normalization to the data
X -= X.mean()
X /= X.std()
# For convolutional layers, the default shape of data is bc01,
# i.e. batch size x color channels x image dimension 1 x image dimension 2.
# Therefore, we reshape the X data to -1, 1, 28, 28.
X = X.reshape(
-1, # number of samples, -1 makes it so that this number is determined automatically
1, # 1 color channel, since images are only black and white
28, # first image dimension (vertical)
28, # second image dimension (horizontal)
)
return X, y
# here you should enter the path to your MNIST data
path = os.path.join(os.path.expanduser('~'), 'data/mnist/train.csv')
X, y = load_mnist(path)
figs, axes = plt.subplots(4, 4, figsize=(6, 6))
for i in range(4):
for j in range(4):
axes[i, j].imshow(-X[i + 4 * j].reshape(28, 28), cmap='gray', interpolation='none')
axes[i, j].set_xticks([])
axes[i, j].set_yticks([])
axes[i, j].set_title("Label: {}".format(y[i + 4 * j]))
axes[i, j].axis('off')
Explanation: Note: If your GPU supports it, you should try using lasagne.cuda_convnet.Conv2DCCLayer and lasagne.cuda_convnet.MaxPool2DCCLayer, which could give you a nice speed up.
Loading MNIST data
This little helper function loads the MNIST data available here.
End of explanation
layers0 = [
# layer dealing with the input data
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
# first stage of our convolutional layers
(Conv2DLayer, {'num_filters': 96, 'filter_size': 5}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(MaxPool2DLayer, {'pool_size': 2}),
# second stage of our convolutional layers
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(MaxPool2DLayer, {'pool_size': 2}),
# two dense layers with dropout
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
# the output layer
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
Explanation: Definition of the layers
So let us define the layers for the convolutional net. In general, layers are assembled in a list. Each element of the list is a tuple -- first a Lasagne layer, next a dictionary containing the arguments of the layer. We will explain the layer definitions in a moment, but in general, you should look them up in the Lasagne documentation.
Nolearn allows you to skip Lasagne's incoming keyword, which specifies how the layers are connected. Instead, nolearn will automatically assume that layers are connected in the order they appear in the list.
Note: Of course you can manually set the incoming parameter if your neural net's layers are connected differently. To do so, you have to give the corresponding layer a name (e.g. 'name': 'my layer') and use that name as a reference ('incoming': 'my layer').
The layers we use are the following:
InputLayer: We have to specify the shape of the data. For image data, it is batch size x color channels x image dimension 1 x image dimension 2 (aka bc01). Here you should generally just leave the batch size as None, so that it is taken care off automatically. The other dimensions are given by X.
Conv2DLayer: The most important keywords are num_filters and filter_size. The former indicates the number of channels -- the more you choose, the more different filters can be learned by the CNN. Generally, the first convolutional layers will learn simple features, such as edges, while deeper layers can learn more abstract features. Therefore, you should increase the number of filters the deeper you go. The filter_size is the size of the filter/kernel. The current consensus is to always use 3x3 filters, as these allow to cover the same number of image pixels with fewer parameters than larger filters do.
MaxPool2DLayer: This layer performs max pooling and hopefully provides translation invariance. We need to indicate the region over which it pools, with 2x2 being the default of most users.
DenseLayer: This is your vanilla fully-connected layer; you should indicate the number of 'neurons' with the num_units argument. The very last layer is assumed to be the output layer. We thus set the number of units to be the number of classes, 10, and choose softmax as the output nonlinearity, as we are dealing with a classification task.
DropoutLayer: Dropout is a common technique to regularize neural networks. It is almost always a good idea to include dropout between your dense layers.
Apart from these arguments, the Lasagne layers have very reasonable defaults concerning weight initialization, nonlinearities (rectified linear units), etc.
End of explanation
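The Note above mentions that a layer can be given a 'name' and wired explicitly through 'incoming'. A minimal illustrative sketch of such a list (not used anywhere else in this tutorial):
# Illustrative only: explicit names and 'incoming' references in a nolearn layer list.
layers_named = [
    (InputLayer, {'name': 'input', 'shape': (None, 1, 28, 28)}),
    (Conv2DLayer, {'name': 'conv1', 'incoming': 'input',
                   'num_filters': 32, 'filter_size': 3}),
    (DenseLayer, {'name': 'output', 'incoming': 'conv1',
                  'num_units': 10, 'nonlinearity': softmax}),
]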
def regularization_objective(layers, lambda1=0., lambda2=0., *args, **kwargs):
# default loss
losses = objective(layers, *args, **kwargs)
# get the layers' weights, but only those that should be regularized
# (i.e. not the biases)
weights = get_all_params(layers[-1], regularizable=True)
# sum of absolute weights for L1
sum_abs_weights = sum([abs(w).sum() for w in weights])
# sum of squared weights for L2
sum_squared_weights = sum([(w ** 2).sum() for w in weights])
# add weights to regular loss
losses += lambda1 * sum_abs_weights + lambda2 * sum_squared_weights
return losses
Explanation: Definition of the neural network
We now define the neural network itself. But before we do this, we want to add L2 regularization to the net (see here for more). This is achieved in the little helper function below. If you don't understand exactly what this is about, just ignore this.
End of explanation
net0 = NeuralNet(
layers=layers0,
max_epochs=10,
update=adam,
update_learning_rate=0.0002,
objective=regularization_objective,
objective_lambda2=0.0025,
train_split=TrainSplit(eval_size=0.25),
verbose=1,
)
Explanation: Now we initialize nolearn's neural net itself. We will explain each argument shortly:
* The most important argument is the layers argument, which should be the list of layers defined above.
* max_epochs is simply the number of epochs the net learns with each call to fit (an 'epoch' is a full training cycle using all training data).
* As update, we choose adam, which for many problems is a good first choice as updateing rule.
* The objective of our net will be the regularization_objective we just defined.
* To change the lambda2 parameter of our objective function, we set the objective_lambda2 parameter. The NeuralNetwork class will then automatically set this value. Usually, moderate L2 regularization is applied, whereas L1 regularization is less frequent.
* For 'adam', a small learning rate is best, so we set it with the update_learning_rate argument (nolearn will automatically interpret this argument to mean the learning_rate argument of the update parameter, i.e. adam in our case).
* The NeuralNet will hold out some of the training data for validation if we set the eval_size of the TrainSplit to a number greater than 0. This will allow us to monitor how well the net generalizes to yet unseen data. By setting this argument to 1/4, we tell the net to hold out 25% of the samples for validation.
* Finally, we set verbose to 1, which will result in the net giving us some useful information.
End of explanation
net0.fit(X, y)
Explanation: Training the neural network
To train the net, we call its fit method with our X and y data, as we would with any scikit learn classifier.
End of explanation
from nolearn.lasagne.visualize import plot_loss
from nolearn.lasagne.visualize import plot_conv_weights
from nolearn.lasagne.visualize import plot_conv_activity
from nolearn.lasagne.visualize import plot_occlusion
Explanation: As we set the verbosity to 1, nolearn will print some useful information for us:
First of all, some general information about the net and its layers is printed. Then, during training, the progress will be printed after each epoch.
The train loss is the loss/cost that the net tries to minimize. For this example, this is the log loss (cross entropy).
The valid loss is the loss for the hold out validation set. You should expect this value to indicate how well your model generalizes to yet unseen data.
train/val is simply the ratio of train loss to valid loss. If this value is very low, i.e. if the train loss is much better than your valid loss, it means that the net has probably overfitted the train data.
When we are dealing with a classification task, the accuracy score of the valdation set, valid acc, is also printed.
dur is simply the duration it took to process the given epoch.
In addition to this, nolearn will color the as of yet best train and valid loss, so that it is easy to spot whether the net makes progress.
Visualizations
Diagnosing what's wrong with your neural network if the results are unsatisfying can sometimes be difficult, something closer to an art than a science. But with nolearn's visualization tools, we should be able to get some insights that help us diagnose if something is wrong.
End of explanation
plot_loss(net0)
Explanation: Train and validation loss progress
With nolearn's visualization tools, it is possible to get some further insights into the working of the CNN. First of all, we will simply plot the log loss of the training and validation data over each epoch, as shown below:
End of explanation
plot_conv_weights(net0.layers_[1], figsize=(4, 4))
Explanation: This kind of visualization can be helpful in determining whether we want to continue training or not. For instance, here we see that both loss functions are still decreasing and that more training will pay off. This graph can also help determine if we are overfitting: If the train loss is much lower than the validation loss, we should probably do something to regularize the net.
Visualizing layer weights
We can further have a look at the weights learned by the net. The first argument of the function should be the layer we want to visualize. The layers can be accessed through the layers_ attribute and then by name (e.g. 'conv2dcc1') or by index, as below. (Obviously, visualizing the weights only makes sense for convolutional layers.)
End of explanation
x = X[0:1]
plot_conv_activity(net0.layers_[1], x)
Explanation: As can be seen above, in our case, the results are not too interesting. If the weights just look like noise, we might have to do something (e.g. use more filters so that each can specialize better).
Visualizing the layers' activities
To see through the "eyes" of the net, we can plot the activities produced by different layers. The plot_conv_activity function is made for that. The first argument, again, is a layer, the second argument an image in the bc01 format (which is why we use X[0:1] instead of just X[0]).
End of explanation
plot_occlusion(net0, X[:5], y[:5])
Explanation: Here we can see that depending on the learned filters, the neural net represents the image in different ways, which is what we should expect. If, e.g., some images were completely black, that could indicate that the corresponding filters have not learned anything useful. When you find yourself in such a situation, training longer or initializing the weights differently might do the trick.
Plot occlusion images
A possibility to check if the net, for instance, overfits or learns important features is to occlude part of the image. Then we can check whether the net still makes correct predictions. The idea behind that is the following: If the most critical part of an image is something like the head of a person, that is probably right. If it is instead a random part of the background, the net probably overfits (see here for more).
With the plot_occlusion function, we can check this. The first argument is the neural net, the second the X data, the third the y data. Be warned that this function can be quite slow for larger images.
End of explanation
from nolearn.lasagne import PrintLayerInfo
Explanation: Here we see which parts of the number are most important for correct classification. We see that the critical parts are all directly above the numbers, so this seems to work out. For more complex images with different objects in the scene, this function should be more useful, though.
Finding a good architecture
This section tries to help you go deep with your convolutional neural net. To do so, one cannot simply increase the number of convolutional layers at will. It is important that the layers have a sufficiently high learning capacity while they should cover approximately 100% of the incoming image (Xudong Cao, 2015).
The usual approach is to try to go deep with convolutional layers. If you chain too many convolutional layers, though, the learning capacity of the layers falls too low. At this point, you have to add a max pooling layer. Use too many max pooling layers, and your image coverage grows larger than the image, which is clearly pointless. Striking the right balance while maximizing the depth of your layer is the final goal.
It is generally a good idea to use small filter sizes for your convolutional layers, generally <b>3x3</b>. The reason for this is that it allows you to cover the same receptive field of the image while using fewer parameters than would be required with a larger filter size. Moreover, deeper stacks of convolutional layers are more expressive (see here for more).
End of explanation
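The capacity/coverage reasoning above can also be checked by hand with standard receptive-field arithmetic. A small sketch, independent of nolearn, so the numbers it prints are only an approximation of what PrintLayerInfo reports:
# Rough receptive-field bookkeeping for stride-1 convolutions and 2x2 max pooling.
def coverage(layer_spec, image_size=28):
    rf, jump = 1, 1
    for kind, size in layer_spec:          # kind is 'conv' or 'pool'
        rf += (size - 1) * jump            # receptive field grows by (kernel - 1) * jump
        if kind == 'pool':
            jump *= size                   # pooling multiplies the step between output units
    return rf, float(rf) / image_size
# illustrative stack: four 3x3 convs, a 2x2 pool, three 3x3 convs, another 2x2 pool
spec = [('conv', 3)] * 4 + [('pool', 2)] + [('conv', 3)] * 3 + [('pool', 2)]
print(coverage(spec))                      # (pixels covered, fraction of the 28x28 image)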
layers1 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net1 = NeuralNet(
layers=layers1,
update_learning_rate=0.01,
verbose=2,
)
Explanation: A shallow net
Let us try out a simple architecture and see how we fare.
End of explanation
net1.initialize()
layer_info = PrintLayerInfo()
layer_info(net1)
Explanation: To see information about the capacity and coverage of each layer, we need to set the verbosity of the net to a value of 2 and then initialize the net. We next pass the initialized net to PrintLayerInfo to see some useful information. By the way, we could also just call the fit method of the net to get the same outcome, but since we don't want to fit just now, we proceed as shown below.
End of explanation
layers2 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net2 = NeuralNet(
layers=layers2,
update_learning_rate=0.01,
verbose=2,
)
net2.initialize()
layer_info(net2)
Explanation: This net is fine. The capacity never falls below 1/6, which would be 16.7%, and the coverage of the image never exceeds 100%. However, with only 4 convolutional layers, this net is not very deep and will probably not achieve the best possible results.
What we also see is the role of max pooling. If we look at 'maxpool2d1', after this layer, the capacity of the net is increased. Max pooling thus helps to increase capacity should it dip too low. However, max pooling also significantly increases the coverage of the image. So if we use max pooling too often, the coverage will quickly exceed 100% and we cannot go sufficiently deep.
Too little maxpooling
Now let us try an architecture that uses a lot of convolutional layers but only one maxpooling layer.
End of explanation
layers3 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net3 = NeuralNet(
layers=layers3,
update_learning_rate=0.01,
verbose=2,
)
net3.initialize()
layer_info(net3)
Explanation: Here we have a very deep net but we have a problem: The lack of max pooling layers means that the capacity of the net dips below 16.7%. The corresponding layers are shown in magenta. We need to find a better solution.
Too much maxpooling
Here is an architecture with too much maxpooling. For illustrative purposes, we set the pad parameter to 1; without it, the image size would shrink below 0, at which point the code will raise an error.
End of explanation
layers4 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net4 = NeuralNet(
layers=layers4,
update_learning_rate=0.01,
verbose=2,
)
net4.initialize()
layer_info(net4)
Explanation: This net uses too much maxpooling for too small an image. The later layers, colored in cyan, would cover more than 100% of the image. So this network is clearly also suboptimal.
A good compromise
Now let us have a look at a reasonably deep architecture that satisfies the criteria we set out to meet:
End of explanation
net4.verbose = 3
layer_info(net4)
Explanation: With 10 convolutional layers, this network is rather deep, given the small image size. Yet the learning capacity is always sufficiently large and never is more than 100% of the image covered. This could just be a good solution. Maybe you would like to give this architecture a spin?
Note 1: The MNIST images typically don't cover the whole of the 28x28 image size. Therefore, an image coverage of less than 100% is probably very acceptable. For other image data sets such as CIFAR or ImageNet, it is recommended to cover the whole image.
Note 2: This analysis does not tell us how many feature maps (i.e. number of filters per convolutional layer) to use. Here we have to experiment with different values. Larger values mean that the network should learn more types of features but also increase the risk of overfitting (and may exceed the available memory). In general though, deeper layers (those farther down) are supposed to learn more complex features and should thus have more feature maps.
Even more information
It is possible to get more information by increasing the verbosity level beyond 2.
End of explanation |
696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mastr
Mastr is the new valuation and aggregation tool built on top of PALM, which makes it possible to describe portfolios and risk scenarios separately from the implementation of pricers and objects. PALM is used to distribute the load of both valuation and aggregation, removing bottlenecks and allowing the whole workflow to run in parallel in a scalable way.
One of the problems to solve with the current tools is that the development of pricers and objects is tightly coupled to the infrastructure itself. The curve object contains most of the logic needed to go from market data to evaluation logic.
Mastr includes a sandbox designed to implement pricers and objects in a simple way and to try out the execution of IDN scripts, a description format for portfolios and valuation scenarios.
In addition, Mastr includes a Python library that covers some common tasks in the implementation of pricers and objects, such as calendars, bootstrapping, interpolations...
Developing pricers
A pricer in Mastr is nothing more than a function. But first the sandbox has to be loaded
Step1: We are going to build a pricer for a bond that we will make up. It is a bond worth 0 as long as 1.5 year fractions have not passed since the signing date. Once that time is exceeded, the bond is worth 5% more than the strike price.
Step2: The best part is that we can now test it, write unit tests... From mastr we will import the object that counts year fractions.
Step3: But mastr is designed to run IDN, the portfolio and scenario description scripts.
Step4: The IDN interpreter is prepared to evaluate the portfolio with multiple scenarios. In this case, we are going to evaluate different time scenarios.
Step5: This new JSON file, which contains the data for the different scenarios, is just an additional argument for the sandbox.
Step6: Finally, we can use the plotting capabilities to analyse the results and the evolution, over the given times, of the value of this bond. | Python Code:
from mastr.idn.sandbox import Sandbox
sandbox = Sandbox()
Explanation: Mastr
Mastr is the new valuation and aggregation tool built on top of PALM, which makes it possible to describe portfolios and risk scenarios separately from the implementation of pricers and objects. PALM is used to distribute the load of both valuation and aggregation, removing bottlenecks and allowing the whole workflow to run in parallel in a scalable way.
One of the problems to solve with the current tools is that the development of pricers and objects is tightly coupled to the infrastructure itself. The curve object contains most of the logic needed to go from market data to evaluation logic.
Mastr includes a sandbox designed to implement pricers and objects in a simple way and to try out the execution of IDN scripts, a description format for portfolios and valuation scenarios.
In addition, Mastr includes a Python library that covers some common tasks in the implementation of pricers and objects, such as calendars, bootstrapping, interpolations...
Developing pricers
A pricer in Mastr is nothing more than a function. But first the sandbox has to be loaded
End of explanation
from datetime import date
def newbond(strike, signed, time, daycount):
signed = date(*map(int, signed.split('-')))
time = date(*map(int, time.split('-')))
yearfrac = daycount(signed, time)
if yearfrac > 1.5:
return (1 + 0.05) * strike
else:
return 0.0
Explanation: We are going to build a pricer for a bond that we will make up. It is a bond worth 0 as long as 1.5 year fractions have not passed since the signing date. Once that time is exceeded, the bond is worth 5% more than the strike price.
End of explanation
from mastr.bootstrapping.daycount import DayCounter
value = newbond(100, '2016-9-7', '2017-9-10', DayCounter('actual/360'))
print('2017-9-10', value)
value = newbond(100, '2016-9-7', '2018-9-10', DayCounter('actual/360'))
print('2018-9-10', value)
Explanation: The best part is that we can now test it, write unit tests... From mastr we will import the object that counts year fractions.
End of explanation
sandbox.add_pricer(newbond)
sandbox.add_object(DayCounter)
%cat data/script.json
with open('data/script.json') as f:
results = sandbox.eval(f.read())
print(results)
Explanation: But mastr is designed to run IDN, the portfolio and scenario description scripts.
End of explanation
%cat data/scriptdata.json
%cat data/data.json
Explanation: The IDN interpreter is prepared to evaluate the portfolio with multiple scenarios. In this case, we are going to evaluate different time scenarios.
End of explanation
with open('data/scriptdata.json') as f:
with open('data/data.json') as g:
results = sandbox.eval(f.read(), g.read())
print(results)
Explanation: This new JSON file, which contains the data for the different scenarios, is just an additional argument for the sandbox.
End of explanation
import matplotlib.pyplot as plt
import json
%matplotlib notebook
dates = [
date(2016,9,10),
date(2016,12,10),
date(2017,9,10),
date(2018,9,10),
date(2019,9,10)
]
fig1 = plt.figure(1)
ax = fig1.add_subplot(1,1,1)
ax.plot(dates, [r['eval1'] for r in results])
plt.setp(ax.get_xticklabels(), rotation=30)
Explanation: Finally, we can use the plotting capabilities to analyse the results and the evolution, over the given times, of the value of this bond.
End of explanation |
697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 1
Imports
Step1: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
I added a title and axis labels to give context. I also changed the x axis limit to save space. Then I took away the right and top borders and tick marks because they served no purpose on this graph.
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 1
Imports
End of explanation
import os
assert os.path.isfile('yearssn.dat')
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
data = np.loadtxt('yearssn.dat')
years = data[:,0]
ssc = data[:,1]
assert len(years)==315
assert years.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
End of explanation
plt.plot(years, ssc)
plt.ylim(0, 200)
plt.xlim(right = 2025)
plt.xlabel('Year')
plt.ylabel('Sunspot Count')
plt.title('Sunspot Count Since 1700')
axis = plt.gca()
axis.spines['top'].set_visible(False)
axis.spines['right'].set_visible(False)
axis.get_xaxis().tick_bottom()
axis.get_yaxis().tick_left()
plt.tight_layout()
assert True # leave for grading
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,6))
plt.subplot(2,2,1)
plt.plot(years, ssc)
plt.ylim(0,160)
plt.xlim(1700,1800)
plt.xlabel('Year')
plt.ylabel('Sunspot Count')
plt.title('Sunspot Count 1700-1800')
axis = plt.gca()
axis.spines['top'].set_visible(False)
axis.spines['right'].set_visible(False)
axis.get_xaxis().tick_bottom()
axis.get_yaxis().tick_left()
axis.set_aspect(0.5)
plt.tight_layout()
plt.subplot(2,2,2)
plt.plot(years, ssc)
plt.ylim(0,150)
plt.xlim(1800,1900)
plt.xlabel('Year')
plt.ylabel('Sunspot Count')
plt.title('Sunspot Count 1800-1900')
axis = plt.gca()
axis.spines['top'].set_visible(False)
axis.spines['right'].set_visible(False)
axis.get_xaxis().tick_bottom()
axis.get_yaxis().tick_left()
axis.set_aspect(0.5)
plt.tight_layout()
plt.subplot(2,2,3)
plt.plot(years, ssc)
plt.ylim()
plt.xlim(1900,2000)
plt.xlabel('Year')
plt.ylabel('Sunspot Count')
plt.title('Sunspot Count 1900-2000')
axis = plt.gca()
axis.spines['top'].set_visible(False)
axis.spines['right'].set_visible(False)
axis.get_xaxis().tick_bottom()
axis.get_yaxis().tick_left()
axis.set_aspect(0.5)
plt.tight_layout()
plt.subplot(2,2,4)
plt.plot(years, ssc)
plt.ylim(0,125)
plt.xlim(2000,2015)
plt.xlabel('Year')
plt.ylabel('Sunspot Count')
plt.title('Sunspot Count 2000-2015')
axis = plt.gca()
axis.spines['top'].set_visible(False)
axis.spines['right'].set_visible(False)
axis.get_xaxis().tick_bottom()
axis.get_yaxis().tick_left()
plt.xticks([2000,2005,2010,2015])
plt.tight_layout()
assert True # leave for grading
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
I added a title and axis labels to give context. I also changed the x axis limit to save space. Then I took away the right and top borders and tick marks because they served no purpose on this graph.
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
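The four subplot blocks above are nearly identical; a more compact sketch of the same figure (my own refactor, assuming the years and ssc arrays from earlier) loops over the century boundaries instead:
fig, axes = plt.subplots(2, 2, sharey=True, figsize=(10, 6))
bounds = [(1700, 1800), (1800, 1900), (1900, 2000), (2000, 2015)]
for ax, (lo, hi) in zip(axes.flat, bounds):
    # One century (or partial century) per panel, with the same spine/tick cleanup as above
    ax.plot(years, ssc)
    ax.set_xlim(lo, hi)
    ax.set_title('Sunspot Count {}-{}'.format(lo, hi))
    ax.set_xlabel('Year')
    ax.set_ylabel('Sunspot Count')
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.get_xaxis().tick_bottom()
    ax.get_yaxis().tick_left()
plt.tight_layout()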
698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise
Step5: Exercise
Step6: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step7: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step8: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step9: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step10: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step11: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step12: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step13: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step14: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step15: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step16: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
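A quick sanity check (my addition, not part of the exercise) that the integer IDs really start at 1, leaving 0 free for padding:
# The smallest ID should be 1 and the largest should equal the vocabulary size
print(min(vocab_to_int.values()), max(vocab_to_int.values()), len(vocab_to_int))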
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
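The cell above drops the empty review but not its label; a sketch of an alternative that filters the two together (my addition; if you use it, the features array later should be allocated with len(reviews_ints) rows):
# Keep only the indices of non-empty reviews, then filter reviews and labels in lockstep
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])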
np.array([1,2,3])[-2:]
seq_len = 200
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
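Two quick checks on the padded array before moving on (my addition, in the spirit of the asserts used earlier in the notebook):
assert features.shape[1] == seq_len   # every row is exactly 200 steps
print(features.shape)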
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int) + 1  # +1 because the word integers start at 1; index 0 is reserved for padding
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, seq_len, 200].
End of explanation
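A small sketch (my addition) to confirm the shape the lookup produces; with the placeholders above it should report [None, None, 300], i.e. [batch, time steps, embedding size]:
with graph.as_default():
    print(embed.get_shape().as_list())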
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
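A quick usage sketch (my addition) showing the shapes the generator yields with the split above:
# Grab one batch and check its shapes: 500 reviews of 200 steps each, plus 500 labels
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)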
epochs = 10
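# The notes for this cell ask you to make sure the checkpoints directory exists
# before saver.save() runs; create it here if needed (small addition, not in the
# original notebook).
import os
if not os.path.exists('checkpoints'):
    os.makedirs('checkpoints')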
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic Sinking
This is a classification task whose features include both categorical and continuous variables; the data is here: Kaggle link. The goal is to predict from the features whether a person survived the sinking of the Titanic. The data format is explained below:
survival  target column, whether the passenger survived, 1 means survived (0 = No; 1 = Yes)
pclass  passenger class (1 = 1st; 2 = 2nd; 3 = 3rd)
name  name
sex  sex
age  age
sibsp  number of siblings/spouses aboard
parch  number of parents/children aboard
ticket  ticket number
fare  ticket fare
cabin  cabin
embarked  port of embarkation
(C = Cherbourg; Q = Queenstown; S = Southampton)
Load and explore the data
Step1: Preview of the categorical features Pclass, Sex and Embarked
Besides these, Name, Ticket and Cabin are also categorical features. We leave them out for now; intuitively, a passenger's name does not seem to have much to do with surviving the accident.
Step2: Handling the continuous features
Age and Fare are continuous features. We inspect their distributions for missing and abnormal values; Age contains missing values, which we fill with the mean.
Step3: Feature engineering
Step4: Model training
Step5: Logistic regression
Step6: Accuracy on Kaggle after submission: 0.78469
Gaussian Naive Bayes
Step7: Accuracy on Kaggle after submission: 0.74163
Random forest
Step8: Accuracy on Kaggle after submission: 0.76555
Searching for the best parameters
# -*- coding: UTF-8 -*-
%matplotlib inline
import pandas as pd
import string
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
def substrings_in_string(big_string, substrings):
for substring in substrings:
if string.find(big_string, substring) != -1:
return substring
return np.nan
def replace_titles(x):
title=x['Title']
if title in ['Mr','Don', 'Major', 'Capt', 'Jonkheer', 'Rev', 'Col']:
return 'Mr'
elif title in ['Master']:
return 'Master'
elif title in ['Countess', 'Mme','Mrs']:
return 'Mrs'
elif title in ['Mlle', 'Ms','Miss']:
return 'Miss'
elif title =='Dr':
        if x['Sex']=='male':  # Sex values are lowercase ('male'/'female') in the dataset
return 'Mr'
else:
return 'Mrs'
elif title =='':
        if x['Sex']=='male':  # Sex values are lowercase ('male'/'female') in the dataset
return 'Master'
else:
return 'Miss'
else:
return title
title_list = ['Mrs', 'Mr', 'Master', 'Miss', 'Major', 'Rev',
'Dr', 'Ms', 'Mlle','Col', 'Capt', 'Mme', 'Countess',
'Don', 'Jonkheer']
label = train['Survived'] # target column
Explanation: Titanic Sinking
This is a classification task whose features include both categorical and continuous variables; the data is here: Kaggle link. The goal is to predict from the features whether a person survived the sinking of the Titanic. The data format is explained below:
survival  target column, whether the passenger survived, 1 means survived (0 = No; 1 = Yes)
pclass  passenger class (1 = 1st; 2 = 2nd; 3 = 3rd)
name  name
sex  sex
age  age
sibsp  number of siblings/spouses aboard
parch  number of parents/children aboard
ticket  ticket number
fare  ticket fare
cabin  cabin
embarked  port of embarkation
(C = Cherbourg; Q = Queenstown; S = Southampton)
Load and explore the data
End of explanation
# Next, let's take a quick look at each feature:
train.groupby(['Pclass'])['PassengerId'].count().plot(kind='bar')
train.groupby(['SibSp'])['PassengerId'].count().plot(kind='bar')
train.groupby(['Parch'])['PassengerId'].count().plot(kind='bar')
train.groupby(['Embarked'])['PassengerId'].count().plot(kind='bar')
train.groupby(['Sex'])['PassengerId'].count().plot(kind='bar')
Explanation: Preview of the categorical features Pclass, Sex and Embarked
Besides these, Name, Ticket and Cabin are also categorical features. We leave them out for now; intuitively, a passenger's name does not seem to have much to do with surviving the accident.
End of explanation
print 'Check for missing values:'
print train[train['Age'].isnull()]['Age'].head()
print train[train['Fare'].isnull()]['Fare'].head()
print train[train['SibSp'].isnull()]['SibSp'].head()
print train[train['Parch'].isnull()]['Parch'].head()
train['Age'] = train['Age'].fillna(train['Age'].mean())
print 'Check again after filling:'
print train[train['Age'].isnull()]['Age'].head()
print train[train['Fare'].isnull()]['Fare'].head()
print 'Check the test set for missing values:'
print test[test['Age'].isnull()]['Age'].head()
print test[test['Fare'].isnull()]['Fare'].head()
print test[test['SibSp'].isnull()]['SibSp'].head()
print test[test['Parch'].isnull()]['Parch'].head()
test['Age'] = test['Age'].fillna(test['Age'].mean())
test['Fare'] = test['Fare'].fillna(test['Fare'].mean())
print 'Check again after filling:'
print test[test['Age'].isnull()]['Age'].head()
print test[test['Fare'].isnull()]['Fare'].head()
# Process the Title feature
train['Title'] = train['Name'].map(lambda x: substrings_in_string(x, title_list))
test['Title'] = test['Name'].map(lambda x: substrings_in_string(x, title_list))
train['Title'] = train.apply(replace_titles, axis=1)
test['Title'] = test.apply(replace_titles, axis=1)
# Family features
train['Family_Size'] = train['SibSp'] + train['Parch']
train['Family'] = train['SibSp'] * train['Parch']
test['Family_Size'] = test['SibSp'] + test['Parch']
test['Family'] = test['SibSp'] * test['Parch']
train['AgeFill'] = train['Age']
mean_ages = np.zeros(4)
mean_ages[0] = np.average(train[train['Title'] == 'Miss']['Age'].dropna())
mean_ages[1] = np.average(train[train['Title'] == 'Mrs']['Age'].dropna())
mean_ages[2] = np.average(train[train['Title'] == 'Mr']['Age'].dropna())
mean_ages[3] = np.average(train[train['Title'] == 'Master']['Age'].dropna())
train.loc[ (train.Age.isnull()) & (train.Title == 'Miss') ,'AgeFill'] = mean_ages[0]
train.loc[ (train.Age.isnull()) & (train.Title == 'Mrs') ,'AgeFill'] = mean_ages[1]
train.loc[ (train.Age.isnull()) & (train.Title == 'Mr') ,'AgeFill'] = mean_ages[2]
train.loc[ (train.Age.isnull()) & (train.Title == 'Master') ,'AgeFill'] = mean_ages[3]
train['AgeCat'] = train['AgeFill']
train.loc[ (train.AgeFill<=10), 'AgeCat'] = 'child'
train.loc[ (train.AgeFill>60), 'AgeCat'] = 'aged'
train.loc[ (train.AgeFill>10) & (train.AgeFill <=30) ,'AgeCat'] = 'adult'
train.loc[ (train.AgeFill>30) & (train.AgeFill <=60) ,'AgeCat'] = 'senior'
train['Fare_Per_Person'] = train['Fare'] / (train['Family_Size'] + 1)
test['AgeFill'] = test['Age']
mean_ages = np.zeros(4)
mean_ages[0] = np.average(test[test['Title'] == 'Miss']['Age'].dropna())
mean_ages[1] = np.average(test[test['Title'] == 'Mrs']['Age'].dropna())
mean_ages[2] = np.average(test[test['Title'] == 'Mr']['Age'].dropna())
mean_ages[3] = np.average(test[test['Title'] == 'Master']['Age'].dropna())
test.loc[ (test.Age.isnull()) & (test.Title == 'Miss') ,'AgeFill'] = mean_ages[0]
test.loc[ (test.Age.isnull()) & (test.Title == 'Mrs') ,'AgeFill'] = mean_ages[1]
test.loc[ (test.Age.isnull()) & (test.Title == 'Mr') ,'AgeFill'] = mean_ages[2]
test.loc[ (test.Age.isnull()) & (test.Title == 'Master') ,'AgeFill'] = mean_ages[3]
test['AgeCat'] = test['AgeFill']
test.loc[ (test.AgeFill<=10), 'AgeCat'] = 'child'
test.loc[ (test.AgeFill>60), 'AgeCat'] = 'aged'
test.loc[ (test.AgeFill>10) & (test.AgeFill <=30) ,'AgeCat'] = 'adult'
test.loc[ (test.AgeFill>30) & (test.AgeFill <=60) ,'AgeCat'] = 'senior'
test['Fare_Per_Person'] = test['Fare'] / (test['Family_Size'] + 1)
train.Embarked = train.Embarked.fillna('S')
test.Embarked = test.Embarked.fillna('S')
train.loc[ train.Cabin.isnull() == True, 'Cabin'] = 0.2
train.loc[ train.Cabin.isnull() == False, 'Cabin'] = 1
test.loc[ test.Cabin.isnull() == True, 'Cabin'] = 0.2
test.loc[ test.Cabin.isnull() == False, 'Cabin'] = 1
#Age times class
train['AgeClass'] = train['AgeFill'] * train['Pclass']
train['ClassFare'] = train['Pclass'] * train['Fare_Per_Person']
train['HighLow'] = train['Pclass']
train.loc[ (train.Fare_Per_Person < 8) ,'HighLow'] = 'Low'
train.loc[ (train.Fare_Per_Person >= 8) ,'HighLow'] = 'High'
#Age times class
test['AgeClass'] = test['AgeFill'] * test['Pclass']
test['ClassFare'] = test['Pclass'] * test['Fare_Per_Person']
test['HighLow'] = test['Pclass']
test.loc[ (test.Fare_Per_Person < 8) ,'HighLow'] = 'Low'
test.loc[ (test.Fare_Per_Person >= 8) ,'HighLow'] = 'High'
print train.head(1)
# print test.head()
Explanation: Handling the continuous features
Age and Fare are continuous features. We inspect their distributions for missing and abnormal values; Age contains missing values, which we fill with the mean.
End of explanation
# Process the training set
Pclass = pd.get_dummies(train.Pclass)
Sex = pd.get_dummies(train.Sex)
Embarked = pd.get_dummies(train.Embarked)
Title = pd.get_dummies(train.Title)
AgeCat = pd.get_dummies(train.AgeCat)
HighLow = pd.get_dummies(train.HighLow)
train_data = pd.concat([Pclass, Sex, Embarked, Title, AgeCat, HighLow], axis=1)
train_data['Age'] = train['Age']
train_data['Fare'] = train['Fare']
train_data['SibSp'] = train['SibSp']
train_data['Parch'] = train['Parch']
train_data['Family_Size'] = train['Family_Size']
train_data['Family'] = train['Family']
train_data['AgeFill'] = train['AgeFill']
train_data['Fare_Per_Person'] = train['Fare_Per_Person']
train_data['Cabin'] = train['Cabin']
train_data['AgeClass'] = train['AgeClass']
train_data['ClassFare'] = train['ClassFare']
cols = ['Age', 'Fare', 'SibSp', 'Parch', 'Family_Size', 'Family', 'AgeFill', 'Fare_Per_Person', 'AgeClass', 'ClassFare']
train_data[cols] = train_data[cols].apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
print train_data.head()
# Process the test set
Pclass = pd.get_dummies(test.Pclass)
Sex = pd.get_dummies(test.Sex)
Embarked = pd.get_dummies(test.Embarked)
Title = pd.get_dummies(test.Title)
AgeCat = pd.get_dummies(test.AgeCat)
HighLow = pd.get_dummies(test.HighLow)
test_data = pd.concat([Pclass, Sex, Embarked, Title, AgeCat, HighLow], axis=1)
test_data['Age'] = test['Age']
test_data['Fare'] = test['Fare']
test_data['SibSp'] = test['SibSp']
test_data['Parch'] = test['Parch']
test_data['Family_Size'] = test['Family_Size']
test_data['Family'] = test['Family']
test_data['AgeFill'] = test['AgeFill']
test_data['Fare_Per_Person'] = test['Fare_Per_Person']
test_data['Cabin'] = test['Cabin']
test_data['AgeClass'] = test['AgeClass']
test_data['ClassFare'] = test['ClassFare']
test_data[cols] = test_data[cols].apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
print test_data.head()
Explanation: Feature engineering
End of explanation
from sklearn.linear_model import LogisticRegression as LR
from sklearn.cross_validation import cross_val_score
from sklearn.naive_bayes import GaussianNB as GNB
from sklearn.ensemble import RandomForestClassifier
import numpy as np
Explanation: Model training
End of explanation
model_lr = LR(penalty = 'l2', dual = True, random_state = 0)
model_lr.fit(train_data, label)
print "逻辑回归10折交叉验证得分: ", np.mean(cross_val_score(model_lr, train_data, label, cv=10, scoring='roc_auc'))
result = model_lr.predict( test_data )
output = pd.DataFrame( data={"PassengerId":test["PassengerId"], "Survived":result} )
output.to_csv( "lr.csv", index=False, quoting=3 )
Explanation: Logistic regression
End of explanation
model_GNB = GNB()
model_GNB.fit(train_data, label)
print "高斯贝叶斯分类器10折交叉验证得分: ", np.mean(cross_val_score(model_GNB, train_data, label, cv=10, scoring='roc_auc'))
result = model_GNB.predict( test_data )
output = pd.DataFrame( data={"PassengerId":test["PassengerId"], "Survived":result} )
output.to_csv( "gnb.csv", index=False, quoting=3 )
Explanation: Accuracy on Kaggle after submission: 0.78469
Gaussian Naive Bayes
End of explanation
forest = RandomForestClassifier( n_estimators=500, criterion='entropy', max_depth=5, min_samples_split=1,
min_samples_leaf=1, max_features='auto', bootstrap=False, oob_score=False, n_jobs=4,
verbose=0)
%time forest = forest.fit( train_data, label )
print "随机森林分类器10折交叉验证得分: ", np.mean(cross_val_score(forest, train_data, label, cv=10, scoring='roc_auc'))
result = forest.predict( test_data )
output = pd.DataFrame( data={"PassengerId":test["PassengerId"], "Survived":result} )
output.to_csv( "rf.csv", index=False, quoting=3 )
Explanation: Accuracy on Kaggle after submission: 0.74163
Random forest
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split,StratifiedShuffleSplit,StratifiedKFold
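# NOTE: param_grid below is left empty, so this grid search simply cross-validates the default forest defined above.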
param_grid = dict( )
pipeline=Pipeline([ ('clf', forest) ])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=3, scoring='accuracy',
cv=StratifiedShuffleSplit(label, n_iter=10, test_size=0.2, train_size=None)).fit(train_data, label)
print("Best score: %0.3f" % grid_search.best_score_)
Explanation: Accuracy on Kaggle after submission: 0.76555
Searching for the best parameters
End of explanation |
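The grid above is empty, so no parameters are actually tuned; a sketch of a populated grid one might try instead (the specific values are my own illustration, not from the original notebook):
# Parameters of the 'clf' pipeline step are addressed with the 'clf__' prefix
param_grid = dict(clf__n_estimators=[300, 500, 800],
                  clf__max_depth=[3, 5, 8],
                  clf__criterion=['gini', 'entropy'])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=3, scoring='accuracy',
                           cv=StratifiedShuffleSplit(label, n_iter=10, test_size=0.2, train_size=None)).fit(train_data, label)
print("Best parameters: %s" % grid_search.best_params_)
print("Best score: %0.3f" % grid_search.best_score_)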