Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (stringlengths 67 to 124k, ⌀) | cleaned_prompt (stringlengths 168 to 30.3k, ⌀)
---|---|---|
15,200 | <ASSISTANT_TASK:>
Python Code:
# Authors: Pierre Ablin <pierreablin@gmail.com>
#
# License: BSD-3-Clause
from time import time
import mne
from mne.preprocessing import ICA
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname).crop(0, 60).pick('meg').load_data()
reject = dict(mag=5e-12, grad=4000e-13)
raw.filter(1, 30, fir_design='firwin')
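# --- Added note (not part of the original script): the 1 Hz high-pass applied above matters
# for what follows, since ICA decompositions are sensitive to slow drifts in the data.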
def run_ica(method, fit_params=None):
ica = ICA(n_components=20, method=method, fit_params=fit_params,
max_iter='auto', random_state=0)
t0 = time()
ica.fit(raw, reject=reject)
fit_time = time() - t0
title = ('ICA decomposition using %s (took %.1fs)' % (method, fit_time))
ica.plot_components(title=title)
run_ica('fastica')
run_ica('picard')
run_ica('infomax')
run_ica('infomax', fit_params=dict(extended=True))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read and preprocess the data. Preprocessing consists of
Step2: Define a function that runs ICA on the raw MEG data and plots the components
Step3: FastICA
Step4: Picard
Step5: Infomax
Step6: Extended Infomax
|
15,201 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import textmining_blackboxes as tm
#see if package imported correctly
tm.icantbelieve("butter")
title_info=pd.read_csv('data/na-slave-narratives/data/toc.csv')
#this is the "metadata" of these files--we'll use today
#why does data appear twice in the filename?
title_info
title_info["Date"].str.replace("\-\?", "5")
title_info["Date"].str.replace("[^0-9]", "") #use regular expressions to clean up
title_info["Date"]=title_info["Date"].str.replace("\-\?", "5")
title_info["Date"]=title_info["Date"].str.replace("[^0-9]", "") # what assumptions have I made about the data?
title_info["Date"]=pd.to_datetime(title_info["Date"], coerce=True)
title_info["Date"]<pd.datetime(1800,1,1)
title_info[title_info["Date"]<pd.datetime(1800,1,1)]
#Let's use a brittle thing for reading in a directory of pure txt files.
our_texts=tm.readtextfiles('data/na-slave-narratives/data/texts')
#again, this is not a std python package
#returns a simple list of the document as very long strings
#note: if you want, the following notebook will work on any directory of text files.
our_texts=tm.data_cleanse(our_texts)
#more necessary when have messy text
#eliminate escaped characters
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
document_term_matrix=vectorizer.fit_transform(our_texts)
# now let's get our vocabulary--the names corresponding to the rows
vocab=vectorizer.get_feature_names_out()
len(vocab)
document_term_matrix.shape
vocab[1000:1100]
document_term_matrix_dense=document_term_matrix.toarray()
dtmdf=pd.DataFrame(document_term_matrix_dense, columns=vocab)
dtmdf
#easy to program, but let's use a robust version from sklearn!
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(document_term_matrix)
#Note here that the `cosine_similiary` can take
#an entire matrix as its argument
#what'd we get?
similarity
similarity.shape
similarity[100]
#this gives the similarity of row 100 to each of the other rows
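# --- Added aside (not in the original notebook): argsort on that row gives the indices of
# the documents most similar to document 100 (the last index is document 100 itself).
print(similarity[100].argsort()[-6:])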
term_document_matrix=document_term_matrix.T
# .T is the easy transposition method for a
# matrix in python's matrix packages.
# import a bunch of packages we need
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import ward, dendrogram
#distance is 1-similarity, so:
dist=1-cosine_similarity(term_document_matrix)
# ward is an algorithm for hierarchical clustering
linkage_matrix=ward(dist)
#plot dendrogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
vectorizer=TfidfVectorizer(min_df=.96, stop_words='english', use_idf=True)
#try a very high min_df
#rerun the model
document_term_matrix=vectorizer.fit_transform(our_texts)
vocab=vectorizer.get_feature_names_out()
#check the length of the vocab
len(vocab)
#switch again to the term_document_matrix
term_document_matrix=document_term_matrix.T
dist=1-cosine_similarity(term_document_matrix)
linkage_matrix=ward(dist)
#plot dendrogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IMPORTANT
Step2: Let's keep using the remarkable narratives available from Documenting the American South (http
Step3: back to boolean indexing!
Step4: For now, we'll play with the cool scientists and use the powerful and fast scikit learn package.
Step5: back to vectorizer from scikit learn
Step6: so document_term_matrix is a matrix with 294 rows--the documents--and 1650 columns--the vocabulary or terms or features
Step7: right now stored super efficiently as a sparse matrix
Step8: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
Step9: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
Step10: HOMEWORK EXERCISE
Step11: OMG U...G...L...Y!
|
15,202 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import time
from matplotlib import rcParams
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from six import iteritems
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate import CombinatorialOptimisation, FHMM
import nilmtk.utils
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
rcParams['figure.figsize'] = (13, 6)
train = DataSet('../datasets/REDD/low_freq.h5')
test = DataSet('../datasets/REDD/low_freq.h5')
building = 1
train.set_window(end="2011-04-30")
test.set_window(start="2011-04-30")
train_elec = train.buildings[1].elec
test_elec = test.buildings[1].elec
print(' TRAIN MAINS')
train_elec.mains().plot();
print(' TRAIN APPLIANCES')
train_elec.submeters().plot();
print(' TEST MAINS')
test_elec.mains().plot();
print(' TEST APPLIANCES')
test_elec.submeters().plot();
fridge_meter = train_elec['fridge']
fridge_df = next(fridge_meter.load())
fridge_df.head()
mains = train_elec.mains()
mains_df = next(mains.load())
mains_df.head()
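# --- Added check (not in the original notebook): the native sampling interval can be read
# off each index as the median gap between consecutive timestamps.
print("fridge:", fridge_df.index.to_series().diff().median())
print("mains:", mains_df.index.to_series().diff().median())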
top_5_train_elec = train_elec.submeters().select_top_k(k=5)
top_5_train_elec
def predict(clf, test_elec, sample_period, timezone):
pred = {}
gt= {}
# "ac_type" varies according to the dataset used.
# Make sure to use the correct ac_type before using the default parameters in this code.
for i, chunk in enumerate(test_elec.mains().load(physical_quantity = 'power', ac_type = 'apparent', sample_period=sample_period)):
chunk_drop_na = chunk.dropna()
pred[i] = clf.disaggregate_chunk(chunk_drop_na)
gt[i]={}
for meter in test_elec.submeters().meters:
# Only use the meters that we trained on (this saves time!)
gt[i][meter] = next(meter.load(physical_quantity = 'power', ac_type = 'active', sample_period=sample_period))
gt[i] = pd.DataFrame({k:v.squeeze() for k,v in iteritems(gt[i]) if len(v)}, index=next(iter(gt[i].values())).index).dropna()
# If everything can fit in memory
gt_overall = pd.concat(gt)
gt_overall.index = gt_overall.index.droplevel()
pred_overall = pd.concat(pred)
pred_overall.index = pred_overall.index.droplevel()
# Having the same order of columns
gt_overall = gt_overall[pred_overall.columns]
#Intersection of index
gt_index_utc = gt_overall.index.tz_convert("UTC")
pred_index_utc = pred_overall.index.tz_convert("UTC")
common_index_utc = gt_index_utc.intersection(pred_index_utc)
common_index_local = common_index_utc.tz_convert(timezone)
gt_overall = gt_overall.loc[common_index_local]
pred_overall = pred_overall.loc[common_index_local]
appliance_labels = [m for m in gt_overall.columns.values]
gt_overall.columns = appliance_labels
pred_overall.columns = appliance_labels
return gt_overall, pred_overall
classifiers = {'CO':CombinatorialOptimisation(), 'FHMM':FHMM()}
predictions = {}
sample_period = 120
for clf_name, clf in classifiers.items():
print("*"*20)
print(clf_name)
print("*" *20)
start = time.time()
# Note that we pass sample_period to downsample the data (120 s here).
# If instead of top_5 we wanted to train on all appliances, we would write
# clf.train(train_elec, sample_period=sample_period)
clf.train(top_5_train_elec, sample_period=sample_period)
end = time.time()
print("Runtime =", end-start, "seconds.")
gt, predictions[clf_name] = predict(clf, test_elec, sample_period, train.metadata['timezone'])
appliance_labels = [m.label() for m in gt.columns.values]
gt.columns = appliance_labels
predictions['CO'].columns = appliance_labels
predictions['FHMM'].columns = appliance_labels
gt.head()
predictions['CO'].head()
predictions['FHMM'].head()
predictions['CO']['Fridge'].head(300).plot(label="Pred")
gt['Fridge'].head(300).plot(label="GT")
plt.legend()
predictions['FHMM']['Fridge'].head(300).plot(label="Pred")
gt['Fridge'].head(300).plot(label="GT")
plt.legend()
? nilmtk.utils.compute_rmse
rmse = {}
for clf_name in classifiers.keys():
rmse[clf_name] = nilmtk.utils.compute_rmse(gt, predictions[clf_name])
rmse = pd.DataFrame(rmse)
rmse
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dividing data into train and test set
Step2: Let us use building 1 for demo purposes
Step3: Let's split data at April 30th
Step4: Visualizing the data
Step5: Test
Step6: The REDD data set has appliance-level data sampled every 3 or 4 seconds and mains data sampled every 1 second. Let us verify this.
Step7: Since both of these are sampled at different frequencies, we will downsample both to a common resolution (120 s in the code above). We will also select the top-5 appliances in terms of energy consumption and use them for training our FHMM and CO models.
Step8: Training and disaggregation
Step9: Train using 2 benchmarking algorithms - Combinatorial Optimisation (CO) and Factorial Hidden Markov Model (FHMM)
Step10: Using prettier labels!
Step11: Taking a look at the ground truth of top 5 appliance power consumption
Step12: Plotting the predictions against the actual usage
Step13: Comparing NILM algorithms (CO vs FHMM)
|
15,203 | <ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
"""
# TODO: Implement Function
scale_max = 255
scale_min = 0
return (x-scale_min)/(scale_max-scale_min)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit([0,1,2,3,4,5,6,7,8,9])
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return lb.transform(x)
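# --- Added check (not in the original notebook): quick sanity test of the encoder.
print(one_hot_encode([0, 9, 3]))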
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
filter_height = conv_ksize[0]
filter_width = conv_ksize[1]
in_depth = x_tensor.get_shape().as_list()[3]
out_depth = conv_num_outputs
weights = tf.Variable(tf.truncated_normal([filter_height, filter_width, in_depth, out_depth],
mean = 0.0, stddev = 1.0/out_depth))
biases = tf.Variable(tf.zeros(conv_num_outputs))
conv_layer_out = tf.nn.bias_add(tf.nn.conv2d(x_tensor, weights,
strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME'),
biases)
conv_layer_out = tf.nn.relu(conv_layer_out)
maxpooling_out = tf.nn.max_pool(conv_layer_out, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return maxpooling_out
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
x_dim = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor, [-1, x_dim[1]*x_dim[2]*x_dim[3]])
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
input_size = x_tensor.get_shape().as_list()[1]
output_size = num_outputs
weights = tf.Variable(tf.truncated_normal([input_size, output_size], mean = 0.0, stddev = 1.0 / input_size))
biases = tf.Variable(tf.zeros(num_outputs))
fc_out = tf.add(tf.matmul(x_tensor, weights), biases)
fc_out = tf.nn.relu(fc_out)
return fc_out
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
"""
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
input_size = x_tensor.get_shape().as_list()[1]
output_size = num_outputs
weights = tf.Variable(tf.truncated_normal([input_size, output_size], mean = 0.0, stddev = 1.0/input_size))
biases = tf.Variable(tf.zeros(num_outputs))
out = tf.add(tf.matmul(x_tensor, weights), biases)
return out
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = [16,32,64]
num_output = 10
conv_layer_1 = conv2d_maxpool(x, conv_num_outputs = conv[0], conv_ksize = [4, 4], conv_strides = [1, 1], pool_ksize = [2, 2], pool_strides = [2, 2])
dropout_layer_1 = tf.nn.dropout(conv_layer_1, keep_prob)
conv_layer_2 = conv2d_maxpool(dropout_layer_1, conv_num_outputs = conv[1], conv_ksize = [4, 4], conv_strides = [1, 1], pool_ksize = [2, 2], pool_strides = [2, 2])
dropout_layer_2 = tf.nn.dropout(conv_layer_2, keep_prob)
conv_layer_3 = conv2d_maxpool(dropout_layer_2, conv_num_outputs = conv[2], conv_ksize = [4, 4], conv_strides = [1, 1], pool_ksize = [2, 2], pool_strides = [2, 2])
dropout_layer_3 = tf.nn.dropout(conv_layer_3, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_layer = flatten(dropout_layer_3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc1 = fully_conn(flatten_layer, num_output)
fc1_drop = tf.nn.dropout(fc1, keep_prob)
fc2 = fully_conn(fc1_drop, num_output)
fc2_drop = tf.nn.dropout(fc2, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out_layer = output(fc2_drop, num_output)
# TODO: return output
return out_layer
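# --- Added note (not in the original): with 2x2 stride-2 pooling applied five times in total,
# the 32x32 input is reduced to a 1x1x256 tensor before the flatten layer, so 256 features
# feed the fully connected stack.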
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
return session.run(optimizer, feed_dict={x:feature_batch,y:label_batch,keep_prob:keep_probability})
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:>10.4f}, Validation Accuracy: {:.6f}'.format(loss, valid_acc))
# TODO: Tune Parameters
epochs = 100
batch_size = 256
keep_probability = 0.8
"""DON'T MODIFY ANYTHING IN THIS CELL"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""Test the saved model against the test dataset"""
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
15,204 | <ASSISTANT_TASK:>
Python Code:
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data (just MEG here for speed, though we could use MEG+EEG)
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Right Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# crop for speed in these examples
evoked.crop(0.05, 0.15)
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0., depth=0.8,
verbose=True)
snr = 3.0
lambda2 = 1.0 / snr ** 2
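# --- Added note (not in the original script): lambda2 is the regularization parameter passed
# to apply_inverse below; setting it to 1 / SNR**2 (here SNR = 3) is the usual MNE convention.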
kwargs = dict(initial_time=0.08, hemi='lh', subjects_dir=subjects_dir,
size=(600, 600), clim=dict(kind='percent', lims=[90, 95, 99]),
smoothing_steps=7)
stc = abs(apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True))
brain = stc.plot(figure=1, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
stc = abs(apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True))
brain = stc.plot(figure=2, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
stc = abs(apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True))
brain = stc.plot(figure=3, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
stc = abs(apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True))
brain = stc.plot(figure=4, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
del inv
inv = make_inverse_operator(evoked.info, fwd, cov, loose=1., depth=0.8,
verbose=True)
del fwd
stc = apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True)
brain = stc.plot(figure=5, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True)
brain = stc.plot(figure=6, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
stc = apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True)
brain = stc.plot(figure=7, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
stc = apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True,
method_params=dict(eps=1e-4)) # larger eps just for speed
brain = stc.plot(figure=8, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fixed orientation
Step2: Let's look at the current estimates using MNE. We'll take the absolute
Step3: Next let's use the default noise normalization, dSPM
Step4: And sLORETA
Step5: And finally eLORETA
Step6: Free orientation
Step7: Let's look at the current estimates using MNE. We'll take the absolute
Step8: Next let's use the default noise normalization, dSPM
Step9: sLORETA
Step10: And finally eLORETA
|
15,205 | <ASSISTANT_TASK:>
Python Code:
from ib_insync import *
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=14)
contract = Stock('TSLA', 'SMART', 'USD')
ib.reqHeadTimeStamp(contract, whatToShow='TRADES', useRTH=True)
bars = ib.reqHistoricalData(
contract,
endDateTime='',
durationStr='60 D',
barSizeSetting='1 hour',
whatToShow='TRADES',
useRTH=True,
formatDate=1)
bars[0]
df = util.df(bars)
display(df.head())
display(df.tail())
%matplotlib inline
df.plot(y='close');
util.barplot(bars[-100:], title=contract.symbol);
contract = Forex('EURUSD')
bars = ib.reqHistoricalData(
contract,
endDateTime='',
durationStr='900 S',
barSizeSetting='10 secs',
whatToShow='MIDPOINT',
useRTH=True,
formatDate=1,
keepUpToDate=True)
from IPython.display import display, clear_output
import matplotlib.pyplot as plt
def onBarUpdate(bars, hasNewBar):
plt.close()
plot = util.barplot(bars)
clear_output(wait=True)
display(plot)
bars.updateEvent += onBarUpdate
ib.sleep(10)
ib.cancelHistoricalData(bars)
def onBarUpdate(bars, hasNewBar):
print(bars[-1])
bars = ib.reqRealTimeBars(contract, 5, 'MIDPOINT', False)
bars.updateEvent += onBarUpdate
ib.sleep(30)
ib.cancelRealTimeBars(bars)
ib.disconnect()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Historical data
Step2: To request hourly data of the last 60 trading days
Step3: Convert the list of bars to a data frame and print the first and last rows
Step4: Instruct the notebook to draw plot graphics inline
Step5: Plot the close data
Step6: There is also a utility function to plot bars as a candlestick plot. It can accept either a DataFrame or a list of bars. Here it will print the last 100 bars
Step7: Historical data with realtime updates
Step8: Replot for every change of the last bar
Step9: Realtime bars
Step10: Then do the real request and connect the event handler,
Step11: let it run for half a minute and then cancel the realtime bars.
Step12: The advantage of reqRealTimeBars is that it behaves more robustly when the connection to the IB server farms is interrupted. After the connection is restored, the bars from the outage period will be backfilled and the live bars will resume.
|
15,206 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import json
from pandas.io.json import json_normalize
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
bank = pd.read_json('data/world_bank_projects.json')
bank.head()
bank.countryname.value_counts().head(10)
names = []
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
names.extend(list(json_normalize(namecode)['name']))
pd.Series(names).value_counts().head(10).drop('', axis=0)
codes_names = pd.DataFrame(columns=['code', 'name'])
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
codes_names = pd.concat([codes_names, json_normalize(namecode)])
codes_names_dict = (codes_names[codes_names.name != '']
.drop_duplicates()
.to_dict())
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
cell = json_normalize(namecode).replace('', np.nan)
cell = cell.fillna(codes_names_dict)
bank.set_value(i, 'mjtheme_namecode', cell.to_dict(orient='record'))
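# --- Added check (not in the original notebook): verify that no empty names remain after the
# fill above. This assumes each cell is now a list of {'code', 'name'} dicts, as set above.
still_empty = sum(d['name'] == '' for row in bank['mjtheme_namecode'] for d in row)
print(still_empty, "entries still have an empty name")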
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: imports for Python, Pandas
Step2: JSON example, with string
Step3: JSON example, with file
Step4: JSON exercise
Step5: 1. Find the 10 countries with most projects
Step6: 2. Find the top 10 major project themes (using column 'mjtheme_namecode')
Step7: 3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
|
15,207 | <ASSISTANT_TASK:>
Python Code:
import urllib.request
urllib.request.urlretrieve("http://www.amstat.org/publications/jse/datasets/cigarettes.dat.txt", "cigarettes.dat")
!wc -l cigarettes.dat
cat cigarettes.dat
import pandas as pd
df = pd.read_csv("cigarettes.dat", delim_whitespace=True, header=None,
names=["Marca", "Alquitrán", "Nicotina", "Peso", "Monóxido"])
df.head()
df["Clases"] = ['Rubio', 'Negro', 'Negro', 'Rubio', 'Rubio',
'Negro', 'Rubio', 'Rubio', 'Negro', 'Rubio',
'Rubio', 'Rubio', 'Rubio', 'Rubio', 'Rubio',
'Rubio', 'Negro', 'Rubio', 'Negro', 'Rubio',
'Negro', 'Rubio', 'Negro', 'Negro', 'Rubio']
df[["Clases", "Alquitrán", "Nicotina", "Peso", "Monóxido"]]
df.describe().transpose()
df.sem()
df.var()
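# --- Added aside (not in the original notebook): the coefficient of variation (standard
# deviation divided by the mean) makes the relative spread of the four variables comparable.
df.std(numeric_only=True) / df.mean(numeric_only=True)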
(df.describe(percentiles=[.05, .10, .25, .50, .75, .90, .95])
[["Monóxido", "Alquitrán", "Nicotina", "Peso"]]
.transpose()
[["5%", "10%", "25%", "50%", "75%", "90%", "95%"]])
df.median()
iqr = df.quantile(.75) - df.quantile(.25)
iqr
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use("fivethirtyeight")
plt.figure(figsize=(8, 8))
plt.subplot(2, 2, 1)
df.boxplot("Monóxido", return_type='axes')
plt.subplot(2, 2, 2)
df.boxplot("Alquitrán", return_type='axes')
plt.ylim(0, 35) # To make an outlying value visible
plt.subplot(2, 2, 3)
df.boxplot("Nicotina", return_type='axes')
plt.subplot(2, 2, 4)
df.boxplot("Peso", return_type='axes')
plt.ylim(0, 1.20) # Important
df[df["Alquitrán"] > df["Alquitrán"].quantile(.75) + 1.5 * iqr["Alquitrán"]]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's load the data into Python with pandas. pandas is a Python library for working comfortably with data tables (called DataFrames). At Pybonacci we wrote a pandas tutorial that goes from the very basics to somewhat more intermediate uses.
Step2: We will also add the type of cigarette so that the table matches the one presented in the course.
Step3: Part 1
Step4: We can also add the standard error of the mean and the variance
Step5: Therefore, answering the questions in the report
Step6: We also retrieve the median and the interquartile range
Step7: We observe a large variability in the tar and carbon monoxide contents, while the nicotine amounts are more stable and the weight of the cigarettes barely changes. The results are similar to those obtained by studying the mean and its dispersion.
Step8: Both carbon monoxide and weight show fairly symmetric distributions, whereas tar has a clear positive skew. The weight deserves special attention here, since a proper vertical scale is essential to avoid perceiving a spurious variability. Both the nicotine and the tar data show an outlying value each, which would discourage buying that brand of cigarettes.
|
15,208 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from pyunlocbox import functions, solvers
plt.rcParams['figure.figsize'] = (17, 5)
f1 = functions.norm_l2(y=[4, 5, 6, 7])
f2 = functions.dummy()
solver = solvers.forward_backward()
ret = solvers.solve([f1, f2], [0., 0, 0, 0], solver, atol=1e-5)
ret['sol']
plt.semilogy(np.array(ret['objective'])[:, 0], '.-');
class myfunc(functions.func):
def __init__(self, myparam=1, **kwargs):
self.myparam = myparam
super(myfunc, self).__init__(**kwargs)
def _eval(self, x):
return 0 # Function evaluated at x.
def _grad(self, x):
return x # Gradient evaluated at x, if available.
def _prox(self, x, T):
return x # Proximal operator evaluated at x, if available.
f = myfunc(myparam=2)
f.cap([0, 0])
# Your code here.
# Your code here.
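# --- Added sketch (not part of the original tutorial, which leaves the playground open): one
# possible experiment, reusing the pattern from section 1 but with an L1 prior instead of the
# dummy function.
f_data = functions.norm_l2(y=[4, 5, 6, 7])
f_prior = functions.norm_l1(lambda_=0.5)
ret2 = solvers.solve([f_data, f_prior], [0., 0, 0, 0], solvers.forward_backward(), atol=1e-5)
print(ret2['sol'])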
%pip install numpy
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 Solve an optimization problem
Step2: 2 Define your own objective function
Step3: Likewise, you can implement your own solvers and acceleration schemes by sub-classing pyunlocbox.solvers.solver and pyunlocbox.acceleration.accel.
Step4: 4 Playground
Step5: If you miss a package, you can install it with
|
15,209 | <ASSISTANT_TASK:>
Python Code:
# Init matplotlib
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (8, 8)
from mpl_toolkits.mplot3d import axes3d
import matplotlib.colors as colors
import numpy as np
import warnings
from scipy import optimize
def plot_contour_2d_solution_space(func,
fig=None,
ax=None,
show=True,
xmin=-np.ones(2),
xmax=np.ones(2),
xstar=None,
xvisited=None,
title=""):
"""
Plot points visited during the execution of an optimization algorithm.
TODO
"""
if (fig is None) or (ax is None): # TODO
fig, ax = plt.subplots(figsize=(12, 8))
if xvisited is not None:
xmin = np.amin(np.hstack([xmin.reshape([-1, 1]), xvisited]), axis=1)
xmax = np.amax(np.hstack([xmax.reshape([-1, 1]), xvisited]), axis=1)
x1_space = np.linspace(xmin[0], xmax[0], 200)
x2_space = np.linspace(xmin[1], xmax[1], 200)
x1_mesh, x2_mesh = np.meshgrid(x1_space, x2_space)
zz = func(np.array([x1_mesh.ravel(), x2_mesh.ravel()])).reshape(x1_mesh.shape)
############################
min_value = func(xstar)
max_value = zz.max()
levels = np.logspace(0.1, 3., 5) # TODO
im = ax.pcolormesh(x1_mesh, x2_mesh, zz,
vmin=0.1, # TODO
vmax=max_value,
norm=colors.LogNorm(), # TODO
shading='gouraud',
cmap='gnuplot2') # 'jet' # 'gnuplot2'
plt.colorbar(im, ax=ax)
cs = plt.contour(x1_mesh, x2_mesh, zz, levels,
linewidths=(2, 2, 2, 2, 3),
linestyles=('dotted', '-.', 'dashed', 'solid', 'solid'),
alpha=0.5,
colors='white')
ax.clabel(cs, inline=False, fontsize=12)
############################
if xvisited is not None:
ax.plot(xvisited[0],
xvisited[1],
'-og',
alpha=0.5,
label="$visited$")
############################
if xstar is not None:
sc = ax.scatter(xstar[0],
xstar[1],
c='red',
label="$x^*$")
sc.set_zorder(10) # put this point above every thing else
############################
ax.set_title(title)
ax.set_xlabel(r"$x_1$")
ax.set_ylabel(r"$x_2$")
ax.legend(fontsize=12)
if show:
plt.show()
return fig, ax
def plot_2d_solution_space(func,
fig=None,
ax=None,
show=True,
xmin=-np.ones(2),
xmax=np.ones(2),
xstar=None,
xvisited=None,
angle_view=None,
title=""):
"""
Plot points visited during the execution of an optimization algorithm.
TODO
"""
if fig is None or ax is None: # TODO
fig = plt.figure(figsize=(12, 8))
ax = axes3d.Axes3D(fig)
if angle_view is not None:
ax.view_init(angle_view[0], angle_view[1])
x1_space = np.linspace(xmin[0], xmax[0], 100)
x2_space = np.linspace(xmin[1], xmax[1], 100)
x1_mesh, x2_mesh = np.meshgrid(x1_space, x2_space)
zz = func(np.array([x1_mesh.ravel(), x2_mesh.ravel()])).reshape(x1_mesh.shape) # TODO
############################
surf = ax.plot_surface(x1_mesh,
x2_mesh,
zz,
cmap='gnuplot2', # 'jet' # 'gnuplot2'
norm=colors.LogNorm(), # TODO
rstride=1,
cstride=1,
#color='b',
shade=False)
ax.set_zlabel(r"$f(x_1, x_2)$")
fig.colorbar(surf, shrink=0.5, aspect=5)
############################
if xstar is not None:
ax.scatter(xstar[0],
xstar[1],
func(xstar),
#s=50, # TODO
c='red',
alpha=1,
label="$x^*$")
ax.set_title(title)
ax.set_xlabel(r"$x_1$")
ax.set_ylabel(r"$x_2$")
ax.legend(fontsize=12)
if show:
plt.show()
return fig, ax
class Sphere:
def __init__(self):
self.reset_log()
self.log = False
self.arg_min = np.zeros(2) # The actual optimal solution
def reset_log(self):
self.log_dict = {'x': [], 'y': []}
def __call__(self, x):
r"""The Sphere function.
The Sphere function is a famous **convex** function used to test the performance of optimization algorithms.
This function is very easy to optimize and can be used as a first test to check an optimization algorithm.
$$
f(\boldsymbol{x}) = \sum_{i=1}^{n} x_{i}^2
$$
Global minimum:
$$
f(\boldsymbol{0}) = 0
$$
Search domain:
$$
\boldsymbol{x} \in \mathbb{R}^n
$$
Example: single 2D point
------------------------
To evaluate $x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$:
>>> sphere( np.array([0, 0]) )
0.0
The result should be $f(x) = 0$.
Example: single 3D point
------------------------
To evaluate $x = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$:
>>> sphere( np.array([1, 1, 1]) )
3.0
The result should be $f(x) = 3.0$.
Example: multiple 2D points
---------------------------
To evaluate $x_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$,
$x_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and
$x_3 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$ at once:
>>> sphere( np.array([[0, 1, 2], [0, 1, 2]]) )
array([ 0., 2., 8.])
The result should be $f(x_1) = 0$, $f(x_2) = 1$ and $f(x_3) = 8$.
Parameters
----------
x : array_like
One dimension Numpy array of the point at which the Sphere function is to be computed
or a two dimension Numpy array of points at which the Sphere function is to be computed.
Returns
-------
float or array_like
The value(s) of the Sphere function for the given point(s) `x`.
"""
# Remark: `sum(x**2.0)` is equivalent to `np.sum(x**2.0, axis=0)` but only the latter works if x is a scallar (e.g. x = np.float(3)).
y = np.sum(x**2.0, axis=0)
if self.log is True:
self.log_dict['x'].append(x)
self.log_dict['y'].append(y)
return y
func = Sphere()
xmin = np.full(2, -10)
xmax = np.full(2, 10)
from scipy import optimize
bounds = [[-10, 10], [-10, 10]]
res = optimize.differential_evolution(func,
bounds, # The initial point
maxiter=100, # The number of DE iterations
polish=True)
print("x* =", res.x)
print("f(x*) =", res.fun)
print("Cause of the termination:", res.message)
print("Number of evaluations of the objective functions:", res.nfev)
print("Number of iterations performed by the optimizer:", res.nit)
print(res)
%%time
bounds = np.array([[-10, 10], [-10, 10]])
it_x_list = []
it_fx_list = []
def callback(xk, convergence):
it_x_list.append(xk)
it_fx_list.append(func(xk))
print(len(it_x_list), xk, it_fx_list[-1])
func.reset_log()
func.log = True
with warnings.catch_warnings():
warnings.simplefilter("ignore")
res = optimize.differential_evolution(func,
bounds, # The initial point
maxiter=100, # The number of DE iterations
callback=callback,
polish=False,
disp=False) # Print status messages
func.log = False
eval_x_array = np.array(func.log_dict['x']).T
eval_error_array = np.array(func.log_dict['y']) - func(func.arg_min)
it_x_array = np.array(it_x_list).T
it_error_array = np.array(it_fx_list) - func(func.arg_min)
print("x* =", res.x)
print("f(x*) =", res.fun)
print("Cause of the termination:", res.message)
print("Number of evaluations of the objective functions:", res.nfev)
print("Number of iterations performed by the optimizer:", res.nit)
plot_contour_2d_solution_space(func,
xmin=xmin,
xmax=xmax,
xstar=res.x,
xvisited=it_x_array,
title="Differential Evolution (best iterations solution only)");
plot_contour_2d_solution_space(func,
xmin=xmin,
xmax=xmax,
xstar=res.x,
xvisited=eval_x_array,
title="Differential Evolution (all tested solution)");
plt.loglog(it_error_array)
plt.title("Error of the best individual per iteration")
plt.xlabel("Iteration")
plt.ylabel("Error");
plt.loglog(eval_error_array)
plt.title("Error for all tested solutions")
plt.xlabel("Evaluation")
plt.ylabel("Error");
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Plot functions
Step5: Objective function
Step6: The "Differential Evolution" (DE) algorithm
Step7: Performances analysis
|
15,210 | <ASSISTANT_TASK:>
Python Code:
import re
import os
import random
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
print("Device:", tpu.master())
strategy = tf.distribute.TPUStrategy(tpu)
except:
strategy = tf.distribute.get_strategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
AUTOTUNE = tf.data.AUTOTUNE
BATCH_SIZE = 25 * strategy.num_replicas_in_sync
IMAGE_SIZE = [180, 180]
CLASS_NAMES = ["NORMAL", "PNEUMONIA"]
train_images = tf.data.TFRecordDataset(
"gs://download.tensorflow.org/data/ChestXRay2017/train/images.tfrec"
)
train_paths = tf.data.TFRecordDataset(
"gs://download.tensorflow.org/data/ChestXRay2017/train/paths.tfrec"
)
ds = tf.data.Dataset.zip((train_images, train_paths))
COUNT_NORMAL = len(
[
filename
for filename in train_paths
if "NORMAL" in filename.numpy().decode("utf-8")
]
)
print("Normal images count in training set: " + str(COUNT_NORMAL))
COUNT_PNEUMONIA = len(
[
filename
for filename in train_paths
if "PNEUMONIA" in filename.numpy().decode("utf-8")
]
)
print("Pneumonia images count in training set: " + str(COUNT_PNEUMONIA))
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, "/")
# The second to last is the class-directory
return parts[-2] == "PNEUMONIA"
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.image.decode_jpeg(img, channels=3)
# resize the image to the desired size.
return tf.image.resize(img, IMAGE_SIZE)
def process_path(image, path):
label = get_label(path)
# load the raw data from the file as a string
img = decode_img(image)
return img, label
ds = ds.map(process_path, num_parallel_calls=AUTOTUNE)
ds = ds.shuffle(10000)
train_ds = ds.take(4200)
val_ds = ds.skip(4200)
for image, label in train_ds.take(1):
print("Image shape: ", image.numpy().shape)
print("Label: ", label.numpy())
test_images = tf.data.TFRecordDataset(
"gs://download.tensorflow.org/data/ChestXRay2017/test/images.tfrec"
)
test_paths = tf.data.TFRecordDataset(
"gs://download.tensorflow.org/data/ChestXRay2017/test/paths.tfrec"
)
test_ds = tf.data.Dataset.zip((test_images, test_paths))
test_ds = test_ds.map(process_path, num_parallel_calls=AUTOTUNE)
test_ds = test_ds.batch(BATCH_SIZE)
def prepare_for_training(ds, cache=True):
# This is a small dataset, only load it once, and keep it in memory.
# use `.cache(filename)` to cache preprocessing work for datasets that don't
# fit in memory.
if cache:
if isinstance(cache, str):
ds = ds.cache(cache)
else:
ds = ds.cache()
ds = ds.batch(BATCH_SIZE)
# `prefetch` lets the dataset fetch batches in the background while the model
# is training.
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = prepare_for_training(train_ds)
val_ds = prepare_for_training(val_ds)
image_batch, label_batch = next(iter(train_ds))
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255)
if label_batch[n]:
plt.title("PNEUMONIA")
else:
plt.title("NORMAL")
plt.axis("off")
show_batch(image_batch.numpy(), label_batch.numpy())
from tensorflow import keras
from tensorflow.keras import layers
def conv_block(filters, inputs):
x = layers.SeparableConv2D(filters, 3, activation="relu", padding="same")(inputs)
x = layers.SeparableConv2D(filters, 3, activation="relu", padding="same")(x)
x = layers.BatchNormalization()(x)
outputs = layers.MaxPool2D()(x)
return outputs
def dense_block(units, dropout_rate, inputs):
x = layers.Dense(units, activation="relu")(inputs)
x = layers.BatchNormalization()(x)
outputs = layers.Dropout(dropout_rate)(x)
return outputs
def build_model():
inputs = keras.Input(shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3))
x = layers.Rescaling(1.0 / 255)(inputs)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
x = layers.MaxPool2D()(x)
x = conv_block(32, x)
x = conv_block(64, x)
x = conv_block(128, x)
x = layers.Dropout(0.2)(x)
x = conv_block(256, x)
x = layers.Dropout(0.2)(x)
x = layers.Flatten()(x)
x = dense_block(512, 0.7, x)
x = dense_block(128, 0.5, x)
x = dense_block(64, 0.3, x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
initial_bias = np.log([COUNT_PNEUMONIA / COUNT_NORMAL])
print("Initial bias: {:.5f}".format(initial_bias[0]))
TRAIN_IMG_COUNT = COUNT_NORMAL + COUNT_PNEUMONIA
weight_for_0 = (1 / COUNT_NORMAL) * (TRAIN_IMG_COUNT) / 2.0
weight_for_1 = (1 / COUNT_PNEUMONIA) * (TRAIN_IMG_COUNT) / 2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print("Weight for class 0: {:.2f}".format(weight_for_0))
print("Weight for class 1: {:.2f}".format(weight_for_1))
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("xray_model.h5", save_best_only=True)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=10, restore_best_weights=True
)
initial_learning_rate = 0.015
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
with strategy.scope():
model = build_model()
METRICS = [
tf.keras.metrics.BinaryAccuracy(),
tf.keras.metrics.Precision(name="precision"),
tf.keras.metrics.Recall(name="recall"),
]
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
loss="binary_crossentropy",
metrics=METRICS,
)
history = model.fit(
train_ds,
epochs=100,
validation_data=val_ds,
class_weight=class_weight,
callbacks=[checkpoint_cb, early_stopping_cb],
)
fig, ax = plt.subplots(1, 4, figsize=(20, 3))
ax = ax.ravel()
for i, met in enumerate(["precision", "recall", "binary_accuracy", "loss"]):
ax[i].plot(history.history[met])
ax[i].plot(history.history["val_" + met])
ax[i].set_title("Model {}".format(met))
ax[i].set_xlabel("epochs")
ax[i].set_ylabel(met)
ax[i].legend(["train", "val"])
model.evaluate(test_ds, return_dict=True)
for image, label in test_ds.take(1):
plt.imshow(image[0] / 255.0)
plt.title(CLASS_NAMES[label[0].numpy()])
prediction = model.predict(test_ds.take(1))[0]
scores = [1 - prediction, prediction]
for score, name in zip(scores, CLASS_NAMES):
print("This image is %.2f percent %s" % ((100 * score), name))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need a Google Cloud link to our data to load the data using a TPU.
Step2: Load the data
Step3: Let's count how many healthy/normal chest X-rays we have and how many
Step4: Notice that there are way more images that are classified as pneumonia than normal. This
Step5: Let's split the data into a training and validation datasets.
Step6: Let's visualize the shape of an (image, label) pair.
Step7: Load and format the test data as well.
Step8: Visualize the dataset
Step9: Call the next batch iteration of the training data.
Step10: Define the method to show the images in the batch.
Step11: As the method takes in NumPy arrays as its parameters, call the numpy function on the
Step12: Build the CNN
Step13: The following method will define the function to build our model for us.
Step14: Correct for data imbalance
Step15: The weight for class 0 (Normal) is a lot higher than the weight for class 1
Step16: We also want to tune our learning rate. Too high of a learning rate will cause the model
Step17: Fit the model
Step18: Visualizing model performance
Step19: We see that the accuracy for our model is around 95%.
Step20: We see that our accuracy on our test data is lower than the accuracy for our validating
|
15,211 | <ASSISTANT_TASK:>
Python Code:
Fe2O3_spectrum_dataframe = pd.read_pickle('Fe2O3_computational_spectrum.pkl')
Fe2O3_spectrum_dataframe
spectrum_energy = Fe2O3_spectrum_dataframe['x_axis_energy_55eV'].values[0]
spectrum_mu = Fe2O3_spectrum_dataframe['interp_spectrum_55eV'].values[0]
Fe2O3_XANES_object1 = XANES(spectrum_energy, spectrum_mu, absorption_specie='Fe', edge='K')
Fe2O3_object1_CenvPred = CenvPrediction(Fe2O3_XANES_object1, energy_reference='lowest', energy_range=45)
##Plot interpolated spectrum used in coordination environment prediction
plt.plot(Fe2O3_object1_CenvPred.interp_energy, Fe2O3_object1_CenvPred.interp_spectrum)
Fe2O3_object1_CenvPred.cenv_prediction()
print('Predicted coordination environment label: ', Fe2O3_object1_CenvPred.pred_cenv)
Fe2O3_XANES_object2 = XANES.from_K_XANES_MP_tsv('xas.XANES.K.Fe2O3.mp-24972.tsv')
Fe2O3_object2_CenvPred = CenvPrediction(Fe2O3_XANES_object2, energy_reference='lowest', energy_range=45)
##Plot interpolated spectrum used in coordination environment prediction
plt.plot(Fe2O3_object2_CenvPred.interp_energy, Fe2O3_object2_CenvPred.interp_spectrum)
Fe2O3_object2_CenvPred.cenv_prediction()
print('Predicted coordination environment label: ', Fe2O3_object2_CenvPred.pred_cenv)
Fe2O3_xdi_file = 'fe2o3_rt.xdi'
Fe2O3_xanes_xdi = XANES.from_XDI_file(Fe2O3_xdi_file, absorption_specie='Fe')
#Using the -15eV to 45eV range around the edge energy reference point for coordination environment prediction
Fe2O3_CenvPred_xdi = CenvPrediction(Fe2O3_xanes_xdi, 'E0', [-15, 45], Fe2O3_xanes_xdi.e0)
##Plot interpolated spectrum used in coordination environment prediction
plt.plot(Fe2O3_CenvPred_xdi.interp_energy, Fe2O3_CenvPred_xdi.interp_spectrum)
Fe2O3_CenvPred_xdi.cenv_prediction()
print('Predicted coordination environment label: ', Fe2O3_CenvPred_xdi.pred_cenv)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initiate XANES object from Materials Project website downloaded spectrum file (tsv)
Step2: Initiate XANES object from XAS Data Interchange (XDI) distribution
|
15,212 | <ASSISTANT_TASK:>
Python Code:
k0, m0 = 1.0, 1.0 # ideally, dimensional units...
w20 = k0/m0
w0 = np.sqrt(w20)
k1, k2 = 2*k0, 3*k0
m1, m2 = 2*m0, 4*m0
M = np.array(((m1, 0), ( 0, m2)))
K = np.array(((k1+k2, -k2), (-k2, k2)))
p = np.array(( 0.0, 1.0)); w = 2.0*w0
print_mat(M, pre='\\boldsymbol{M}=m\\,', fmt='%d')
print_mat(K, pre='\\boldsymbol{K}=k\\,', fmt='%d')
print_mat(p[:,None], pre=r'\boldsymbol{p}(t) = p_0\,', fmt='%d',
post='\\sin(%d\\omega_0t)'%w, mt='B')
evals, Psi = eigh(K, M)
Psi[:,0] /= Psi[0,0] ; Psi[:,1] /= Psi[1,1]
Mstar = Psi.T@M@Psi
Kstar = Psi.T@K@Psi
pstar = Psi.T@p
print_mat(evals[None,:], mt='p', pre=r'\omega^2_i=\omega^2_o\,')
print_mat(Psi, pre=r'\boldsymbol{\Psi}=')
print_mat(Mstar, pre=r'\boldsymbol{M}^\star=m\,')
print_mat(Kstar, pre=r'\boldsymbol{K}^\star=k\,')
print_mat(pstar[:,None], pre=r'\boldsymbol{p}^\star=p_o\,', mt='B')
L = np.sqrt(evals)
DAF = np.linalg.inv(Mstar)@(1.0/(L**2-w**2))
beta = w/L
t = np.linspace(0,50,1001)[:,None]
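# --- Added note (not in the original notebook): the trailing [:, None] makes t a column vector
# of shape (1001, 1), so broadcasting against L (shape (2,)) below yields a (1001, 2) array
# with one column of modal response per mode.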
q = pstar*DAF*(np.sin(w*t)-beta*np.sin(L*t))
plt.style.use(['fivethirtyeight', '00_mplrc'])
curves = plt.plot(t,q)
plt.legend(curves,['q1', 'q2'])
plt.title('Modal Response')
plt.xlabel('$\omega_0t$')
plt.ylabel('$q_i/\Delta_{st}$');
plt.tight_layout()
plt.savefig('../figures/modal_response.pdf')
x = (Psi@q.T).T
curves = plt.plot(t, x)
plt.legend(curves,['x1', 'x2'])
plt.title('Structural Response')
plt.xlabel('$\omega_0t$')
plt.ylabel('$X_i/\Delta_{st}$')
plt.tight_layout()
plt.savefig('../figures/structural_response.pdf')
plt.plot(t, (2*x[:,0]+4*x[:,1])/6, label='$(4x_2+2x_1)/6$')
plt.plot(t, (x[:,1]+x[:,0]) , label='$(x_2-x_1)$')
plt.title('Structural Response Tweaked')
plt.xlabel('$\omega_0t$')
plt.ylabel('$X_i/\Delta_{st}$')
plt.legend()
plt.tight_layout()
plt.savefig('../figures/tweaked.pdf')
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np ; from scipy.linalg import eigh
np.set_printoptions(suppress=False, linewidth=120)
from IPython.display import Latex
def print_mat(mat, pre='', post='', fmt='%.6f', mt='b'):
display(Latex(
r'$$' + pre + r'\begin{%smatrix}'%mt +
r'\\'.join('&'.join(fmt%x for x in row) for row in mat) +
r'\end{%smatrix}'%mt + post + r'$$'))
from IPython.display import HTML
HTML(open('00_custom.css').read())
import sympy as sy
sy.init_printing(use_latex=1)
o, m0, k0 = sy.symbols('omega^2 m k')
k1, k2, m1, m2 = 2*k0, 3*k0, 2*m0, 4*m0
sM = sy.Matrix(((m1,0,),(0,m2)))
sK = sy.Matrix(((k1+k2, -k2),(-k2,k2)))
KooM = sK - o*sM
display(KooM)
display(KooM.det())
display(KooM.det().expand())
iKooM = KooM.inv()
sp = sy.Matrix(((0,),(1,)))
a,b=(iKooM*sp)
display(sy.Eq(sy.symbols('D_1'),a),sy.Eq(sy.symbols('D_2'),b))
a = a.expand().simplify().subs({m0:1,k0:1})
b = b.expand().simplify().subs({m0:1,k0:1});
with plt.style.context(['classic', '00_mplrc']):
plot = sy.plot(a, b, (o, 0, 5), ylim=(-5,5), show=False)
plot[0].line_color = 'black'; plot[0].label = '$D_1$'
plot[1].line_color = 'red' ; plot[1].label = '$D_2$'
plot.xlabel = r'$\beta^2$'
plot.ylabel = r'$D_{i}$'
plot.legend = True
plot.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computing the eigenvalues and the eigenvectors
Step2: The @ operator stands, in this context, for matrix multiplication.
Step3: Modal Response
Step4: The definition of time vector is a bit complicated...
Step5: The following code cell (that is executed before any other code cell by the notebook) loads libraries (or functions from libraries) and determines the style of plots and of the notebook itself. Besides the cell defines a function to format conveniently matrices and vectors.
|
15,213 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import nengo
import numpy as np
import scipy.ndimage
import matplotlib.animation as animation
from matplotlib import pylab
from PIL import Image
import nengo.spa as spa
import cPickle
import random
from nengo_extras.data import load_mnist
from nengo_extras.vision import Gabor, Mask
#Encode categorical integer features using a one-hot aka one-of-K scheme.
def one_hot(labels, c=None):
assert labels.ndim == 1
n = labels.shape[0]
c = len(np.unique(labels)) if c is None else c
y = np.zeros((n, c))
y[np.arange(n), labels] = 1
return y
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1 # normalize to -1 to 1
X_test = 2 * X_test - 1 # normalize to -1 to 1
train_targets = one_hot(y_train, 10)
test_targets = one_hot(y_test, 10)
rng = np.random.RandomState(9)
# --- set up network parameters
#Want to encode and decode the image
n_vis = X_train.shape[1]
n_out = X_train.shape[1]
#number of neurons/dimensions of semantic pointer
n_hid = 1000 #Try with more neurons for more accuracy
#Want the encoding/decoding done on the training images
ens_params = dict(
eval_points=X_train,
neuron_type=nengo.LIF(), #Why not use LIF? originally used LIFRate()
intercepts=nengo.dists.Choice([-0.5]),
max_rates=nengo.dists.Choice([100]),
)
#Least-squares solver with L2 regularization.
solver = nengo.solvers.LstsqL2(reg=0.01)
#solver = nengo.solvers.LstsqL2(reg=0.0001)
solver2 = nengo.solvers.LstsqL2(reg=0.01)
#network that generates the weight matrices between neuron activity and images and the labels
with nengo.Network(seed=3) as model:
a = nengo.Ensemble(n_hid, n_vis, seed=3, **ens_params)
v = nengo.Node(size_in=n_out)
conn = nengo.Connection(
a, v, synapse=None,
eval_points=X_train, function=X_train,#want the same thing out (identity)
solver=solver)
v2 = nengo.Node(size_in=train_targets.shape[1])
conn2 = nengo.Connection(
a, v2, synapse=None,
eval_points=X_train, function=train_targets, #Want to get the labels out
solver=solver2)
# linear filter used for edge detection as encoders, more plausible for human visual system
encoders = Gabor().generate(n_hid, (11, 11), rng=rng)
encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)
#Set the ensembles encoders to this
a.encoders = encoders
#Check the encoders were correctly made
plt.imshow(encoders[0].reshape(28, 28), vmin=encoders[0].min(), vmax=encoders[0].max(), cmap='gray')
#Get the one hot labels for the images
def get_outs(sim, images):
#The activity of the neurons when an image is given as input
_, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images)
#The activity multiplied by the weight matrix (calculated in the network) to give the one-hot labels
return np.dot(acts, sim.data[conn2].weights.T)
#Check how many of the labels were produced correctly
#def get_error(sim, images, labels):
# return np.argmax(get_outs(sim, images), axis=1) != labels
#Get label of the images
#def get_labels(sim,images):
# return np.argmax(get_outs(sim, images), axis=1)
#Get the neuron activity of an image or group of images (this is the semantic pointer in this case)
def get_activities(sim, images):
_, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images)
return acts
#Get the representation of the image after it has gone through the encoders (Gabor filters) but before it is in the neurons
#This must be computed to create the weight matrix for rotation from neuron activity to this step
# This allows a recurrent connection to be made from the neurons to themselves later
def get_encoder_outputs(sim,images):
#Pass the images through the encoders
outs = np.dot(images,sim.data[a].encoders.T) #before the neurons
return outs
dim =28
#Scale an image
def scale(img, scale):
newImg = scipy.ndimage.interpolation.zoom(np.reshape(img, (dim,dim), 'F').T,scale,cval=-1)
#If its scaled up
if(scale >1):
newImg = newImg[len(newImg)/2-(dim/2):-(len(newImg)/2-(dim/2)),len(newImg)/2-(dim/2):-(len(newImg)/2-(dim/2))]
if len(newImg) >28:
newImg = newImg[:28,:28]
newImg = newImg.ravel()
else: #Scaled down
m = np.zeros((dim,dim))
m.fill(-1)
m[(dim-len(newImg))/2:(dim-len(newImg))/2+len(newImg),(dim-len(newImg))/2:(dim-len(newImg))/2+len(newImg)] = newImg
newImg = m
return newImg.ravel()
#Images to train, starting at random size
orig_imgs = X_train[:100000].copy()
for img in orig_imgs:
while True:
try:
img[:] = scale(img,random.uniform(0.5,1.5))
break
except:
img[:] = img
#Images scaled up a fixed amount from the original random scaling
scaled_up_imgs = orig_imgs.copy()
for img in scaled_up_imgs:
img[:] = scale(img,1.1)
#Images scaled down a fixed amount from the original random scaling
scaled_down_imgs = orig_imgs.copy()
for img in scaled_down_imgs:
img[:] = scale(img,0.9)
#Images not used for training, but for testing (all at random orientations)
test_imgs = X_test[:1000].copy()
for img in test_imgs:
img[:] = scipy.ndimage.interpolation.rotate(np.reshape(img,(28,28)),
(np.random.randint(360)),reshape=False,mode="nearest").ravel()
#Images not used for training, but for testing (all at random sizes)
test_imgs = X_test[:1000].copy()
for img in test_imgs:
while True:
try:
img[:] = scale(img,random.uniform(0.5,1.5))
break
except:
img[:] = img
#Check to make sure images were generated correctly
plt.subplot(131)
plt.imshow(np.reshape(orig_imgs[1],(28,28)), cmap='gray')
plt.subplot(132)
plt.imshow(np.reshape(scaled_up_imgs[1],(28,28)), cmap='gray')
plt.subplot(133)
plt.imshow(np.reshape(scaled_down_imgs[1],(28,28)), cmap='gray')
plt.show
with nengo.Simulator(model) as sim:
#Neuron activities of different mnist images
#The semantic pointers
orig_acts = get_activities(sim,orig_imgs)
scaled_up_acts = get_activities(sim,scaled_up_imgs)
scaled_down_acts = get_activities(sim,scaled_down_imgs)
test_acts = get_activities(sim,test_imgs)
X_test_acts = get_activities(sim,X_test)
labels_out = get_outs(sim,X_test)
scaled_up_after_encoders = get_encoder_outputs(sim,scaled_up_imgs)
scaled_down_after_encoders = get_encoder_outputs(sim,scaled_down_imgs)
#solvers for a learning rule
solver_scale_up = nengo.solvers.LstsqL2(reg=1e-8)
solver_scale_down = nengo.solvers.LstsqL2(reg=1e-8)
solver_word = nengo.solvers.LstsqL2(reg=1e-8)
solver_scaled_up_encoder = nengo.solvers.LstsqL2(reg=1e-8)
solver_scaled_down_encoder = nengo.solvers.LstsqL2(reg=1e-8)
#find weight matrix between neuron activity of the original image and the rotated image
#weights returns a tuple including information about learning process, just want the weight matrix
scale_up_weights,_ = solver_scale_up(orig_acts, scaled_up_acts)
scale_down_weights,_ = solver_scale_down(orig_acts, scaled_down_acts)
#find weight matrix between labels and neuron activity
label_weights,_ = solver_word(labels_out,X_test_acts)
scaled_up_after_encoder_weights,_ = solver_scaled_up_encoder(orig_acts,scaled_up_after_encoders)
scaled_down_after_encoder_weights,_ = solver_scaled_down_encoder(orig_acts,scaled_down_after_encoders)
#filename = "label_weights" + str(n_hid) +".p"
#cPickle.dump(label_weights, open( filename, "wb" ) )
filename = "activity_to_img_weights_scale" + str(n_hid) +".p"
cPickle.dump(sim.data[conn].weights.T, open( filename, "wb" ) )
filename = "scale_up_weights" + str(n_hid) +".p"
cPickle.dump(scale_up_weights, open( filename, "wb" ) )
filename = "scale_down_weights" + str(n_hid) +".p"
cPickle.dump(scale_down_weights, open( filename, "wb" ) )
filename = "scale_up_after_encoder_weights" + str(n_hid) +".p"
cPickle.dump(scaled_up_after_encoder_weights, open( filename, "wb" ) )
filename = "scale_down_after_encoder_weights" + str(n_hid) +".p"
cPickle.dump(scaled_down_after_encoder_weights, open( filename, "wb" ) )
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Represent each number using a one-hot where the index of the one represents the digit value
Step2: Load the MNIST training and testing images
Step3: The Network
Step4: Evaluating the network statically
Step5: Images
Step6: Simulator
Step7: Saving weight matrices
|
15,214 | <ASSISTANT_TASK:>
Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
%%capture
!pip3 install seaborn
#@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"]
model = hub.load(module_url)
print ("module %s loaded" % module_url)
def embed(input):
return model(input)
#@title Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
message_embeddings = embed(messages)
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
def run_and_plot(messages_):
message_embeddings_ = embed(messages_)
plot_similarity(messages_, message_embeddings_, 90)
messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
"Is paleo better than keto?",
# Asking about age
"How old are you?",
"what is your age?",
]
run_and_plot(messages)
import pandas
import scipy
import math
import csv
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = pandas.read_table(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"),
error_bad_lines=False,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
sts_test = pandas.read_table(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"),
error_bad_lines=False,
quoting=csv.QUOTE_NONE,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
# cleanup some NaN values in sts_dev
sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]]
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
def run_sts_benchmark(batch):
sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi
Returns the similarity scores
return scores
dev_scores = sts_data['sim'].tolist()
scores = []
for batch in np.array_split(sts_data, 10):
scores.extend(run_sts_benchmark(batch))
pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Universal Sentence Encoder
Step2: More detailed information about installing Tensorflow can be found at https
Step3: Semantic Textual Similarity Task Example
Step4: Similarity Visualized
Step5: Evaluation
Step7: Evaluate Sentence Embeddings
|
15,215 | <ASSISTANT_TASK:>
Python Code:
# Import the libraries we need
import pandas as pd
# Import the dataset from the CSV file
accidents_data_file = '/Users/robert.dempsey/Dropbox/Private/Art of Skill Hacking/' \
'Books/Python Business Intelligence Cookbook/Data/Stats19-Data1979-2004/Accidents7904.csv'
accidents = pd.read_csv(accidents_data_file,
sep=',',
header=0,
index_col=False,
parse_dates=['Date'],
dayfirst=True,
tupleize_cols=False,
error_bad_lines=True,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
accidents.head()
# Use the describe function to generate summary stats for the entire dataset
accidents.describe()
# Transpose the results provided by describe()
accidents.describe().transpose()
# By default describe() restricts the stats to numerical or categorical columns. Use the following to include object columns
accidents.describe(include=['object'])
# Show the mode of each column and transpose it so we can read everything in iPython Notebook
accidents.mode().transpose()
accidents['Weather_Conditions'].describe()
# Get the count of each unique value in the Date column.
pd.value_counts(accidents['Date'])
# Get the count of each unique value in the Date column.
print("Min Value: {}".format(accidents['Number_of_Vehicles'].min()))
print("Max Value: {}".format(accidents['Number_of_Vehicles'].max()))
accidents['Number_of_Vehicles'].quantile([.05, .1, .25, .5, .75, .9, .99])
# Mean: the average
# Median: the middle value
# Mode: the value that occurs most often
# Range: the difference between the minimum and maximum values
print("Mean: {}".format(accidents['Number_of_Vehicles'].mean()))
print("Median: {}".format(accidents['Number_of_Vehicles'].median()))
print("Mode: {}".format(accidents['Number_of_Vehicles'].mode()))
print("Range: {}".format(
range(accidents['Number_of_Vehicles'].min(),
accidents['Number_of_Vehicles'].max()
)
))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Summary Statistics for the Entire Dataset
Step2: Generate Summary Statistics for Object Type Columns
Step3: Get the Mode of the Entire Dataset
Step4: Generate Summary Statistics for a Single Column
Step5: Get a Count of Unique Values for a Single Column
Step6: Get the Minimum and Maximum of a Single Column
Step7: Generate Quantiles for a Single Column
Step8: Get the Mean, Median, Mode and Range for a Single Column
|
15,216 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erfc
def sInf(p, kD, b, x):
'''Return steady state head change due to fixed recharge p starting at t=0'''
return p / (2 * kD) * (b**2 - x**2)
def sDiff(p, x, b, S, kD, t):
'''Return difference between steady state and actual state'''
sign = -1
if isinstance(x, np.ndarray):
s = np.zeros_like(x)
elif isinstance(t, np.ndarray):
s = np.zeros_like(t)
for j in range(1, 20):
sign = -sign
j2 = 2 * j - 1
s += sign / j2**3 * np.cos(j2 * np.pi / (2 * b) * x) * np.exp(-((j2 * np.pi) / (2 * b))**2 * kD / S * t)
return 16 * p * b**2 /(np.pi**3 * kD) * s
L = 150 # m (strip width)
b =L/2 # [m] half width of strip
x = np.linspace(-L/2, L/2, 201) # points, taking left at zero.
kD = 600 # m2/d
S = 0.1 # [-]
a = 1.0 # m, sudden head change at x = -L/2
times = np.linspace(0, 2, 11) #
p = 0.01
plt.title("Head change in strip subjected to sudden change of recharge p")
plt.xlabel('x [m]')
plt.ylabel('s [m]')
plt.xlim((-b, b))
plt.grid()
for t in times:
s = sInf(p, kD, b, x) - sDiff(p, x, b, S, kD, t)
plt.plot(x, s, label='t={:.2f} d'.format(t))
plt.plot(x, sInf(p, kD, b, x ), 'k', lw=3, label='t=$\infty$')
plt.legend()
plt.show()
# simulate a time series with varying recharge.
# The recharge changes at switch times swt (may be irregularly spaced)
swt = [0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
# The absolute recharge values starting at swt
Pabs= np.array([0, 10, 0, -3, -1, 5, -2, 0, 4, 0]) * 1e-3
# The recharge changes, using Pabs[0] as the first change
P = np.hstack((Pabs[0], np.diff(Pabs)))
# Choose a point where the head time series is desired
x = 0.3 * b
# Choose times at which the head is to be computed (frequency is higher than that of the swt)
t = np.linspace(0, 5, 51)
# Initialize the time series with all zeros
stot = np.zeros_like(t)
plt.title("Simulated time series at x={:.1f} m".format(x))
plt.xlabel("time [d]")
plt.ylabel("head in time series")
plt.grid()
# Start uperposition
for p, st in zip(P, swt):
I = t >= st # These are the times after the current switch time
# heads for these times due to only this recharge change
s = sInf(p, kD, b, x) - sDiff(p, x, b, S, kD, t[I] - st)
# show it by way of illustration
plt.plot(t[I], s, label='p={:.2f} mm/d'.format(1e3 *p))
# superposition
stot[I] += s
# Plot the final result
plt.plot(t, stot, 'k', lw=3, label='heat time series x={:.1f} m'.format(x))
plt.legend(loc='right')
plt.show()
T = (2/np.pi) * b**2 * S / kD
print('characteristic system time T = {:.2f} d'.format(T))
print('halftime of the system = {:.2f} d'.format(np.log(2) * T))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It seems profitable to define a function that computes the sum
Step2: Discussion
|
15,217 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
# To illustrate the order of arguments
my_year = 2017
my_month = 1
my_day = 2
my_hour = 13
my_minute = 30
my_second = 15
# January 2nd, 2017
my_date = datetime(my_year,my_month, my_day)
# Defaults to 0:00
my_date
# January 2nd, 2017 at 13:30:15
my_date_time = datetime(my_year, my_month, my_day, my_hour, my_minute, my_second)
my_date_time
my_date.day
my_date_time.hour
# Create an example datetime list/array
first_two = [datetime(2016, 1, 1), datetime(2016, 1, 2)]
first_two
# Converted to an index
dt_ind = pd.DatetimeIndex(first_two)
dt_ind
# Attached to some random data
data = np.random.randn(2, 2)
print(data)
cols = ['A','B']
df = pd.DataFrame(data,dt_ind,cols)
df
df.index
# Latest Date Location
df.index.argmax()
df.index.max()
# Earliest Date Index Location
df.index.argmin()
df.index.min()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can grab any part of the datetime object you want
Step2: Pandas with Datetime Index
|
15,218 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
import fig_code
fig_code.plot_example_decision_tree()
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
from fig_code import visualize_tree, plot_tree_interactive
plot_tree_interactive(X, y);
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from ipywidgets import interact
interact(fit_randomized_tree, random_state=(0, 100));
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
from sklearn.ensemble import RandomForestRegressor
x = 10 * np.random.rand(100)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * np.random.randn(len(x))
return slow_oscillation + fast_oscillation + noise
y = model(x)
plt.errorbar(x, y, 0.3, fmt='o');
xfit = np.linspace(0, 10, 1000)
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
ytrue = model(xfit, 0)
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r');
plt.plot(xfit, ytrue, '-k', alpha=0.5);
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
from sklearn.model_selection import train_test_split
from sklearn import metrics
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
metrics.accuracy_score(ypred, ytest)
plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motivating Random Forests
Step2: The binary splitting makes this extremely efficient.
Step3: We have some convenience functions in the repository that help
Step4: Now using IPython's interact (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits
Step5: Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.
Step6: The details of the classifications are completely different! That is an indication of over-fitting
Step7: See how the details of the model change as a function of the sample, while the larger characteristics remain the same!
Step8: By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!
Step9: As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!
Step10: To remind us what we're looking at, we'll visualize the first few data points
Step11: We can quickly classify the digits using a decision tree as follows
Step12: We can check the accuracy of this classifier
Step13: and for good measure, plot the confusion matrix
|
15,219 | <ASSISTANT_TASK:>
Python Code:
import types
@types.coroutine
def gen():
yield 42
async def delegating():
await gen()
coro = delegating()
coro
coro.send(None)
# coro.send(None) # --> StopIteration
@types.coroutine
def gen123():
return (i for i in range(1, 4))
async def delegating():
await gen123()
coro = delegating()
coro.send(None)
coro.send(None)
coro.send(None)
# coro.send(None) # --> StopIteration
# coro.send(None) # --> RuntimeError
import types
@types.coroutine
def times10(terms):
n = yield 'Ready to begin!'
for _ in range(terms):
n = yield n * 10
return n * 10
async def delegating(terms):
res = await times10(terms)
return res
coro = delegating(3)
coro.send(None)
coro.send(5)
coro.send(6)
coro.send(7)
try:
coro.send(8)
except StopIteration as e:
res = e.value
res
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The driving code starts here
Step2: A slightly more interesting demo
Step3: Driving code
Step4: A generator-coroutine that receives values
Step5: Driving code must prime the coroutine by sending None initially
Step6: To retrieve the last result, we must catch StopIteration and get its value attribute
|
15,220 | <ASSISTANT_TASK:>
Python Code:
def load_frames(path):
frame_size = 220
frames = np.fromfile(path, dtype = 'uint8')
frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))
return frames
frames = load_frames('ATA_2021-09-18/ce5_frames.u8')
aos = [CE5_AOSFrame.parse(f) for f in frames]
collections.Counter([a.primary_header.transfer_frame_version_number for a in aos])
collections.Counter([a.primary_header.spacecraft_id for a in aos
if a.primary_header.transfer_frame_version_number == 1])
collections.Counter([a.primary_header.virtual_channel_id for a in aos
if a.primary_header.transfer_frame_version_number == 1
and a.primary_header.spacecraft_id == 108])
[a.primary_header for a in aos if a.primary_header.virtual_channel_id == 1][:10]
vc1 = [a for a in aos if a.primary_header.virtual_channel_id == 1]
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc1])
[a.insert_zone for a in aos[:10]]
t_vc1 = np.datetime64('2012-08-01') + np.timedelta64(1, 's') * np.array([a.insert_zone.timestamp for a in vc1])
t_vc1[0]
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(t_vc1, fc, '.')
plt.title("Chang'e 5 virtual channel 1 timestamps")
plt.xlabel('AOS frame timestamp')
plt.ylabel('AOS virtual channel frame counter');
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(t_vc1[1:], np.diff(fc)-1, '.')
plt.title("Chang'e 5 spacecraft 91 virtual channel 1 frame loss")
plt.xlabel('AOS frame timestamp')
plt.ylabel('Frame loss');
vc1_packets = list(ccsds.extract_space_packets(vc1, 108, 1, get_timestamps = True))
vc1_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p[0]) for p in vc1_packets]
vc1_apids = collections.Counter([p.APID for p in vc1_sp_headers])
vc1_apids
vc1_by_apid = {apid : [p for h,p in zip(vc1_sp_headers, vc1_packets)
if h.APID == apid] for apid in vc1_apids}
plot_apids(vc1_by_apid, 108, 1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: AOS frames
Step2: Virtual channel 1
Step3: We need to sort the data, since the different files we've loaded up are not in chronological order.
Step4: There are space packets in may APIDs. The contents of each APID are shown belown in plot form, but it's not easy to guess what any of the values mean.
|
15,221 | <ASSISTANT_TASK:>
Python Code:
try:
dot_product
except:
assert False
else:
assert True
import numpy as np
np.random.seed(56985)
x = np.random.random(48)
y = np.random.random(48)
np.testing.assert_allclose(14.012537210130272, dot_product(x, y))
x = np.random.random(48)
y = np.random.random(49)
assert dot_product(x, y) is None
try:
mv_multiply
except:
assert False
else:
assert True
import numpy as np
np.random.seed(487543)
A = np.random.random((92, 458))
v = np.random.random(458)
np.testing.assert_allclose(mv_multiply(A, v), np.dot(A, v))
import numpy as np
np.random.seed(49589)
A = np.random.random((83, 75))
v = np.random.random(83)
assert mv_multiply(A, v) is None
try:
mm_multiply
except:
assert False
else:
assert True
import numpy as np
np.random.seed(489547)
A = np.random.random((48, 683))
B = np.random.random((683, 58))
np.testing.assert_allclose(mm_multiply(A, B), A @ B)
A = np.random.random((359, 45))
B = np.random.random((83, 495))
assert mm_multiply(A, B) is None
import numpy as np
np.random.seed(466525)
A = np.random.random((58, 683))
B = np.random.random((683, 58))
np.testing.assert_allclose(mm_multiply(B, A), B @ A)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: B
Step2: C
|
15,222 | <ASSISTANT_TASK:>
Python Code:
__builtins__? # ipython help on object (module) __builtins__
__builtins__?? # should also show code if present (not built in)
help(__builtins__) # extended help (python)
help() # type keywords below to see keywords and quit to quit
#help('modules') # there is an error in my Ipython implementation
import sys
sys.executable
sys.platform
sys.version
sys.path
sys.modules
help('modules keywords') # There is an error in my python implemetation (TODO, reinstall)
import keyword
dir(keyword)
import sys
dir(sys)
dir()
__builtins__
dir(__builtins__)
dir('this is a string')
dir(42) # Integer (and the meaning of life)
dir([]) # List (an empty list, actually)
dir(()) # Tuple (also empty)
dir({}) # Dictionary (ditto)
dir(dir) # Function (functions are also objects)
class Person(object):
Person class.
def __init__(self, name, age):
self.name = name
self.age = age
def intro(self):
Return an introduction.
return "Hello, my name is {:s} and I'm {:d}.".format(self.name, self.age)
bob = Person("Robert", 35) # Create a Person instance
joe = Person("Joseph", 17) # Create another
joe.sport = "football" # Assign a new attribute to one instance
dir(Person) # Attributes of the Person class
dir(bob) # Attributes of bob
dir(joe) # Note that joe has an additional attribute
bob.intro() # Calling bob's intro method
dir(bob.intro) # Attributes of the intro method
print(__builtins__.__doc__) # Module docstring
Person.__doc__ # Class docstring
Person.intro.__doc__ # Class method docstring
print(dir.__doc__) # Function docstring
dir() # The dir() function
directory = dir # Create a new variable
directory() # Works just like the original object
dir.__name__ # What's your name?
directory.__name__ # My name is the same
__name__ # And now for something completely different
if __name__ == '__main__':
# Do something appropriate here, like calling a
# main() function defined elsewhere in this module.
main()
else:
# Do nothing. This module has been imported by another
# module that wants to make use of the functions,
# classes and other useful bits it has defined.
pass
import types
dir(types)
def s(x): return x**2
type(s)
if type(s) is types.FunctionType: print("s is a function")
type(42)
type([])
type({})
type(dir)
print(id.__doc__)
alist = [1, 2, 3]
blist = [1, 2, 3]
clist = blist
clist
blist
alist
id(alist)
id(blist)
id(clist)
alist is blist
blist is clist
clist.append(4)
clist
blist
alist
print(hasattr.__doc__)
print(getattr.__doc__)
hasattr(id, '__doc__')
print(getattr(id, '__doc__'))
print(callable.__doc__)
callable('a string')
callable(dir)
print(isinstance.__doc__)
isinstance(42, str)
isinstance('a string', int)
isinstance(42, int)
isinstance('a string', str)
print(issubclass.__doc__)
issubclass(float, complex)
class SuperHero(Person): # SuperHero inherits from Person...
def intro(self): # but with a new SuperHero intro
Return an introduction.
return "Hello, I'm SuperHero %s and I'm %s." % (self.name, self.age)
issubclass(SuperHero, Person)
issubclass(Person, SuperHero)
def interrogate(item):
Print useful information about item.
if hasattr(item, '__name__'):
print("NAME: ", item.__name__)
if hasattr(item, '__class__'):
print("CLASS: ", item.__class__.__name__)
print("ID: ", id(item))
print("TYPE: ", type(item))
print("VALUE: ", repr(item))
print("CALLABLE: ", end="")
if callable(item):
print("Yes")
else:
print("No")
if hasattr(item, '__doc__'):
doc = getattr(item, '__doc__')
doc = doc.strip() # Remove leading/trailing whitespace.
firstline = doc.split('\n')[0]
print("DOC: ", firstline)
print()
interrogate('a string') # String object
interrogate(43)
interrogate([1, 2, 'a'])
interrogate(('a', 'b', 3))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When we typed help(), we were greeted with a message and some instructions, followed by the help prompt. At the prompt, we entered keywords and were shown a list of Python keywords. Having gotten the answer to our question, we then quit the help utility, saw a brief farewell message, and were returned to the Python prompt.
Step2: The sys module
Step3: When we enter a line of code that consists of nothing more than the name of an object, Python responds by displaying a representation of the object, which, for simple objects, tends to be the value of the object. In this case, since the displayed value is enclosed in quotes, we get a clue that sys.executable is probably a string object. We'll look at other, more precise, ways to determine an object's type later, but simply typing the name of an object at the Python prompt is a quick and easy form of introspection.
Step4: The current Python version is available as a string, and as a tuple (a tuple contains a sequence of objects)
Step5: The argv variable is a list containing command line arguments, if any were specified. The first item, argv[0], is the path of the script that was run. When we run Python interactively this value is an empty string
Step6: The modules variable is a dictionary that maps module names to module objects for all the currently loaded modules. As you can see, Python loads certain modules by default
Step7: The keyword module
Step8: Here is a list of matching modules. Enter any module name to get more help.
Step9: The dir() function
Step10: And how about the sys module we looked at earlier?
Step11: Without any argument, dir() returns names in the current scope. Notice how keyword and sys appear in the list, since we imported them earlier. Importing a module adds the module's name to the current scope
Step12: So builtins appears to be a name in the current scope that's bound to the module object named builtin. (Since modules are not simple objects with single values, Python displays information about the module inside angle brackets instead.) Note that if you look for a builtin.py file on disk you'll come up empty-handed. This particular module object is created out of thin air by the Python interpreter, because it contains items that are always available to the interpreter. And while there is no physical file to look at, we can still apply our dir() function to this object to see all the built-in functions, error objects, and a few miscellaneous attributes that it contains
Step13: The dir() function works on all object types, including strings, integers, lists, tuples, dictionaries, functions, custom classes, class instances, and class methods. Let's apply dir() to a string object and see what Python returns. As you can see, even a simple Python string has a number of attributes
Step14: Try the following examples yourself to see what they return. Note that the # character marks the start of a comment. Everything from the start of the comment to the end of the line is ignored by Python
Step17: To illustrate the dynamic nature of Python's introspection capabilities, let's look at some examples using dir() on a custom class and some class instances. We're going to define our own class interactively, create some instances of the class, add a unique attribute to only one of the instances, and see if Python can keep all of this straight. Here are the results
Step18: Documentation strings
Step19: Once again, Python even maintains docstrings on classes and methods that are defined interactively in the Python shell. Let's look at the docstrings for our Person class and its intro method
Step20: Because docstrings provide such valuable information, many Python development environments have ways of automatically displaying the docstrings for objects. Let's look at one more docstring, for the dir() function
Step21: Interrogating Python objects
Step22: Modules have names, and the Python interpreter itself is considered the top-level, or main, module. When you run Python interactively the local name variable is assigned a value of 'main'. Likewise, when you execute a Python module from the command line, rather than importing it into another module, its name attribute is assigned a value of 'main', rather than the actual name of the module. In this way, modules can look at their own name value to determine for themselves how they are being used, whether as support for another program or as the main application executed from the command line. Thus, the following idiom is quite common in Python modules
Step23: Type
Step24: Types that are part of optional modules (e.g. array) are not listed.
Step25: Identity
Step26: Attributes
Step27: Callables
Step28: Instances
Step30: Subclasses
Step32: Interrogation time
|
15,223 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(777)
from skopt import gp_minimize
noise_level = 0.1
def obj_fun(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
res = gp_minimize(obj_fun, # the function to minimize
[(-2.0, 2.0)], # the bounds on each dimension of x
x0=[0.], # the starting point
acq_func="LCB", # the acquisition function (optional)
n_calls=15, # the number of evaluations of f including at x0
n_random_starts=0, # the number of random initialization points
random_state=777)
from skopt import dump, load
dump(res, 'result.pkl')
res_loaded = load('result.pkl')
res_loaded.fun
dump(res, 'result.gz', compress=9)
from os.path import getsize
print('Without compression: {} bytes'.format(getsize('result.pkl')))
print('Compressed with gz: {} bytes'.format(getsize('result.gz')))
dump(res, 'result_without_objective.pkl', store_objective=False)
res_loaded_without_objective = load('result_without_objective.pkl')
print('Loaded object: ', res_loaded_without_objective.specs['args'].keys())
print('Local variable:', res.specs['args'].keys())
del res.specs['args']['func']
dump(res, 'result_without_objective_2.pkl')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: As long as your Python session is active, you can access all the optimization results via the res object.
Step3: And load from file using skopt.load()
Step4: You can fine-tune the serialization and deserialization process by calling skopt.dump() and skopt.load() with additional keyword arguments. See the joblib documentation (dump and load) for the additional parameters.
Step5: Unserializable objective functions
Step6: Notice that the entry 'func' is absent in the loaded object but is still present in the local variable
Step7: Possible problems
|
15,224 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import subprocess
import os
from os.path import join as opj
import re
import nibabel as nib
# paths = np.genfromtxt('/home1/varunk/results_again_again/anat_file_paths.txt', dtype='str') #Didn't work
# paths = np.genfromtxt('/home1/varunk/results_again_again/skullstrip_anat.txt', dtype='str')
paths = np.genfromtxt('/home1/varunk/results_again_again/bet_resample_reg_brain_paths.txt', dtype='str')
# paths = np.genfromtxt('/home1/varunk/results_again_again/bet_resample_brain_paths.txt', dtype='str')
paths = np.sort(paths)
save_destination = '/home1/varunk/results_again_again/registered_anat_qc/'
if not os.path.exists(save_destination):
os.mkdir(save_destination)
os.chdir(save_destination)
# print(np.sort(paths)[0:10])
paths
# paths = paths[0:3]
res_voxels = []
for path in paths:
base_dir = '/home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg'
anat_path = base_dir + path[1:]
print(anat_path)
sub_id_extracted = re.search('.+_subject_id_(\d+)', anat_path).group(1)
# out_file = 'sub_' + sub_id_extracted + '_residual'
res_file = opj(os.getcwd(),out_file)
proc = subprocess.Popen(['fslstats', anat_path, '-V'],
stdout=subprocess.PIPE)
voxels = int((proc.communicate()[0]).decode("utf-8").split(' ')[0])
res_voxels.append(voxels)
res_voxels
%matplotlib inline
import matplotlib.pyplot as plt
bins = np.arange(min(res_voxels), max(res_voxels), 500) # fixed bin size
res = plt.hist(res_voxels,
bins=bins,
alpha=0.5,
color='#887E43',
label='Defender')
np.mean(res_voxels),np.std(res_voxels)
#number_of_outliers =
(res_voxels > 10.0*np.std(res_voxels)).sum()
paths[np.where((res_voxels > 20.0*np.std(res_voxels)) == True)]
np.mean(res_voxels),np.std(res_voxels)
#number_of_outliers =
s = 3
(res_voxels > np.mean(res_voxels) + s*np.std(res_voxels)).sum(), (res_voxels < np.mean(res_voxels) - s*np.std(res_voxels)).sum()
# index of outliers
np.where((res_voxels > 6.0*np.std(res_voxels)) == True)
# Subjects that are outliers
paths[np.where((res_voxels > 15.0*np.std(res_voxels)) == True)]
# Subjects that are outliers
set(paths[np.where((res_voxels > 12.0*np.std(res_voxels)) == True)])
# Subjects that are outliers
set(paths[np.where((res_voxels > 11.0*np.std(res_voxels)) == True)]) - set(paths[np.where((res_voxels > 12.0*np.std(res_voxels)) == True)])
set(paths[np.where((res_voxels > 10.0*np.std(res_voxels)) == True)]) - set(paths[np.where((res_voxels > 11.0*np.std(res_voxels)) == True)])
paths[np.where((res_voxels > 10.0*np.std(res_voxels)) == True)].shape
subject_ids_corrupted = []
for path in paths[np.where((res_voxels > 6.0*np.std(res_voxels)) == True)]:
sub_id_extracted = re.search('.+_subject_id_(\d+)', path).group(1)
subject_ids_corrupted.append(str(int(sub_id_extracted)))
subject_ids_corrupted
# Find the error:
import nibabel as nib
res_voxels = []
res_file = '/home1/varunk/results_again_again/registered_anat_qc/0050977_residual.nii.gz'
proc = subprocess.Popen(['fslstats', res_file, '-V'],
stdout=subprocess.PIPE)
voxels = int((proc.communicate()[0]).decode("utf-8").split(' ')[0])
res_voxels.append(voxels)
res_voxels
!fslmaths /home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg/_subject_id_0050977/skullStrip/sub-0050977_T1w_resample_brain.nii.gz -mul /home1/varunk/results_again_again/registered_anat_qc/mask_inverted.nii.gz 0050977_residual
paths = paths[0:3]
for path in paths:
base_dir = '/home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg'
anat_path = base_dir + path[1:]
print(anat_path)
# (stddata[0]).decode("utf-8").split(' ')[0]
mask = '/home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg/resample_mni/MNI152_T1_2mm_brain_resample_mask.nii.gz'
proc = subprocess.Popen(['fslmaths', mask, '-mul', '-1', '-add' ,'1', 'mask_inverted'],
stdout=subprocess.PIPE)
stdoutdata= proc.communicate()
print("The commandline is: {}".format(subprocess.list2cmdline(proc.args)))
cwd = os.getcwd()
mask_inverted_path = opj(cwd, 'mask_inverted.nii.gz')
# /home1/varunk/results_again_again/registered_anat_qc/mask_inverted.nii.gz
# paths
%%time
# paths = paths[0:3]
res_voxels = []
for path in paths:
base_dir = '/home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg'
anat_path = base_dir + path[1:]
# print(anat_path)
sub_id_extracted = re.search('.+_subject_id_(\d+)', anat_path).group(1)
out_file = 'sub_' + sub_id_extracted + '_residual'
proc = subprocess.Popen(['fslmaths', anat_path, '-mul', mask_inverted_path, out_file],
stdout=subprocess.PIPE)
stdoutdata= proc.communicate()
print("The command executed is: {}".format(subprocess.list2cmdline(proc.args)))
# print('Created the residual file for Subject', sub_id_extracted)
res_file = opj(os.getcwd(),out_file)
proc = subprocess.Popen(['fslstats', res_file, '-V'],
stdout=subprocess.PIPE)
voxels = int((proc.communicate()[0]).decode("utf-8").split(' ')[0])
res_voxels.append(voxels)
# print(stdoutdata)
# , stderrdata ,
# , stderr=subprocess.STDOUT
!fslmaths /home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg/_subject_id_0050002/skullStrip/sub-0050002_T1w_resample_brain.nii.gz -mul /home1/varunk/results_again_again/registered_anat_qc/mask_inverted.nii.gz sub_0050002_residual
res_voxels
np.mean(res_voxels),np.std(res_voxels)
#number_of_outliers =
(res_voxels > 6*np.std(res_voxels)).sum()
# index of outliers
np.where((res_voxels > 6*np.std(res_voxels)) == True)
# Subjects that are outliers
paths[np.where((res_voxels > 6*np.std(res_voxels)) == True)]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Metric
Step2: The metric that did not work.
|
15,225 | <ASSISTANT_TASK:>
Python Code:
from globus_sdk import AuthClient, TransferClient, AccessTokenAuthorizer, NativeAppAuthClient, TransferData
CLIENT_ID = '2f9482c4-67b3-4783-bac7-12b37d6f8966'
client = NativeAppAuthClient(CLIENT_ID)
client.oauth2_start_flow()
authorize_url = client.oauth2_get_authorize_url()
print('Please go to this URL and login: {0}'.format(authorize_url))
# this is to work on Python2 and Python3 -- you can just use raw_input() or
# input() for your specific version
get_input = getattr(__builtins__, 'raw_input', input)
auth_code = get_input(
'Please enter the code you get after login here: ').strip()
token_response = client.oauth2_exchange_code_for_tokens(auth_code)
AUTH_TOKEN = token_response.by_resource_server['auth.globus.org']['access_token']
TRANSFER_TOKEN = token_response.by_resource_server['transfer.api.globus.org']['access_token']
tc = TransferClient(AccessTokenAuthorizer(TRANSFER_TOKEN))
ac = AuthClient(authorizer=AccessTokenAuthorizer(AUTH_TOKEN))
# discover an Endpoint ID
search_str = "Globus Tutorial Endpoint"
endpoints = tc.endpoint_search(search_str)
print("==== Displaying endpoint matches for search: '{}' ===".format(search_str))
for ep in endpoints:
print("{} ({})".format(ep["display_name"] or ep["canonical_name"], ep["id"]))
import sys, random, uuid
from globus_sdk import AuthClient, TransferClient, AccessTokenAuthorizer, TransferData
host_id = 'ddb59aef-6d04-11e5-ba46-22000b92c6ec' # Endpoint for shared endpoint
source_path = '/share/godata/' # Directory to copy data from
email ='chard@uchicago.edu' # Email address to share with
tc = TransferClient(AccessTokenAuthorizer(TRANSFER_TOKEN))
ac = AuthClient(authorizer=AccessTokenAuthorizer(AUTH_TOKEN))
share_path = '/~/' + str(uuid.uuid4()) + '/'
r = tc.operation_mkdir(host_id, path=share_path)
print (r['message'])
shared_ep_data = {
'DATA_TYPE': 'shared_endpoint',
'host_endpoint': host_id,
'host_path': share_path,
'display_name': 'RDP shared endpoint',
'description': 'RDP shared endpoint'
}
r = tc.create_shared_endpoint(shared_ep_data)
share_id = r['id']
print(share_id)
tc.endpoint_autoactivate(share_id)
tdata = TransferData(tc, host_id, share_id, label='RDP copy data', sync_level='checksum')
tdata.add_item(source_path, '/', recursive=True)
r = tc.submit_transfer(tdata)
o = tc.task_wait(r['task_id'], timeout=1000, polling_interval=10)
print (r['task_id'])
for f in tc.operation_ls(share_id):
print (f['name'])
r = ac.get_identities(usernames=email)
user_id = r['identities'][0]['id']
rule_data = {
'DATA_TYPE': 'access',
'principal_type': 'identity', # Grantee is
'principal': user_id, # a user.
'path': '/', # Path is /
'permissions': 'r', # Read-only
'notify_email': email, # Email invite
'notify_message': # Invite msg
'Requested data is available.'
}
r = tc.add_endpoint_acl_rule(share_id, rule_data)
print (r['message'])
r = tc.delete_endpoint(share_id)
print (r['message'])
from globus_sdk import TransferClient, TransferData, AccessTokenAuthorizer
from globus_sdk import AuthClient
import sys, random, uuid
def rdp(host_id, # Endpoint for shared endpoint
source_path, # Directory to copy data from
email): # Email address to share with
# Instantiate transfer and auth clients
tc = TransferClient(AccessTokenAuthorizer(TRANSFER_TOKEN))
ac = AuthClient(authorizer=AccessTokenAuthorizer(AUTH_TOKEN))
tc.endpoint_autoactivate(host_id)
# (1) Create shared endpoint:
# (a) Create directory to be shared
share_path = '/~/' + str(uuid.uuid4()) + '/'
tc.operation_mkdir(host_id, path=share_path)
# (b) Create shared endpoint on directory
shared_ep_data = {
'DATA_TYPE': 'shared_endpoint',
'host_endpoint': host_id,
'host_path': share_path,
'display_name': 'RDP shared endpoint',
'description': 'RDP shared endpoint'
}
r = tc.create_shared_endpoint(shared_ep_data)
share_id = r['id']
# (2) Copy data into the shared endpoint
tc.endpoint_autoactivate(share_id)
tdata = TransferData(tc, host_id, share_id, label='RDP copy data', sync_level='checksum')
tdata.add_item(source_path, '/', recursive=True)
r = tc.submit_transfer(tdata)
tc.task_wait(r['task_id'], timeout=1000, polling_interval=10)
# (3) Enable access by user
r = ac.get_identities(usernames=email)
user_id = r['identities'][0]['id']
rule_data = {
'DATA_TYPE': 'access',
'principal_type': 'identity', # Grantee is
'principal': user_id, # a user.
'path': '/', # Path is /
'permissions': 'r', # Read-only
'notify_email': email, # Email invite
'notify_message': # Invite msg
'Requested data is available.'
}
tc.add_endpoint_acl_rule(share_id, rule_data)
# (4) Ultimately, delete the shared endpoint
#tc.delete_endpoint(share_id)
rdp('ddb59aef-6d04-11e5-ba46-22000b92c6ec', '/share/godata/' , 'chard@uchicago.edu')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the Globus SDK
Step2: The Research Data Portal function
Step3: We use the Globus SDK function operation_mkdir to create a directory (in our example call, a UUID) on the existing endpoint with identifier host_id.
Step4: Then we use the Globus SDK function create_shared_endpoint to create a shared endpoint for the new directory. At this point, the new shared endpoint exists and is associated with the new directory. However, only the creating user has access to this new shared endpoint at this point.
Step5: To provide access to the requested data we copy data to the shared endpoint. We use sample data contained on the Globus Tutorial Endpoint under path "/share/godata".
Step6: To confirm all data is in place for sharing we check the contents of the shared endpoint.
Step7: We now share the endpoint with the appropriate user. We first use the Globus SDK function get_identities to retrieve the user identifier associated with the supplied email address; this is the user for whom sharing is to be enabled. (If this user is not known to Globus, an identity is created.) We then use the function add_endpoint_acl_rule to add an access control rule to the new shared endpoint to grant the specified user readonly access to the endpoint. The various elements in the rule_data structure specify, among other things
Step8: The shared endpoint will typically be left operational for some period, after which it can be deleted. Note that deleting a shared endpoint does not delete the data that it contains. The portal admin may want to retain the data for other purposes. If not, we can use the Globus SDK function submit_delete to delete the folder.
Step9: Putting it all together
|
15,226 | <ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:1000]
encoded[:100]
len(vocab)
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape(n_seqs, -1)
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
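# Illustrative sanity check (not part of the original notebook): each batch from
# get_batches should be n_seqs x n_steps, with targets equal to the inputs shifted by one.
assert x.shape == (10, 50) and y.shape == (10, 50)
assert (y[:, :-1] == x[:, 1:]).all()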
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    # Keep probability placeholder for dropout layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell outputs
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
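# Minimal smoke test for build_lstm (illustrative; builds a throwaway graph, then resets it).
# The initial state should contain one LSTM state tuple per layer.
tf.reset_default_graph()
_kp = tf.placeholder(tf.float32, name='smoke_keep_prob')
_cell, _initial = build_lstm(128, 2, 32, _kp)
print(len(_initial))    # expect 2, one state per LSTM layer
tf.reset_default_graph()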
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        # Run each sequence step through the RNN with tf.nn.dynamic_rnn
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state
        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
tf.train.get_checkpoint_state('checkpoints')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
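# Added illustration (not in the original notebook): with this toy distribution
# only the two most likely indices, 1 and 2, can ever be returned once the
# remaining probabilities are zeroed out by pick_top_n.
toy_preds = np.array([[0.1, 0.5, 0.2, 0.15, 0.05]])
assert pick_top_n(toy_preds, vocab_size=5, top_n=2) in (1, 2)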
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
Step8: LSTM Cell
Step9: RNN Output
Step10: Training loss
Step11: Optimizer
Step12: Build the network
Step13: Hyperparameters
Step14: Time for training
Step15: Saved checkpoints
Step16: Sampling
Step17: Here, pass in the path to a checkpoint and sample from the network.
|
15,227 | <ASSISTANT_TASK:>
Python Code:
%load_ext ipython_unittest
def add(x, y):
return x + y
%%unittest
assert add(1, 1) == 2
assert add(1, 2) == 3
assert add(2, 2) == 4
%load_ext ipython_unittest
def fizzbuzz():
pass
%%unittest -p 1
assert fizzbuzz() == 0
import unittest
import sys
class JupyterTest(unittest.TestCase):
def test_add_1_1_returns_2(self):
self.assertEqual(add(1, 1), 2)
def test_add_1_2_returns_3(self):
self.assertEqual(add(1, 2), 3)
def test_add_2_2_returns_4(self):
self.assertEqual(add(2, 2), 4)
suite = unittest.TestLoader().loadTestsFromTestCase(JupyterTest)
unittest.TextTestRunner(verbosity=1, stream=sys.stdout).run(suite)
!pip install astunparse
%%unittest -u
"add 1 + 1 returns 2"
assert add(1, 1) == 2
"add 1 + 2 returns 3"
assert add(1, 2) == 3
"add 2 + 2 returns 4"
assert add(2, 2) == 4
%%write javascript test.js
var assert = require('assert');
describe('Array', function() {
describe('#indexOf()', function() {
it('should return -1 when the value is not present', function() {
assert.equal(-1, [1,2,3].indexOf(4));
});
});
});
%%external --previous 1
mocha test.js
%%unittest_main
class JupyterTest(unittest.TestCase):
def test_add_1_1_returns_2(self):
self.assertEqual(add(1, 1), 2)
def test_add_1_2_returns_3(self):
self.assertEqual(add(1, 2), 3)
def test_add_2_2_returns_4(self):
self.assertEqual(add(2, 2), 4)
%%unittest_testcase
def test_add_1_1_returns_2(self):
self.assertEqual(add(1, 1), 2)
def test_add_1_2_returns_3(self):
self.assertEqual(add(1, 2), 3)
def test_add_2_2_returns_4(self):
self.assertEqual(add(2, 2), 4)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Other magics
|
15,228 | <ASSISTANT_TASK:>
Python Code:
import SimpleITK as sitk
import registration_utilities as ru
import registration_callbacks as rc
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
# utility method that either downloads data from the Girder repository or
# if already downloaded returns the file name for reading from disk (cached data)
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
%run popi_utilities_setup.py
images = []
masks = []
points = []
for i in range(0, 10):
image_file_name = f"POPI/meta/{i}0-P.mhd"
mask_file_name = f"POPI/masks/{i}0-air-body-lungs.mhd"
points_file_name = f"POPI/landmarks/{i}0-Landmarks.pts"
images.append(
sitk.ReadImage(fdata(image_file_name), sitk.sitkFloat32)
) # read and cast to format required for registration
masks.append(sitk.ReadImage(fdata(mask_file_name)))
points.append(read_POPI_points(fdata(points_file_name)))
interact(
display_coronal_with_overlay,
temporal_slice=(0, len(images) - 1),
coronal_slice=(0, images[0].GetSize()[1] - 1),
images=fixed(images),
masks=fixed(masks),
label=fixed(lung_label),
window_min=fixed(-1024),
window_max=fixed(976),
);
label_shape_statistics_filter = sitk.LabelShapeStatisticsImageFilter()
for i, mask in enumerate(masks):
label_shape_statistics_filter.Execute(mask)
print(
f"Lung volume in image {i} is {0.000001*label_shape_statistics_filter.GetPhysicalSize(lung_label)} liters."
)
def bspline_intra_modal_registration(
fixed_image,
moving_image,
fixed_image_mask=None,
fixed_points=None,
moving_points=None,
):
registration_method = sitk.ImageRegistrationMethod()
# Determine the number of BSpline control points using the physical spacing we want for the control grid.
grid_physical_spacing = [50.0, 50.0, 50.0] # A control point every 50mm
image_physical_size = [
size * spacing
for size, spacing in zip(fixed_image.GetSize(), fixed_image.GetSpacing())
]
mesh_size = [
int(image_size / grid_spacing + 0.5)
for image_size, grid_spacing in zip(image_physical_size, grid_physical_spacing)
]
initial_transform = sitk.BSplineTransformInitializer(
image1=fixed_image, transformDomainMeshSize=mesh_size, order=3
)
registration_method.SetInitialTransform(initial_transform)
registration_method.SetMetricAsMeanSquares()
# Settings for metric sampling, usage of a mask is optional. When given a mask the sample points will be
# generated inside that region. Also, this implicitly speeds things up as the mask is smaller than the
# whole image.
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
if fixed_image_mask:
registration_method.SetMetricFixedMask(fixed_image_mask)
# Multi-resolution framework.
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsLBFGSB(
gradientConvergenceTolerance=1e-5, numberOfIterations=100
)
# If corresponding points in the fixed and moving image are given then we display the similarity metric
# and the TRE during the registration.
if fixed_points and moving_points:
registration_method.AddCommand(
sitk.sitkStartEvent, rc.metric_and_reference_start_plot
)
registration_method.AddCommand(
sitk.sitkEndEvent, rc.metric_and_reference_end_plot
)
registration_method.AddCommand(
sitk.sitkIterationEvent,
lambda: rc.metric_and_reference_plot_values(
registration_method, fixed_points, moving_points
),
)
return registration_method.Execute(fixed_image, moving_image)
#%%timeit -r1 -n1
# Select the fixed and moving images, valid entries are in [0,9].
fixed_image_index = 0
moving_image_index = 7
tx = bspline_intra_modal_registration(
fixed_image=images[fixed_image_index],
moving_image=images[moving_image_index],
fixed_image_mask=(masks[fixed_image_index] == lung_label),
fixed_points=points[fixed_image_index],
moving_points=points[moving_image_index],
)
(
initial_errors_mean,
initial_errors_std,
_,
initial_errors_max,
initial_errors,
) = ru.registration_errors(
sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]
)
(
final_errors_mean,
final_errors_std,
_,
final_errors_max,
final_errors,
) = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])
plt.hist(initial_errors, bins=20, alpha=0.5, label="before registration", color="blue")
plt.hist(final_errors, bins=20, alpha=0.5, label="after registration", color="green")
plt.legend()
plt.title("TRE histogram")
print(
f"Initial alignment errors in millimeters, mean(std): {initial_errors_mean:.2f}({initial_errors_std:.2f}), max: {initial_errors_max:.2f}"
)
print(
f"Final alignment errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}"
)
# Transfer the segmentation via the estimated transformation. Use Nearest Neighbor interpolation to retain the labels.
transformed_labels = sitk.Resample(
masks[moving_image_index],
images[fixed_image_index],
tx,
sitk.sitkNearestNeighbor,
0.0,
masks[moving_image_index].GetPixelID(),
)
segmentations_before_and_after = [masks[moving_image_index], transformed_labels]
interact(
display_coronal_with_label_maps_overlay,
coronal_slice=(0, images[0].GetSize()[1] - 1),
mask_index=(0, len(segmentations_before_and_after) - 1),
image=fixed(images[fixed_image_index]),
masks=fixed(segmentations_before_and_after),
label=fixed(lung_label),
window_min=fixed(-1024),
window_max=fixed(976),
)
# Compute the Dice coefficient and Hausdorff distance between the segmentations before, and after registration.
ground_truth = masks[fixed_image_index] == lung_label
before_registration = masks[moving_image_index] == lung_label
after_registration = transformed_labels == lung_label
label_overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter()
label_overlap_measures_filter.Execute(ground_truth, before_registration)
print(
f"Dice coefficient before registration: {label_overlap_measures_filter.GetDiceCoefficient():.2f}"
)
label_overlap_measures_filter.Execute(ground_truth, after_registration)
print(
f"Dice coefficient after registration: {label_overlap_measures_filter.GetDiceCoefficient():.2f}"
)
hausdorff_distance_image_filter = sitk.HausdorffDistanceImageFilter()
hausdorff_distance_image_filter.Execute(ground_truth, before_registration)
print(
f"Hausdorff distance before registration: {hausdorff_distance_image_filter.GetHausdorffDistance():.2f}"
)
hausdorff_distance_image_filter.Execute(ground_truth, after_registration)
print(
f"Hausdorff distance after registration: {hausdorff_distance_image_filter.GetHausdorffDistance():.2f}"
)
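# Optional addition (not in the original notebook): persist the estimated
# transform and the transferred labels for later reuse; the file names below
# are arbitrary choices.
sitk.WriteTransform(tx, "bspline_transform.tfm")
sitk.WriteImage(transformed_labels, "transferred_labels.mha")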
def bspline_intra_modal_registration2(
fixed_image,
moving_image,
fixed_image_mask=None,
fixed_points=None,
moving_points=None,
):
registration_method = sitk.ImageRegistrationMethod()
# Determine the number of BSpline control points using the physical spacing we
# want for the finest resolution control grid.
grid_physical_spacing = [50.0, 50.0, 50.0] # A control point every 50mm
image_physical_size = [
size * spacing
for size, spacing in zip(fixed_image.GetSize(), fixed_image.GetSpacing())
]
mesh_size = [
int(image_size / grid_spacing + 0.5)
for image_size, grid_spacing in zip(image_physical_size, grid_physical_spacing)
]
# The starting mesh size will be 1/4 of the original, it will be refined by
# the multi-resolution framework.
mesh_size = [int(sz / 4 + 0.5) for sz in mesh_size]
initial_transform = sitk.BSplineTransformInitializer(
image1=fixed_image, transformDomainMeshSize=mesh_size, order=3
)
# Instead of the standard SetInitialTransform we use the BSpline specific method which also
# accepts the scaleFactors parameter to refine the BSpline mesh. In this case we start with
# the given mesh_size at the highest pyramid level then we double it in the next lower level and
# in the full resolution image we use a mesh that is four times the original size.
registration_method.SetInitialTransformAsBSpline(
initial_transform, inPlace=True, scaleFactors=[1, 2, 4]
)
registration_method.SetMetricAsMeanSquares()
# Settings for metric sampling, usage of a mask is optional. When given a mask the sample points will be
# generated inside that region. Also, this implicitly speeds things up as the mask is smaller than the
# whole image.
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
if fixed_image_mask:
registration_method.SetMetricFixedMask(fixed_image_mask)
# Multi-resolution framework.
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
registration_method.SetInterpolator(sitk.sitkLinear)
# Use the LBFGS2 instead of LBFGS. The latter cannot adapt to the changing control grid resolution.
registration_method.SetOptimizerAsLBFGS2(
solutionAccuracy=1e-2, numberOfIterations=100, deltaConvergenceTolerance=0.01
)
# If corresponding points in the fixed and moving image are given then we display the similarity metric
# and the TRE during the registration.
if fixed_points and moving_points:
registration_method.AddCommand(
sitk.sitkStartEvent, rc.metric_and_reference_start_plot
)
registration_method.AddCommand(
sitk.sitkEndEvent, rc.metric_and_reference_end_plot
)
registration_method.AddCommand(
sitk.sitkIterationEvent,
lambda: rc.metric_and_reference_plot_values(
registration_method, fixed_points, moving_points
),
)
return registration_method.Execute(fixed_image, moving_image)
#%%timeit -r1 -n1
# Select the fixed and moving images, valid entries are in [0,9].
fixed_image_index = 0
moving_image_index = 7
tx = bspline_intra_modal_registration2(
fixed_image=images[fixed_image_index],
moving_image=images[moving_image_index],
fixed_image_mask=(masks[fixed_image_index] == lung_label),
fixed_points=points[fixed_image_index],
moving_points=points[moving_image_index],
)
(
initial_errors_mean,
initial_errors_std,
_,
initial_errors_max,
initial_errors,
) = ru.registration_errors(
sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]
)
(
final_errors_mean,
final_errors_std,
_,
final_errors_max,
final_errors,
) = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])
plt.hist(initial_errors, bins=20, alpha=0.5, label="before registration", color="blue")
plt.hist(final_errors, bins=20, alpha=0.5, label="after registration", color="green")
plt.legend()
plt.title("TRE histogram")
print(
f"Initial alignment errors in millimeters, mean(std): {initial_errors_mean:.2f}({initial_errors_std:.2f}), max: {initial_errors_max:.2f}"
)
print(
f"Final alignment errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}"
)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Utilities
Step2: Loading Data
Step3: Getting to know your data
Step4: Free Form Deformation
Step5: Perform Registration
Step6: Another option for evaluating the registration is to use segmentation. In this case, we transfer the segmentation from one image to the other and compare the overlaps, both visually, and quantitatively.
Step7: Multi-resolution control point grid
|
15,229 | <ASSISTANT_TASK:>
Python Code:
mkdir -p hello_world
from asdf import AsdfFile
# Make the tree structure, and create a AsdfFile from it.
tree = {'hello': 'world'}
ff = AsdfFile(tree)
ff.write_to("hello_world/test.asdf")
# You can also make the AsdfFile first, and modify its tree directly:
ff = AsdfFile()
ff.tree['hello'] = 'world'
ff.write_to("hello_world/test_hello_world.asdf")
cat hello_world/test.asdf
cat hello_world/test_hello_world.asdf
mkdir -p array_storage
from asdf import AsdfFile
import numpy as np
tree = {'my_array': np.random.rand(8, 8)}
ff = AsdfFile(tree)
ff.write_to("array_storage/test_arrays.asdf")
cat array_storage/test_arrays.asdf
from asdf import ValidationError
from asdf import AsdfFile
tree = {'data': 'Not an array'}
try:
AsdfFile(tree)
except:
raise ValidationError('data needs an array!')
mkdir -p data_sharing
from asdf import AsdfFile
import numpy as np
my_array = np.random.rand(8, 8)
subset = my_array[2:4,3:6]
tree = {
'my_array': my_array,
'subset': subset
}
ff = AsdfFile(tree)
ff.write_to("data_sharing/test_overlap.asdf")
cat data_sharing/test_overlap.asdf
mkdir -p streaming_data
from asdf import AsdfFile, Stream
import numpy as np
tree = {
# Each "row" of data will have 128 entries.
'my_stream': Stream([128], np.float64)
}
ff = AsdfFile(tree)
with open('streaming_data/stream_test.asdf', 'wb') as fd:
ff.write_to(fd)
# Write 100 rows of data, one row at a time. ``write``
# expects the raw binary bytes, not an array, so we use
# ``tostring()``.
for i in range(10):
fd.write(np.array([i] * 128, np.float64).tostring())
cat streaming_data/stream_test.asdf
mkdir -p exploded_data
from asdf import AsdfFile
import numpy as np
my_array = np.random.rand(3, 4)
tree = {'my_array': my_array}
my_big_array = np.random.rand(8, 8)
tree['my_big_array'] = my_big_array
ff = AsdfFile(tree)
ff.set_array_storage(my_array, 'inline')
ff.set_array_storage(my_big_array, 'external')
ff.write_to("exploded_data/test_exploded.asdf")
# Or for every block:
# ff.write_to("test.asdf", all_array_storage='external')
ls exploded_data/
cat exploded_data/test_exploded.asdf
cat exploded_data/test_exploded0000.asdf
mkdir -p provenance
from asdf import AsdfFile
import numpy as np
tree = {
'some_random_data': np.random.rand(5, 5)
}
ff = AsdfFile(tree)
ff.add_history_entry(
u"Initial random numbers",
{u'name': u'asdf examples',
u'author': u'John Q. Public',
u'homepage': u'http://github.com/spacetelescope/asdf',
u'version': u'0.1',
u'spase_dict': {u'resource_id': 5}})
ff.write_to('provenance/provenance.asdf')
cat provenance/provenance.asdf
mkdir -p compression
from asdf import AsdfFile
import numpy as np
x = np.linspace(-20, 20, 30)
y = np.linspace(-30, 30, 50)
xx,yy = np.meshgrid(x,y)
tree = dict(variables = dict(x = xx,
y = yy
)
)
ff = AsdfFile(tree)
ff.write_to("compression/uncompressed_data.asdf", all_array_compression=None)
ff.write_to("compression/compressed_data.asdf", all_array_compression='bzp2')
import os
print('uncompressed:', os.path.getsize("compression/uncompressed_data.asdf"), 'bytes')
print('compressed (bz2):', os.path.getsize("compression/compressed_data.asdf"), 'bytes')
mkdir -p time
from asdf import AsdfFile
from astropy.time import Time
astrot = Time('2016-10-3')
from asdf.tags.time import TimeType
tree = {'my_time': astrot}
ff = AsdfFile(tree)
ff.write_to("time/test_time.asdf")
ff.close()
cat time/test_time.asdf
sample_time = AsdfFile.open('time/test_time.asdf')
my_time = sample_time.tree['my_time']
type(my_time) == type(astrot)
mkdir -p units
from astropy import units as u
rho_unit = u.kg*u.cm**-3
density = np.linspace(0, 11, 5)*rho_unit
density.unit
from asdf import AsdfFile
tree = dict(variables=dict(density = dict(data=density.value, unit = density.unit)))
ff = AsdfFile(tree)
ff.set_array_storage(density, 'inline')
ff.write_to("units/units_test.asdf", all_array_compression=None)
ff.close()
cat units/units_test.asdf
units_file = AsdfFile.open('units/units_test.asdf')
rho = units_file.tree['variables']['density']
rho
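# Added sketch (not in the original notebook): re-attach the stored value/unit
# pair as an astropy Quantity. This assumes the 'unit' entry round-trips as an
# astropy unit object or its string name, either of which u.Unit accepts.
rho_quantity = rho['data'] * u.Unit(rho['unit'])
rho_quantity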
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Storing Arrays
Step2: Schema validation
Step3: Data Sharing
Step4: Streaming Data
Step5: Explosive Storage
Step6: Data Provenance
Step7: Compression
Step8: Custom types
Step9: verify that time matches astropy type
Step10: Units
Step11: verify variables load - comes as dictionary
|
15,230 | <ASSISTANT_TASK:>
Python Code:
x = np.array([1, 2, 3])
x.dtype
x = np.array([1, 2, 3])
x.dtype # is this a difference between Python 2.7 and 3?
np.exp(-np.inf)
-np.inf
np.exp(1)
np.array([1, 0]) / np.array([0, 0])
np.array([1, 0]) / np.array([0, 0])
x = np.array([1, 2, 3])
x
a = np.zeros(5)
a
b = np.zeros((5, 2), dtype="f8")
b
c = np.zeros(5, dtype='S4')
c
c = np.zeros(5, dtype="S4")
c[0] = 'abcd'
c[1] = 'ABCDE'
c
d = np.ones((2,3,2,4), dtype='i8')
d
e = range(10)
print(e)
f=np.ones_like(e, dtype="f")
f
g = np.empty((3,6))
g
np.arange(10) # 0 . . . n-1
np.arange(3, 21, 2) # start, end (exclusive), step
np.linspace(0, 100, 5) # start, end, num-points
np.logspace(0, 4, 4, endpoint=False)
np.random.seed(0)
np.random.rand(4)
np.random.randn(3,5)
a = np.arange(12)
a
b = a.reshape(3, 4)
b
a.reshape(2,2,-1)
a.reshape(2,-1,2)
a.flatten()
x = np.arange(5)
x
y = x.reshape(5, 1)
y
z = x[:, np.newaxis]
z
a1 = np.ones((2, 3))
a1
a2 = np.zeros((2, 2))
a2
np.hstack([a1, a2])
b1 = np.ones((2, 3))
b1
b2 = np.zeros((3, 3))
b2
np.vstack([b1, b2])
c1 = np.ones((2,3))
c1
c2 = np.zeros((2,3))
c2
np.dstack([c1, c2])
np.stack([c1, c2])
np.stack([c1, c2], axis=0)
np.stack([c1, c2], axis=1)
np.stack([c1, c2], axis=2)
np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])]
a = np.array([0, 1, 2])
np.tile(a, 2)
np.tile(a, [2,3])
b = np.array([2,3])
np.tile(a,b)
np.tile(a, (3, 2))
x = np.arange(3)
x
y = np.arange(5)
y
X, Y = np.meshgrid(x, y)
X
Y
[zip(x, y)]
[zip(X, Y)]
for x, y in zip(X, Y):
print (x, y)
for x, y in zip(X,Y):
print (x, y)
[zip(x, y) for x, y in zip(X, Y)]
X
Y
plt.scatter(X, Y, linewidths=10);
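# Added illustration (not in the original tutorial): meshgrid can also build
# "sparse" grids that broadcast to the same full grid while storing far fewer
# elements.
Xs, Ys = np.meshgrid(np.arange(3), np.arange(5), sparse=True)
(Xs + Ys).shape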
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When using floating point numbers, you can use np.inf to represent infinity and np.nan to represent a number that cannot be defined.
Step2: The irrational number e is also known as Euler’s number. It is approximately 2.718281, and is the base of the natural logarithm
Step3: Creating arrays
Step4: Earlier we used the array command to create an ndarray object by converting a Python list. Usually, however, an ndarray object is created directly with commands such as the following, without starting from such a base object.
Step5: If the dtype argument is given explicitly, an array with elements of that data type is created.
Step6: String arrays are also possible, but every element must have the same string size. If you assign a longer string, it may be truncated.
Step7: To create an array initialized with ones instead of zeros, use the ones command.
Step8: If you want to create an array with the same shape as a given array or list, without specifying the size as a tuple, use the ones_like or zeros_like commands.
Step9: As an array gets larger, initializing it also takes time. To save this time you can use the empty command, which only allocates the array without initializing it. There is no way to know what values an array created with empty will contain.
Step10: The arange command can be seen as the NumPy version of the range command. It generates a sequence of numbers over the corresponding range.
Step11: The linspace and logspace commands divide a linear or logarithmic interval into the specified number of points.
Step12: To generate random numbers, use the rand or randn commands from the random subpackage. The rand command generates numbers following a uniform distribution, and the randn command generates numbers following a Gaussian normal distribution. To set the seed value, use the seed command.
Step13: Reshaping arrays
Step14: Because the total number of elements is fixed, one of the entries in the shape tuple passed to reshape can be replaced with -1. When -1 is given, that dimension is computed from the other values.
Step15: To flatten a multi-dimensional array into one dimension regardless of its shape, use the flatten command or method.
Step16: A one-dimensional array of length 5 and a two-dimensional array of shape (5, 1) hold the same data but are strictly different objects.
Step17: When only the number of dimensions of the same array needs to be increased by one like this, the newaxis command is also used.
Step18: Concatenating arrays
Step19: The vstack command joins two or more arrays with the same number of columns vertically, creating an array with more rows. The arrays to be joined must likewise be placed in a single list.
Step20: The dstack command joins arrays along a third axis, i.e. in the depth direction rather than by rows or columns.
Step21: The stack command joins arrays along a new dimension (axis); naturally, all arrays to be joined must have the same shape.
Step22: The r_ method is similar to the hstack command, except that although it is a method it uses square brackets ([]) like indexing instead of parentheses (()).
Step23: The tile command joins repeated copies of the same array.
Step24: Creating grids
|
15,231 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import display, Image
## here is the image from the news story
infograficoG1 = Image('reservatorios1403.jpg')
display(infograficoG1)
import urllib.request
req = urllib.request.urlopen("https://sabesp-api.herokuapp.com/").read().decode()
import json
data = json.loads(req)
import datetime as dt
print('data made available by sabesp today, %s \n-----' % dt.date.today())
for x in data:
print (x['name'])
for i in range(len(x['data'])):
item = x['data'][i]
print ('item %d) %35s = %s' % (i, item['key'], item['value']))
#print ( [item['value'] for item in x['data'] ])
print('-----')
## with this I can use a list comprehension to grab the data I care about
[ (x['name'], x['data'][0]['value']) for x in data ]
import datetime as dt
# dates used in the G1 chart
today = dt.date(2015,3,14)
yr = dt.timedelta(days=365)
last_year = today - yr
today=today.isoformat()
last_year=last_year.isoformat()
def getData(date):
    """Receives a date object or a string with the date in
    YYYY-MM-DD format and returns a pandas 'Series'
    with the sabesp reservoir levels."""
# def parsePercent(s):
    #     receives a string in the format '\d*,\d* %' and returns the equivalent float
    #     return float(s.replace(",",".").replace("%",""))
    # this can also be done with a lambda, hehe
fixPercent = lambda s: float(s.replace(",",".").replace("%",""))
import datetime
if type(date) == datetime.date:
date = date.isoformat()
    ## request
import urllib.request
req = urllib.request.urlopen("https://sabesp-api.herokuapp.com/" + date).read().decode()
    ## convert the json into a dictionary
import json
data = json.loads(req)
    ## series
dados = [ fixPercent(x['data'][0]['value']) for x in data ]
sistemas = [ x['name'] for x in data ]
import pandas as pd
return pd.Series(dados, index=sistemas, name=date)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns ## just to give matplotlib seaborn's nicer default style ;)
sns.set_context("talk")
#pd.options.display.mpl_style = 'default'
df = pd.DataFrame([getData(today), getData(last_year)]) #, index=[today, last_year])
df.T.plot(kind='bar', rot=0, figsize=(8,4))
plt.show()
df
datas = [last_year,
         '2014-05-15', # just before the dead volume ("volume morto") came online
         '2014-05-16', # debut of the "first technical reserve", a.k.a. dead volume
         '2014-07-12',
         '2014-10-23',
         '2014-10-24', # "second technical reserve", or "DEAD VOLUME 2: ELECTRIC BOOGALOO"
         '2015-01-01', # happy new year?
today]
import numpy as np
df = pd.DataFrame(pd.concat(map(getData, datas), axis=1))
df = df.T
df
def plotSideBySide(dfTupl, cm=['Spectral', 'coolwarm']):
fig, axes = plt.subplots(1,2, figsize=(17,5))
for i, ax in enumerate(axes):
dfTupl[i].ix[:].T.plot(
kind='bar', ax=ax,
rot=0, colormap=cm[i])
#ax[i].
for j in range(len(dfTupl[i].columns)):
itens = dfTupl[i].ix[:,j]
y = 0
if itens.max() > 0:
y = itens.max()
ax.text(j, y +0.5,
'$\Delta$\n{:0.1f}%'.format(itens[1] - itens[0]),
ha='center', va='bottom',
fontsize=14, color='k')
plt.show()
#%psource plotaReservatecnica
dados = df.ix[['2014-05-15','2014-05-16']], df.ix[['2014-10-23','2014-10-24']]
plotSideBySide(dados)
def fixCantareira(p, data):
    """Corrects the percentage reported by sabesp."""
def str2date(data, format='%Y-%m-%d'):
        """Converts a string containing a date and returns a date object."""
import datetime as dt
return dt.datetime.strptime(data,format)
vm1day = str2date('16/05/2014', format='%d/%m/%Y')
vm2day = str2date('24/10/2014', format='%d/%m/%Y')
vm1 = 182.5
vm2 = 105.4
def percReal(perc,volumeMorto=0):
a = perc/100
volMax = 982.07
volAtual = volMax*a -volumeMorto
b = 100*volAtual/volMax
b = np.round(b,1)
return b
if str2date(data) < vm1day:
print(data, p, end=' ')
perc = percReal(p)
print('===>', perc)
return perc
elif str2date(data) < vm2day:
        print('first technical reserve in use', data, p, end=' ')
perc = percReal(p, volumeMorto=vm1)
print('===>', perc)
return perc
else:
        print('second technical reserve in use', data, p, end=' ')
perc = percReal(p, volumeMorto=vm1+vm2)
print('===>', perc)
return perc
dFixed = df.copy()
dFixed.Cantareira = ([fixCantareira(p, dia) for p, dia in zip(df.Cantareira, df.index)])
dados = dFixed.ix[['2014-05-15','2014-05-16']], dFixed.ix[['2014-10-23','2014-10-24']]
plotSideBySide(dados)
dias = ['2014-03-14','2015-03-14']
dados = df.ix[dias,:], dFixed.ix[dias,:]
plotSideBySide(dados,cm=[None,None])
dFixed.ix[dias]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Sabesp makes the data available for consultation at this address, but I have no idea how to fetch the data with python...
Step4: OK. All good. It matches the charts shown by G1, it is just being displayed in a different way.
Step7: The Cantareira system has a total capacity of almost 1 trillion liters, according to the G1 article.
Step8: AAAAAAAAH, NOW we're talking! Fixed. Now let's compare the chart built with the data used by G1 against the one with the corrected data
Step9: G1 was off by 30%. A big, ugly miss.
|
15,232 | <ASSISTANT_TASK:>
Python Code:
# import load_iris function from datasets module
from sklearn.datasets import load_iris
# save "bunch" object containing iris dataset and its attributes
iris = load_iris()
# store feature matrix in "X"
X = iris.data
# store response vector in "y"
y = iris.target
# print the shapes of X and y
print(X.shape)
print(y.shape)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
print(knn)
knn.fit(X, y)
print(knn.predict([[3, 5, 4, 2]]))
X_new = [[3, 5, 4, 2], [5, 4, 3, 2]]
knn.predict(X_new)
# instantiate the model (using the value K=5)
knn = KNeighborsClassifier(n_neighbors=5)
# fit the model with data
knn.fit(X, y)
# predict the response for new observations
knn.predict(X_new)
# import the class
from sklearn.linear_model import LogisticRegression
# instantiate the model (using the default parameters)
logreg = LogisticRegression()
# fit the model with data
logreg.fit(X, y)
# predict the response for new observations
logreg.predict(X_new)
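# Added note (not in the original notebook): LogisticRegression also exposes
# class probabilities, which makes its predictions easier to compare with KNN.
logreg.predict_proba(X_new)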
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: scikit-learn 4-step modeling pattern
Step 1
Step2: Step 2
Step3: Name of the object does not matter
Step4: Step 3
Step5: Step 4
Step6: Returns a NumPy array
Step7: Using a different value for K
Step8: Using a different classification model
|
15,233 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tf-agents
import abc
import numpy as np
import tensorflow as tf
from tf_agents.agents import tf_agent
from tf_agents.drivers import driver
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.policies import tf_policy
from tf_agents.specs import array_spec
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
from tf_agents.trajectories import trajectory
from tf_agents.trajectories import policy_step
nest = tf.nest
class BanditPyEnvironment(py_environment.PyEnvironment):
def __init__(self, observation_spec, action_spec):
self._observation_spec = observation_spec
self._action_spec = action_spec
super(BanditPyEnvironment, self).__init__()
# Helper functions.
def action_spec(self):
return self._action_spec
def observation_spec(self):
return self._observation_spec
def _empty_observation(self):
return tf.nest.map_structure(lambda x: np.zeros(x.shape, x.dtype),
self.observation_spec())
# These two functions below should not be overridden by subclasses.
def _reset(self):
        """Returns a time step containing an observation."""
return ts.restart(self._observe(), batch_size=self.batch_size)
def _step(self, action):
        """Returns a time step containing the reward for the action taken."""
reward = self._apply_action(action)
return ts.termination(self._observe(), reward)
# These two functions below are to be implemented in subclasses.
@abc.abstractmethod
def _observe(self):
        """Returns an observation."""
@abc.abstractmethod
def _apply_action(self, action):
        """Applies `action` to the Environment and returns the corresponding reward."""
class SimplePyEnvironment(BanditPyEnvironment):
def __init__(self):
action_spec = array_spec.BoundedArraySpec(
shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')
observation_spec = array_spec.BoundedArraySpec(
shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')
super(SimplePyEnvironment, self).__init__(observation_spec, action_spec)
def _observe(self):
self._observation = np.random.randint(-2, 3, (1,), dtype='int32')
return self._observation
def _apply_action(self, action):
return action * self._observation
environment = SimplePyEnvironment()
observation = environment.reset().observation
print("observation: %d" % observation)
action = 2 #@param
print("action: %d" % action)
reward = environment.step(action).reward
print("reward: %f" % reward)
tf_environment = tf_py_environment.TFPyEnvironment(environment)
class SignPolicy(tf_policy.TFPolicy):
def __init__(self):
observation_spec = tensor_spec.BoundedTensorSpec(
shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
shape=(), dtype=tf.int32, minimum=0, maximum=2)
super(SignPolicy, self).__init__(time_step_spec=time_step_spec,
action_spec=action_spec)
def _distribution(self, time_step):
pass
def _variables(self):
return ()
def _action(self, time_step, policy_state, seed):
observation_sign = tf.cast(tf.sign(time_step.observation[0]), dtype=tf.int32)
action = observation_sign + 1
return policy_step.PolicyStep(action, policy_state)
sign_policy = SignPolicy()
current_time_step = tf_environment.reset()
print('Observation:')
print (current_time_step.observation)
action = sign_policy.action(current_time_step).action
print('Action:')
print (action)
reward = tf_environment.step(action).reward
print('Reward:')
print(reward)
step = tf_environment.reset()
action = 1
next_step = tf_environment.step(action)
reward = next_step.reward
next_observation = next_step.observation
print("Reward: ")
print(reward)
print("Next observation:")
print(next_observation)
class TwoWayPyEnvironment(BanditPyEnvironment):
def __init__(self):
action_spec = array_spec.BoundedArraySpec(
shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')
observation_spec = array_spec.BoundedArraySpec(
shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')
# Flipping the sign with probability 1/2.
self._reward_sign = 2 * np.random.randint(2) - 1
print("reward sign:")
print(self._reward_sign)
super(TwoWayPyEnvironment, self).__init__(observation_spec, action_spec)
def _observe(self):
self._observation = np.random.randint(-2, 3, (1,), dtype='int32')
return self._observation
def _apply_action(self, action):
return self._reward_sign * action * self._observation[0]
two_way_tf_environment = tf_py_environment.TFPyEnvironment(TwoWayPyEnvironment())
class TwoWaySignPolicy(tf_policy.TFPolicy):
def __init__(self, situation):
observation_spec = tensor_spec.BoundedTensorSpec(
shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)
action_spec = tensor_spec.BoundedTensorSpec(
shape=(), dtype=tf.int32, minimum=0, maximum=2)
time_step_spec = ts.time_step_spec(observation_spec)
self._situation = situation
super(TwoWaySignPolicy, self).__init__(time_step_spec=time_step_spec,
action_spec=action_spec)
def _distribution(self, time_step):
pass
def _variables(self):
return [self._situation]
def _action(self, time_step, policy_state, seed):
sign = tf.cast(tf.sign(time_step.observation[0, 0]), dtype=tf.int32)
def case_unknown_fn():
# Choose 1 so that we get information on the sign.
return tf.constant(1, shape=(1,))
# Choose 0 or 2, depending on the situation and the sign of the observation.
def case_normal_fn():
return tf.constant(sign + 1, shape=(1,))
def case_flipped_fn():
return tf.constant(1 - sign, shape=(1,))
cases = [(tf.equal(self._situation, 0), case_unknown_fn),
(tf.equal(self._situation, 1), case_normal_fn),
(tf.equal(self._situation, 2), case_flipped_fn)]
action = tf.case(cases, exclusive=True)
return policy_step.PolicyStep(action, policy_state)
class SignAgent(tf_agent.TFAgent):
def __init__(self):
self._situation = tf.Variable(0, dtype=tf.int32)
policy = TwoWaySignPolicy(self._situation)
time_step_spec = policy.time_step_spec
action_spec = policy.action_spec
super(SignAgent, self).__init__(time_step_spec=time_step_spec,
action_spec=action_spec,
policy=policy,
collect_policy=policy,
train_sequence_length=None)
def _initialize(self):
return tf.compat.v1.variables_initializer(self.variables)
def _train(self, experience, weights=None):
observation = experience.observation
action = experience.action
reward = experience.reward
# We only need to change the value of the situation variable if it is
# unknown (0) right now, and we can infer the situation only if the
# observation is not 0.
needs_action = tf.logical_and(tf.equal(self._situation, 0),
tf.not_equal(reward, 0))
def new_situation_fn():
            """This returns either 1 or 2, depending on the signs."""
return (3 - tf.sign(tf.cast(observation[0, 0, 0], dtype=tf.int32) *
tf.cast(action[0, 0], dtype=tf.int32) *
tf.cast(reward[0, 0], dtype=tf.int32))) / 2
new_situation = tf.cond(needs_action,
new_situation_fn,
lambda: self._situation)
new_situation = tf.cast(new_situation, tf.int32)
tf.compat.v1.assign(self._situation, new_situation)
return tf_agent.LossInfo((), ())
sign_agent = SignAgent()
# We need to add another dimension here because the agent expects the
# trajectory of shape [batch_size, time, ...], but in this tutorial we assume
# that both batch size and time are 1. Hence all the expand_dims.
def trajectory_for_bandit(initial_step, action_step, final_step):
return trajectory.Trajectory(observation=tf.expand_dims(initial_step.observation, 0),
action=tf.expand_dims(action_step.action, 0),
policy_info=action_step.info,
reward=tf.expand_dims(final_step.reward, 0),
discount=tf.expand_dims(final_step.discount, 0),
step_type=tf.expand_dims(initial_step.step_type, 0),
next_step_type=tf.expand_dims(final_step.step_type, 0))
step = two_way_tf_environment.reset()
for _ in range(10):
action_step = sign_agent.collect_policy.action(step)
next_step = two_way_tf_environment.step(action_step.action)
experience = trajectory_for_bandit(step, action_step, next_step)
print(experience)
sign_agent.train(experience)
step = next_step
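# Added check (not in the original tutorial): after training on a few steps the
# agent should usually have inferred the reward sign, i.e. the situation
# variable it maintains is no longer 0 ("unknown").
print("Learned situation variable:", sign_agent._situation.numpy())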
# Imports for example.
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.environments import stationary_stochastic_py_environment as sspe
from tf_agents.bandits.metrics import tf_metrics
from tf_agents.drivers import dynamic_step_driver
from tf_agents.replay_buffers import tf_uniform_replay_buffer
import matplotlib.pyplot as plt
batch_size = 2 # @param
arm0_param = [-3, 0, 1, -2] # @param
arm1_param = [1, -2, 3, 0] # @param
arm2_param = [0, 0, 1, 1] # @param
def context_sampling_fn(batch_size):
    """Contexts from [-10, 10]^4."""
def _context_sampling_fn():
return np.random.randint(-10, 10, [batch_size, 4]).astype(np.float32)
return _context_sampling_fn
class LinearNormalReward(object):
    """A class that acts as linear reward function when called."""
def __init__(self, theta, sigma):
self.theta = theta
self.sigma = sigma
def __call__(self, x):
mu = np.dot(x, self.theta)
return np.random.normal(mu, self.sigma)
arm0_reward_fn = LinearNormalReward(arm0_param, 1)
arm1_reward_fn = LinearNormalReward(arm1_param, 1)
arm2_reward_fn = LinearNormalReward(arm2_param, 1)
environment = tf_py_environment.TFPyEnvironment(
sspe.StationaryStochasticPyEnvironment(
context_sampling_fn(batch_size),
[arm0_reward_fn, arm1_reward_fn, arm2_reward_fn],
batch_size=batch_size))
observation_spec = tensor_spec.TensorSpec([4], tf.float32)
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
dtype=tf.int32, shape=(), minimum=0, maximum=2)
agent = lin_ucb_agent.LinearUCBAgent(time_step_spec=time_step_spec,
action_spec=action_spec)
def compute_optimal_reward(observation):
expected_reward_for_arms = [
tf.linalg.matvec(observation, tf.cast(arm0_param, dtype=tf.float32)),
tf.linalg.matvec(observation, tf.cast(arm1_param, dtype=tf.float32)),
tf.linalg.matvec(observation, tf.cast(arm2_param, dtype=tf.float32))]
optimal_action_reward = tf.reduce_max(expected_reward_for_arms, axis=0)
return optimal_action_reward
regret_metric = tf_metrics.RegretMetric(compute_optimal_reward)
num_iterations = 90 # @param
steps_per_loop = 1 # @param
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.policy.trajectory_spec,
batch_size=batch_size,
max_length=steps_per_loop)
observers = [replay_buffer.add_batch, regret_metric]
driver = dynamic_step_driver.DynamicStepDriver(
env=environment,
policy=agent.collect_policy,
num_steps=steps_per_loop * batch_size,
observers=observers)
regret_values = []
for _ in range(num_iterations):
driver.run()
loss_info = agent.train(replay_buffer.gather_all())
replay_buffer.clear()
regret_values.append(regret_metric.result())
plt.plot(regret_values)
plt.ylabel('Average Regret')
plt.xlabel('Number of Iterations')
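# Added summary (not in the original tutorial): the average regret over the
# final iterations gives a single number that is convenient for comparing runs.
print('Mean regret over final 10 iterations:', np.mean(regret_values[-10:]))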
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tutorial on Multi Armed Bandits in TF-Agents
Step2: Imports
Step7: Introduction
Step8: The above interim abstract class implements PyEnvironment's _reset and _step functions and exposes the abstract functions _observe and _apply_action to be implemented by subclasses.
Step9: Now we can use this environment to get observations, and receive rewards for our actions.
Step10: TF Environments
Step11: Policies
Step12: Now we can request an observation from the environment, call the policy to choose an action, then the environment will output the reward
Step13: The way bandit environments are implemented ensures that every time we take a step, we not only receive the reward for the action we took, but also the next observation.
Step14: Agents
Step15: A More Complicated Policy
Step17: The Agent
Step18: In the above code, the agent defines the policy, and the variable situation is shared by the agent and the policy.
Step19: Training an Agent
Step20: From the output one can see that after the second step (unless the observation was 0 in the first step), the policy chooses the action in the right way and thus the reward collected is always non-negative.
Step23: Stationary Stochastic Environment with Linear Payoff Functions
Step24: The LinUCB Agent
Step25: Regret Metric
Step26: Training
|
15,234 | <ASSISTANT_TASK:>
Python Code:
import rebound
import numpy as np
import matplotlib.pyplot as plt
def setupSimulation():
''' Setup the 3-Body scenario'''
sim = rebound.Simulation()
sim.integrator = "ias15" # IAS15 is the default integrator, so we don't need this line
sim.add(m=1.)
sim.add(m=1e-3, a=1., r=np.sqrt(1e-3/3.)) # we now set collision radii!
sim.add(m=5e-3, a=1.25, r=1.25*np.sqrt(5e-3/3.))
sim.move_to_com()
return sim
sim = setupSimulation()
sim.collision = "direct"
sim.collision_resolve = "merge" # Built in function
print("Particles in the simulation at t=%6.1f: %d"%(sim.t,sim.N))
print("System Mass: {}".format([p.m for p in sim.particles]))
sim.integrate(100.)
print("Particles in the simulation at t=%6.1f: %d"%(sim.t,sim.N))
print("System Mass: {}".format([p.m for p in sim.particles]))
def my_merge(sim_pointer, collided_particles_index):
    sim = sim_pointer.contents # retrieve the standard simulation object
ps = sim.particles # easy access to list of particles
i = collided_particles_index.p1 # Note that p1 < p2 is not guaranteed.
j = collided_particles_index.p2
# This part is exciting! We can execute additional code during collisions now!
fig, ax = rebound.OrbitPlot(sim, xlim = (-1.3, 1.3), ylim = (-1.3, 1.3), color=['blue', 'green'])
ax.set_title("Merging particle {} into {}".format(j, i))
ax.text(ps[1].x, ps[1].y, "1");
ax.text(ps[2].x, ps[2].y, "2")
# So we plot the scenario exactly at the timestep that the collision function is triggered
# Merging Logic
total_mass = ps[i].m + ps[j].m
merged_planet = (ps[i] * ps[i].m + ps[j] * ps[j].m)/total_mass # conservation of momentum
# merged radius assuming a uniform density
merged_radius = (ps[i].r**3 + ps[j].r**3)**(1/3)
ps[i] = merged_planet # update p1's state vector (mass and radius will need corrections)
ps[i].m = total_mass # update to total mass
ps[i].r = merged_radius # update to joined radius
return 2 # remove particle with index j
sim = setupSimulation()
sim.collision = "direct"
ps = sim.particles
sim.collision_resolve = my_merge # user defined collision resolution function
sim.integrate(100.)
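# Added check (not in the original example): the custom merge above should
# leave two particles (the star plus the merged planet) and conserve the total
# mass of the system.
print("Particles remaining: %d" % sim.N)
print("Total mass: %f" % sum(p.m for p in sim.particles))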
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To reiterate the previous method, let's run the built-in merge collision resolution method
Step2: We can see above that two particles merged into one with a combined mass of 0.006.
Step3: Now we can set our new collision resolution function in the simulation object.
|
15,235 | <ASSISTANT_TASK:>
Python Code:
!pip install --upgrade apache-beam[gcp]
import os
import shutil
import numpy as np
import tensorflow as tf
from google import api_core
from google.cloud import aiplatform, bigquery
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
from matplotlib import pyplot as plt
print(tf.__version__)
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except api_core.exceptions.Conflict:
print("Dataset already exists.")
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except api_core.exceptions.Conflict:
print("Table already exists.")
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
# TODO 2a. Write a function to take most recent entry in `traffic_realtime`
# table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
    query_string = """
    SELECT
      *
    FROM
      `taxifare.traffic_realtime`
    ORDER BY
      time DESC
    LIMIT 1
    """
    trips = bq.query(query_string).to_dataframe()["trips_last_5min"][0]
    instance['traffic_last_5min'] = int(trips)
return instance
add_traffic_last_5min(
instance={
"dayofweek": 4,
"hourofday": 13,
"pickup_longitude": -73.99,
"pickup_latitude": 40.758,
"dropoff_latitude": 41.742,
"dropoff_longitude": -73.07,
}
)
# TODO 2b. Write code to call prediction on instance using realtime traffic
# info. Hint: Look at this sample
# https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py
# TODO: Copy the `ENDPOINT_RESOURCENAME` from the deployment in the previous
# lab.
ENDPOINT_RESOURCENAME = ""
api_endpoint = f"{REGION}-aiplatform.googleapis.com"
# The AI Platform services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple
# requests.
client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
instance = {
"dayofweek": 4,
"hourofday": 13,
"pickup_longitude": -73.99,
"pickup_latitude": 40.758,
"dropoff_latitude": 41.742,
"dropoff_longitude": -73.07,
}
# The format of each instance should conform to the deployed model's
# prediction input schema.
instance_dict = add_traffic_last_5min(instance)
instance = json_format.ParseDict(instance, Value())
instances = [instance]
response = client.predict(endpoint=ENDPOINT_RESOURCENAME, instances=instances)
# The predictions are a google.protobuf.Value representation of the model's
# predictions.
print(" prediction:", response.predictions[0])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
Step2: Re-train our model with trips_last_5min feature
Step3: Next, we create a table called traffic_realtime and set up the schema.
Step4: Launch Streaming Dataflow Pipeline
Step6: Make predictions from the new data
Step7: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
Step8: Finally, we'll use the python api to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predicitons change with time as our realtime traffic information changes as well.
|
15,236 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from thetis import *
lx = 40e3
ly = 2e3
nx = 25
ny = 2
mesh2d = RectangleMesh(nx, ny, lx, ly)
fig, ax = plt.subplots(figsize=(12,1))
triplot(mesh2d, axes=ax);
P1_2d = FunctionSpace(mesh2d, 'CG', 1)
bathymetry_2d = Function(P1_2d, name='Bathymetry')
depth = 20.0
bathymetry_2d.assign(depth);
solver_obj = solver2d.FlowSolver2d(mesh2d, bathymetry_2d)
options = solver_obj.options
options.simulation_end_time = 12 * 3600
options.simulation_export_time = 1200.0
options.output_directory = 'outputs_2d_channel'
options.fields_to_export_hdf5 = ['elev_2d', 'uv_2d']
options.timestepper_type = 'CrankNicolson'
options.timestep = 200.0
left_bnd_id = 1
right_bnd_id = 2
swe_bnd = {}
swe_bnd[left_bnd_id] = {'elev': Constant(0.0)}
tide_amp = 0.01 # amplitude of the tide
tide_t = 12 * 3600. # period of the tide
def tidal_elevation(simulation_time):
    """Time-dependent tidal elevation"""
elev = tide_amp * sin(2 * pi * simulation_time / tide_t)
return elev
plt.plot(np.arange(0,100000,500),[tidal_elevation(i) for i in np.arange(0,100000,500)])
plt.ylabel('Elev (m)')
plt.xlabel('t (sec)');
tide_elev_const = Constant(tidal_elevation(0))
swe_bnd[right_bnd_id] = {'elev': tide_elev_const}
solver_obj.bnd_functions['shallow_water'] = swe_bnd
def update_forcings(t_new):
    """Callback function that updates all time dependent forcing fields"""
uv, elev = solver_obj.fields.solution_2d.split()
tide_elev_const.assign(tidal_elevation(t_new))
solver_obj.iterate(update_forcings=update_forcings)
elev = Function(solver_obj.function_spaces.H_2d, name='elev_2d')
uv = Function(solver_obj.function_spaces.U_2d, name='uv_2d')
idx = 9 # which hdf5 do we want to open
filename = os.path.join(options.output_directory, 'hdf5','Elevation2d_%05d' % idx)
dc = DumbCheckpoint(filename, mode=FILE_READ)
dc.load(elev)
fig, ax = plt.subplots(figsize=(12,2))
tricontourf(elev, axes=ax, cmap=matplotlib.cm.coolwarm, levels=50)
plt.axis('equal');
idx = 9 # which hdf5 do we want to open
filename = os.path.join(options.output_directory, 'hdf5','Velocity2d_%05d' % idx)
dc = DumbCheckpoint(filename, mode=FILE_READ)
dc.load(uv)
fig, ax = plt.subplots(figsize=(12,2))
quiver(uv, axes=ax, cmap=matplotlib.cm.coolwarm)
plt.axis('equal');
last_idx = solver_obj.i_export
matplotlib.rcParams['figure.max_open_warning'] = last_idx+1 # avoid warning
for idx in range(last_idx+1):
filename = os.path.join(options.output_directory, 'hdf5','Elevation2d_%05d' % idx)
dc = DumbCheckpoint(filename, mode=FILE_READ)
dc.load(elev)
dc.close()
fig, ax = plt.subplots(figsize=(16,1))
tricontourf(elev, axes=ax, cmap=matplotlib.cm.coolwarm, levels=50)
# Firedrake sets an automatic colorbar range which therefore changes per timestep
# instead, we want to fix it:
cbar = fig.axes[0].collections[-1].set_clim(-tide_amp, tide_amp)
plt.axis('equal')
# setup the mesh:
lx = 40e3
ly = 2e3
nx = 25
ny = 2
mesh2d = RectangleMesh(nx, ny, lx, ly)
# setup the bathymetry:
P1_2d = FunctionSpace(mesh2d, 'CG', 1)
bathymetry_2d = Function(P1_2d, name='Bathymetry')
depth = 20.0
bathymetry_2d.assign(depth);
# setup the solver object and set some options
solver_obj = solver2d.FlowSolver2d(mesh2d, bathymetry_2d)
options = solver_obj.options
options.simulation_end_time = 12 * 3600
options.simulation_export_time = 1200.0
options.output_directory = 'outputs_2d_channel'
options.fields_to_export_hdf5 = ['elev_2d', 'uv_2d']
options.timestepper_type = 'CrankNicolson'
options.timestep = 200.0
# setup boundary conditions:
left_bnd_id = 1
right_bnd_id = 2
swe_bnd = {}
swe_bnd[left_bnd_id] = {'elev': Constant(0.0)}
tide_amp = 0.01 # amplitude of the tide
tide_t = 12 * 3600. # period of the tide
def tidal_elevation(simulation_time):
"""Time-dependent tidal elevation"""
elev = tide_amp * sin(2 * pi * simulation_time / tide_t)
return elev
tide_elev_const = Constant(tidal_elevation(0))
swe_bnd[right_bnd_id] = {'elev': tide_elev_const}
solver_obj.bnd_functions['shallow_water'] = swe_bnd
def update_forcings(t_new):
"""Callback function that updates all time dependent forcing fields"""
uv, elev = solver_obj.fields.solution_2d.split()
tide_elev_const.assign(tidal_elevation(t_new))
solver_obj.iterate(update_forcings=update_forcings)
x, y = SpatialCoordinate(mesh2d)
bathymetry_2d.interpolate(depth * (1-0.5*max_value(cos((x-lx/2)/lx*pi*2), 0)));
options.use_wetting_and_drying = True
x, y = SpatialCoordinate(mesh2d)
bathymetry_2d.interpolate(depth * x/lx);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We create a rectangular mesh using the builtin Firedrake mesh utility.
Step2: We can use the built in plot function of firedrake to visualise the mesh.
Step3: Next we define a bathymetry function in this domain using continuous linear elements, and set the bathymetry to a constant 20 m depth
Step4: We are now ready to create a 2D solver object. This object can be used to solve the depth-averaged shallow water equations controlled by some options that can be set by the user.
Step5: First we set the options for the total duration and export intervals. The latter determines how frequently Thetis writes output files.
Step6: We also select the directory in which the output files will appear, which will be created automatically if it does not yet exist
Step7: The standard output of Thetis is in the form of a .pvd file with an associated series of .vtu files (and .pvtu when running in parallel). For visualisation we recommend the program paraview. This program is however not available inside this jupyter environment. Therefore we will here use hdf5 output files instead, which can be used to read back in the solution in Firedrake and Thetis (unlike the .pvd+.vtu output which cannot be read back in by Firedrake). This output can also be used as checkpoints, from which Thetis can be restarted. Here we will use the hdf5 output to read it back into Firedrake and plot it.
Step8: Next we define the time integrator, and set the time step, which can be chosen freely since Crank-Nicolson is unconditionally stable. Of course it will influence the accuracy of the simulation.
Step9: Boundary conditions
Step10: At each boundary we can define the external value of the prognostic variables, i.e. in this case the water elevation and velocity. The value should be either a Firedrake Constant or a Firedrake Function (in case the boundary condition is not uniform in space).
Step12: Above we set the water elevation (using key 'elev') on the left boundary. Alternatively, we could also prescribe the normal velocity (with key 'un'), the full 2D velocity vector ('uv') or the total flux through the boundary ('flux'). For all supported boundary conditions, see module shallowwater_eq.
Step13: We then create a Constant object with the initial value, and assign it to the left boundary
Step14: Boundary conditions are now complete, and we assign them to the solver object
Step16: Note that if boundary conditions are not assigned for some boundaries (the lateral boundaries 3 and 4 in this case), Thetis assumes impermeable land conditions.
Step17: Finally we pass this callback to the time iterator and run the model
Step18: While the model is running, Thetis prints some statistics on the command line
Step19: The following code opens the 9th output. We use os.path.join to combine the different parts (base outputdirectory, hdf5 sub directory, and actual filename) into a filename.
Step20: Now we can read the solution into the elev and plot it
Step21: In the same way we can also load and plot the velocity field
Step22: The following plots the elevations of all exported timesteps
Step25: Exercises
Step26: Try to make the following changes
Step27: For now let's make sure the bathymetry is sufficiently deep, so that the total depth (including the free surface elevation) does not go negative! This avoids us having to deal with wetting and drying.
Step28: To simulate the run up onto a beach, you could try the following bathymetry
|
15,237 | <ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
import pandas as pd
pd.set_option('max_rows', 5)
import numpy as np
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
reviews
reviews.points.describe()
reviews.taster_name.describe()
reviews.points.mean()
reviews.taster_name.unique()
reviews.taster_name.value_counts()
review_points_mean = reviews.points.mean()
reviews.points.map(lambda p: p - review_points_mean)
def remean_points(row):
row.points = row.points - review_points_mean
return row
reviews.apply(remean_points, axis='columns')
reviews.head(1)
review_points_mean = reviews.points.mean()
reviews.points - review_points_mean
reviews.country + " - " + reviews.region_1
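# A hedged alternative to the vectorized concatenation above: a row-wise apply
# that builds a similar string (NaN handling differs slightly, and it is much
# slower; shown only to illustrate apply with axis='columns').
reviews.apply(lambda row: "{} - {}".format(row.country, row.region_1), axis='columns')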
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summary functions
Step2: This method generates a high-level summary of the attributes of the given column. It is type-aware, meaning that its output changes based on the data type of the input. The output above only makes sense for numerical data; for string data here's what we get
Step3: If you want to get some particular simple summary statistic about a column in a DataFrame or a Series, there is usually a helpful pandas function that makes it happen.
Step4: To see a list of unique values we can use the unique() function
Step5: To see a list of unique values and how often they occur in the dataset, we can use the value_counts() method
Step6: Maps
Step7: The function you pass to map() should expect a single value from the Series (a point value, in the above example), and return a transformed version of that value. map() returns a new Series where all the values have been transformed by your function.
Step8: If we had called reviews.apply() with axis='index', then instead of passing a function to transform each row, we would need to give a function to transform each column.
Step9: Pandas provides many common mapping operations as built-ins. For example, here's a faster way of remeaning our points column
Step10: In this code we are performing an operation between a lot of values on the left-hand side (everything in the Series) and a single value on the right-hand side (the mean value). Pandas looks at this expression and figures out that we must mean to subtract that mean value from every value in the dataset.
|
15,238 | <ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'query':'', # SQL with newlines and all.
'dataset':'', # Existing BigQuery dataset.
'table':'', # Table to create from this query.
'legacy':True, # Query type must match source tables.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'from':{
'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':4,'default':True,'description':'Query type must match source tables.'}}
},
'to':{
'dataset':{'field':{'name':'dataset','kind':'string','order':2,'default':'','description':'Existing BigQuery dataset.'}},
'table':{'field':{'name':'table','kind':'string','order':3,'default':'','description':'Table to create from this query.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
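# Optional hedged check: json_set_fields is assumed to substitute the field
# placeholders in place, so the resolved recipe can be inspected before running.
import json
print(json.dumps(TASKS, indent=2, default=str))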
execute(CONFIG, TASKS, force=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter BigQuery Query To Table Recipe Parameters
Step3: 4. Execute BigQuery Query To Table
|
15,239 | <ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
import matplotlib.pyplot as plt
bost = datasets.load_boston()
fig = plt.figure(figsize=(15, 10))
for i in range(0, 12):
ax = fig.add_subplot(3, 4, i + 1)
ax.set_xlabel(bost.feature_names[i])
xs, ys = bost.data[:, i], bost.target
plt.scatter(xs, ys, marker='.')
plt.show()
diab = datasets.load_diabetes()
fig = plt.figure(figsize=(15, 15))
for i in range(0, 10):
ax = fig.add_subplot(4, 3, i + 1)
ax.set_xlabel(diab['feature_names'][i])
xs, ys = diab.data[:, i], diab.target
plt.scatter(xs, ys, marker='.')
plt.show()
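# Rough numeric companion to the scatter plots above: Pearson correlation of
# each diabetes feature with the target (plain NumPy, no extra assumptions).
import numpy as np
for name, col in zip(diab['feature_names'], diab.data.T):
    print('%-6s r = % .3f' % (name, np.corrcoef(col, diab.target)[0, 1]))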
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Goals
|
15,240 | <ASSISTANT_TASK:>
Python Code:
import torch
import gpytorch
import math
from matplotlib import cm
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
%load_ext autoreload
%autoreload 2
def franke(X, Y):
term1 = .75*torch.exp(-((9*X - 2).pow(2) + (9*Y - 2).pow(2))/4)
term2 = .75*torch.exp(-((9*X + 1).pow(2))/49 - (9*Y + 1)/10)
term3 = .5*torch.exp(-((9*X - 7).pow(2) + (9*Y - 3).pow(2))/4)
term4 = .2*torch.exp(-(9*X - 4).pow(2) - (9*Y - 7).pow(2))
f = term1 + term2 + term3 - term4
dfx = -2*(9*X - 2)*9/4 * term1 - 2*(9*X + 1)*9/49 * term2 + \
-2*(9*X - 7)*9/4 * term3 + 2*(9*X - 4)*9 * term4
dfy = -2*(9*Y - 2)*9/4 * term1 - 9/10 * term2 + \
-2*(9*Y - 3)*9/4 * term3 + 2*(9*Y - 7)*9 * term4
return f, dfx, dfy
xv, yv = torch.meshgrid([torch.linspace(0, 1, 10), torch.linspace(0, 1, 10)])
train_x = torch.cat((
xv.contiguous().view(xv.numel(), 1),
yv.contiguous().view(yv.numel(), 1)),
dim=1
)
f, dfx, dfy = franke(train_x[:, 0], train_x[:, 1])
train_y = torch.stack([f, dfx, dfy], -1).squeeze(1)
train_y += 0.05 * torch.randn(train_y.size()) # Add noise to both values and gradients
class GPModelWithDerivatives(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPModelWithDerivatives, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMeanGrad()
self.base_kernel = gpytorch.kernels.RBFKernelGrad(ard_num_dims=2)
self.covar_module = gpytorch.kernels.ScaleKernel(self.base_kernel)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=3) # Value + x-derivative + y-derivative
model = GPModelWithDerivatives(train_x, train_y, likelihood)
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.05) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print("Iter %d/%d - Loss: %.3f lengthscales: %.3f, %.3f noise: %.3f" % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.squeeze()[0],
model.covar_module.base_kernel.lengthscale.squeeze()[1],
model.likelihood.noise.item()
))
optimizer.step()
# Set into eval mode
model.eval()
likelihood.eval()
# Initialize plots
fig, ax = plt.subplots(2, 3, figsize=(14, 10))
# Test points
n1, n2 = 50, 50
xv, yv = torch.meshgrid([torch.linspace(0, 1, n1), torch.linspace(0, 1, n2)])
f, dfx, dfy = franke(xv, yv)
# Make predictions
with torch.no_grad(), gpytorch.settings.fast_computations(log_prob=False, covar_root_decomposition=False):
test_x = torch.stack([xv.reshape(n1*n2, 1), yv.reshape(n1*n2, 1)], -1).squeeze(1)
predictions = likelihood(model(test_x))
mean = predictions.mean
extent = (xv.min(), xv.max(), yv.max(), yv.min())
ax[0, 0].imshow(f, extent=extent, cmap=cm.jet)
ax[0, 0].set_title('True values')
ax[0, 1].imshow(dfx, extent=extent, cmap=cm.jet)
ax[0, 1].set_title('True x-derivatives')
ax[0, 2].imshow(dfy, extent=extent, cmap=cm.jet)
ax[0, 2].set_title('True y-derivatives')
ax[1, 0].imshow(mean[:, 0].detach().numpy().reshape(n1, n2), extent=extent, cmap=cm.jet)
ax[1, 0].set_title('Predicted values')
ax[1, 1].imshow(mean[:, 1].detach().numpy().reshape(n1, n2), extent=extent, cmap=cm.jet)
ax[1, 1].set_title('Predicted x-derivatives')
ax[1, 2].imshow(mean[:, 2].detach().numpy().reshape(n1, n2), extent=extent, cmap=cm.jet)
ax[1, 2].set_title('Predicted y-derivatives')
None
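# Hedged fit check: RMSE of the predicted function values against the analytic
# Franke surface on the test grid (the flattening order matches test_x above).
rmse = torch.sqrt(torch.mean((mean[:, 0] - f.reshape(-1)) ** 2))
print('value RMSE: %.4f' % rmse.item())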
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Franke function
Step2: Setting up the training data
Step3: Setting up the model
Step4: The model training is similar to training a standard GP regression model
Step5: Model predictions are also similar to GP regression with only function values, but we need more CG iterations to get accurate estimates of the predictive variance
|
15,241 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pinkfish as pf
# format price data
pd.options.display.float_format = '{:0.2f}'.format
# increase display of dataframe rows
pd.set_option('display.max_rows', 1000)
df = pf.get_symbol_metadata(symbols=['msft', 'orcl', 'tsla'])
df.sort_values('num_years', ascending=False).reset_index(drop=True)
df = pf.get_symbol_metadata(dir_name='data')
df.sort_values('num_years', ascending=False).reset_index(drop=True)
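# Hedged example filter: keep only symbols with at least 10 years of history,
# using the num_years column returned by get_symbol_metadata.
df[df['num_years'] >= 10].reset_index(drop=True)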
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get metadata for the symbols below.
Step2: Get metadata for all symbols in the cache directory
|
15,242 | <ASSISTANT_TASK:>
Python Code:
import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create log pdf (default is 2-dimensional with r0=10 and sigma=1)
log_pdf = pints.toy.AnnulusLogPDF()
# Contour plot of pdf
num_points = 100
x = np.linspace(-15, 15, num_points)
y = np.linspace(-15, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
samples = log_pdf.sample(100)
num_points = 100
x = np.linspace(-15, 15, num_points)
y = np.linspace(-15, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.scatter(samples[:, 0], samples[:, 1])
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
stacked = np.vstack(chains)
plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha=0.2)
plt.xlim(-15, 15)
plt.ylim(-15, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
sigma0 = [2, 2]
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HamiltonianMCMC, sigma0=sigma0)
# Set maximum number of iterations
mcmc.set_max_iterations(500)
# Disable logging
# mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[200:] for chain in chains]
plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.plot(chains[0][:, 0], chains[0][:, 1])
plt.xlim(-15, 15)
plt.ylim(-15, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
log_pdf = pints.toy.AnnulusLogPDF(dimensions=3, r0=20, sigma=0.5)
# Create an adaptive covariance MCMC routine
x0 = np.zeros(log_pdf.n_parameters()) + np.random.normal(0, 1, size=(4, log_pdf.n_parameters()))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
stacked = np.vstack(chains)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(stacked[:, 0], stacked[:, 1], stacked[:, 2], '.', alpha=0.1)
ax.legend()
plt.show()
a_mean = np.mean(stacked, axis=0)
print("True mean = " + str(log_pdf.mean()))
print("Sample mean = " + str(a_mean))
log_pdf = pints.toy.AnnulusLogPDF(dimensions=10, r0=15, sigma=2)
# Create an adaptive covariance MCMC routine
x0 = np.zeros(log_pdf.n_parameters()) + np.random.normal(0, 1, size=(4, log_pdf.n_parameters()))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(8000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
chain = np.vstack(chains)
d = list(map(lambda x: np.linalg.norm(x), chain))
a_mean = np.mean(d)
a_var = np.var(d)
print("True normed mean = " + str(log_pdf.mean_normed()))
print("Sample normed mean = " + str(a_mean))
print("True normed var = " + str(log_pdf.var_normed()))
print("Sample normed var = " + str(a_var))
a_mean = np.mean(chain, axis=0)
print("True mean = " + str(log_pdf.mean()))
print("Sample mean = " + str(a_mean))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate independent samples from this distribution and plot them
Step2: Use adaptive covariance MCMC to sample from this (un-normalised) pdf.
Step3: Scatter plot of the samples. Adaptive covariance MCMC seems to do ok at sampling from this distribution.
Step4: Try Hamiltonian Monte Carlo on same problem.
Step5: A single chain of HMC moves much more naturally around the annulus.
Step6: 3-dimensional annulus
Step7: The samples are near to the surface of a sphere of radius 20.
Step8: We can see that the mean of the samples is a long way from the true value (0, 0, 0)
Step9: 10-dimensional annulus
Step10: Compare the theoretical mean and variance of the normed distance from the origin with the sample-based estimates. Does ok!
Step11: Less good at recapitulating the actual mean.
|
15,243 | <ASSISTANT_TASK:>
Python Code:
# You might have to set the path to run this notebook directly from ipython notebook
import sys
my_path_to_modules = "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/"
sys.path.append(my_path_to_modules)
from pergola import mapping
# load mapping file
mapping_info = mapping.MappingInfo("../../sample_data/feeding_behavior/b2p.txt")
mapping_info.write()
from pergola import parsers
from pergola import intervals
# load the data into an IntData object that will store the sequence of events
int_data = intervals.IntData("../../sample_data/feeding_behavior/feeding_behavior_HF_mice.csv", map_dict=mapping_info.correspondence)
#Displays first 10 tuples of data list
int_data.data[:10]
int_data.data_types
int_data.min
int_data.max
int_data.tracks
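# Hedged summary built only from the attributes shown above: total span of the
# recorded events (units as stored in the input file).
print("experiment length: %s" % (int_data.max - int_data.min))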
mapping_info.write()
mapping_info.correspondence['EndT']
int_data_read = int_data.read(relative_coord=True)
int_data_read.list_tracks
int_data_read.range_values
dict_bed = int_data_read.convert(mode='bed')
#dict_bed = data_read.convert(mode='bed')
for key in dict_bed:
print "key.......: ",key#del
bedSingle = dict_bed [key]
print "::::::::::::::",bedSingle.data_types
bed_12_food_sc = dict_bed[('2', 'food_sc')]
bed_12_food_sc.range_values
type(bed_12_food_sc)
bed_12_food_sc.data
# Code to print the data inside a bed object (generator object)
#for row in bed_12_food_sc.data:
# print row
dict_bedGraph = int_data_read.convert(mode='bedGraph')
for key in dict_bedGraph:
print "key.......: ",key#del
bedGraphSingle = dict_bedGraph [key]
print "::::::::::::::",bedGraphSingle.data_types
bedG_8_food_sc = dict_bedGraph[('8', 'food_sc')]
bedG_8_food_sc.data
# Code to print the data inside a bed object (generator object)
#for row in bedG_8_food_sc:
# print row
type(int_data_read)
type(int_data_read.data)
int_data_read.range_values
int_data_read.list_tracks
int_data_read.data[-10]
int_data_read.data_types
#data_read.convert(mode=write_format, tracks=sel_tracks, tracks_merge=tracks2merge,
# data_types=data_types_list, dataTypes_actions=dataTypes_act,
# window=window_size)
mapping.write_chr (int_data_read)
# Generate a cytoband file and a bed file with phases
mapping.write_cytoband(end = int_data.max - int_data.min, delta=43200, start_phase="dark", lab_bed=False)
#data_read = intData.read(relative_coord=True, multiply_t=1)
data_read = int_data.read(relative_coord=True)
#for i in data_read.data:
# print i
data_type_col = {'food_sc': 'orange', 'food_fat':'blue'}
bed_str = data_read.convert(mode="bed", data_types=["food_sc", "food_fat"], dataTypes_actions="all",
color_restrictions=data_type_col)
for key in bed_str:
bedSingle = bed_str[key]
bedSingle.save_track()
data_type_col_bedGraph = {'food_sc':'orange', 'food_fat_food_sc':'blue'}
bedGraph_str = data_read.convert(mode="bedGraph", window=1800, data_types=["food_sc", "food_fat"], dataTypes_actions="all", color_restrictions=data_type_col_bedGraph)
for key in bedGraph_str:
bedGraph_single = bedGraph_str[key]
bedGraph_single.save_track()
## Bed file showing the files (recordings)
# reading correspondence file
mapping_file_data = mapping.MappingInfo("../../sample_data/feeding_behavior/f2g.txt")
mapping_file_data.write()
# Reading file info
files_data = intervals.IntData("../../sample_data/feeding_behavior/files.csv", map_dict=mapping_file_data.correspondence)
data_file_read = files_data.read(relative_coord=True)
bed_file = data_file_read.convert(mode="bed", dataTypes_actions="all", tracks_merge=files_data.tracks)
for key in bed_file:
bed_file_single = bed_file[key]
bed_file_single.save_track(name_file = "files_data")
# Reading phase info
phase_data = intervals.IntData("../../sample_data/feeding_behavior/phases_exp.csv", map_dict=mapping_file_data.correspondence)
data_phase_read = phase_data.read(relative_coord=True)
bed_file = data_phase_read.convert(mode="bed", dataTypes_actions="all", tracks_merge=phase_data.tracks)
for key in bed_file:
bed_file_single = bed_file[key]
bed_file_single.save_track(name_file = "phase_exp")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MappingInfo objects
Step2: To view the mappings, MappingInfo objects provide the write() method
Step3: MappingInfo objects are needed to load data into IntData objects as it will be explained in the lines below.
Step4: Intervals when loaded are stored in a list of tuples that can be accessed by data attribute
Step5: IntData objects also provide some other attributes like the set of different tracks (term for IDs in pergola ontology) contained in the data
Step6: The minimum value present in the data
Step7: The maximum value
Step8: The set of different tracks present in the data (term for different IDs in pergola ontology). In this case the different IDs for each mouse
Step9: And finally the dataTypes (term for different types of data in pergola ontology) that can be used to encode for example different behaviours
Step10: Data conversion
Step11: Track object
Step12: Output data
Step13: bedGraph files
|
15,244 | <ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -p numpy,pandas,matplotlib,sklearn
# Use the IPython/jupyter feature to show images inline with the notebook
# output rather than have images popup.
from IPython.display import Image
%matplotlib inline
# Sample csv
import pandas as pd
from io import StringIO
import sys
csv_data = \
'''A,B,C,D
1.0,2.0,3.0,4.0
5.0,6.0,,8.0
10.0,11.0,12.0,'''
# If you are using Python 2.7, you need
# to convert the string to unicode:
if (sys.version_info < (3, 0)):
csv_data = unicode(csv_data)
df = pd.read_csv(StringIO(csv_data))
df
# Give a count of null values for each column
df.isnull().sum()
# access the underlying NumPy array
# via the `values` attribute
df.values
# remove rows that contain missing values
df.dropna(axis=0)
# remove columns that contain missing values
df.dropna(axis=1)
# remove columns that contain missing values
df.dropna(axis=1)
csv_data = \
'''A,B,C,D
1.0,2.0,3.0,4.0
,,,
4.0,,,
4.0,6.0,,
5.0,6.0,,8.0
10.0,11.0,12.0,'''
# If you are using Python 2.7, you need
# to convert the string to unicode:
if (sys.version_info < (3, 0)):
csv_data = unicode(csv_data)
df2 = pd.read_csv(StringIO(csv_data))
df2
# only drop rows where all columns are NaN
df2.dropna(how='all')
# drop rows that have less than 3 real values
df2.dropna(thresh=3)
# only drop rows where NaN appear in specific columns (here: 'C')
df2.dropna(subset=['C'])
# again: our original array
df.values
# impute missing values via the column mean
from sklearn.preprocessing import Imputer
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(df.values)
imputed_data = imr.transform(df.values)
imputed_data
# impute missing values via the row mean
imr = Imputer(missing_values='NaN', strategy='mean', axis=1)
imr = imr.fit(df.values)
imputed_data = imr.transform(df.values)
imputed_data
Image(filename='images/04_01.png', width=400)
Image(filename='images/04_02.png', width=300)
import pandas as pd
df = pd.DataFrame([['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1']])
df.columns = ['color', 'size', 'price', 'classlabel']
df
size_mapping = {'XL': 3,
'L': 2,
'M': 1}
df['size'] = df['size'].map(size_mapping)
df
inv_size_mapping = {v: k for k, v in size_mapping.items()}
df['size'].map(inv_size_mapping)
import numpy as np
# create a mapping dict
# to convert class labels from strings to integers
class_mapping = {label: idx for idx, label in enumerate(np.unique(df['classlabel']))}
class_mapping
# class_mapping is the code we pass to the map function
# to convert class labels from strings to integers
df['classlabel'] = df['classlabel'].map(class_mapping)
df
# reverse the class label mapping
inv_class_mapping = {v: k for k, v in class_mapping.items()}
df['classlabel'] = df['classlabel'].map(inv_class_mapping)
df
# To avoid doing this by hand we can use the sklearn.preprocessing library
# LabelEncoder method
from sklearn.preprocessing import LabelEncoder
# Label encoding with sklearn's LabelEncoder
class_le = LabelEncoder()
y = class_le.fit_transform(df['classlabel'].values)
y
# reverse mapping
class_le.inverse_transform(y)
# Just looking at color, size & price we can convert non numeric data with
# the LabelEncoder
X = df[['color', 'size', 'price']].values
color_le = LabelEncoder()
X[:, 0] = color_le.fit_transform(X[:, 0])
X
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(categorical_features=[0])
ohe.fit_transform(X).toarray()
# return dense array so that we can skip
# the toarray step
ohe = OneHotEncoder(categorical_features=[0], sparse=False)
ohe.fit_transform(X)
df
# one-hot encoding via pandas - just color as a nominal value
pd.get_dummies(df[['price', 'color', 'size']])
# one-hot encoding via pandas - both color and class label as nominal values
pd.get_dummies(df[['price', 'color', 'size','classlabel']])
# multicollinearity guard in get_dummies
pd.get_dummies(df[['price', 'color', 'size']], drop_first=True)
# multicollinearity guard in get_dummies
# - both color and class label as nominal values
pd.get_dummies(df[['price', 'color', 'size','classlabel']], drop_first=True)
X
# multicollinearity guard for the OneHotEncoder
ohe = OneHotEncoder(categorical_features=[0])
ohe.fit_transform(X).toarray()
ohe.fit_transform(X).toarray()[:, 1:]
df_wine = pd.read_csv('https://archive.ics.uci.edu/'
'ml/machine-learning-databases/wine/wine.data',
header=None)
# if the Wine dataset is temporarily unavailable from the
# UCI machine learning repository, un-comment the following line
# of code to load the dataset from a local path:
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines',
'Proline']
print('Class labels', np.unique(df_wine['Class label']))
df_wine.head()
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test =\
train_test_split(X, y,
test_size=0.3,
random_state=0,
stratify=y)
# X data
# y class label that will be used to train
# Test size 0.3 = 30% test data, the rest training data
from sklearn.preprocessing import MinMaxScaler
mms = MinMaxScaler()
X_train_norm = mms.fit_transform(X_train)
X_test_norm = mms.transform(X_test)
X[0,:]
X_train[0,:]
X_train_norm[0,:]
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
X_train_std[0,:]
ex = np.array([0, 1, 2, 3, 4, 5])
print('standardized:', (ex - ex.mean()) / ex.std())
# Please note that pandas uses ddof=1 (sample standard deviation)
# by default, whereas NumPy's std method and the StandardScaler
# uses ddof=0 (population standard deviation)
# normalize
print('normalized:', (ex - ex.min()) / (ex.max() - ex.min()))
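# Hedged cross-check: the manual z-score above should match sklearn's
# StandardScaler on the same toy array (both use the population std, ddof=0).
print('scaler      :', StandardScaler().fit_transform(ex.reshape(-1, 1)).flatten())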
Image(filename='images/04_04.png', width=500)
Image(filename='images/04_05.png', width=500)
Image(filename='images/04_06.png', width=500)
from sklearn.linear_model import LogisticRegression
LogisticRegression(penalty='l1')
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(penalty='l1', C=1.0)
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy :', lr.score(X_test_std, y_test))
lr.intercept_
# A numpy function to set precision
np.set_printoptions(8)
lr.coef_[lr.coef_!=0].shape
lr.coef_
import matplotlib.pyplot as plt
fig = plt.figure()
ax = plt.subplot(111)
colors = ['blue', 'green', 'red', 'cyan',
'magenta', 'yellow', 'black',
'pink', 'lightgreen', 'lightblue',
'gray', 'indigo', 'orange']
weights, params = [], []
for c in np.arange(-4., 6.):
lr = LogisticRegression(penalty='l1', C=10.**c, random_state=0)
lr.fit(X_train_std, y_train)
weights.append(lr.coef_[1])
params.append(10**c)
weights = np.array(weights)
for column, color in zip(range(weights.shape[1]), colors):
plt.plot(params, weights[:, column],
label=df_wine.columns[column + 1],
color=color)
plt.axhline(0, color='black', linestyle='--', linewidth=3)
plt.xlim([10**(-5), 10**5])
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.xscale('log')
plt.legend(loc='upper left')
ax.legend(loc='upper center',
bbox_to_anchor=(1.38, 1.03),
ncol=1, fancybox=True)
#plt.savefig('images/04_07.png', dpi=300,
# bbox_inches='tight', pad_inches=0.2)
plt.show()
lr = LogisticRegression(penalty='l1', C=0.01)
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy :', lr.score(X_test_std, y_test))
lr = LogisticRegression(penalty='l1', C=0.1)
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy :', lr.score(X_test_std, y_test))
lr = LogisticRegression(penalty='l1', C=1)
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy :', lr.score(X_test_std, y_test))
lr = LogisticRegression(penalty='l1', C=10)
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy :', lr.score(X_test_std, y_test))
from sklearn.base import clone
from itertools import combinations
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
class SBS():
def __init__(self, estimator, k_features, scoring=accuracy_score,
test_size=0.25, random_state=1):
self.scoring = scoring
self.estimator = clone(estimator)
self.k_features = k_features
self.test_size = test_size
self.random_state = random_state
def fit(self, X, y):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=self.test_size,
random_state=self.random_state)
dim = X_train.shape[1]
self.indices_ = tuple(range(dim))
self.subsets_ = [self.indices_]
score = self._calc_score(X_train, y_train,
X_test, y_test, self.indices_)
self.scores_ = [score]
while dim > self.k_features:
scores = []
subsets = []
for p in combinations(self.indices_, r=dim - 1):
score = self._calc_score(X_train, y_train,
X_test, y_test, p)
scores.append(score)
subsets.append(p)
best = np.argmax(scores)
self.indices_ = subsets[best]
self.subsets_.append(self.indices_)
dim -= 1
self.scores_.append(scores[best])
self.k_score_ = self.scores_[-1]
return self
def transform(self, X):
return X[:, self.indices_]
def _calc_score(self, X_train, y_train, X_test, y_test, indices):
self.estimator.fit(X_train[:, indices], y_train)
y_pred = self.estimator.predict(X_test[:, indices])
score = self.scoring(y_test, y_pred)
return score
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
# selecting features
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)
# plotting performance of feature subsets
k_feat = [len(k) for k in sbs.subsets_]
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([0.7, 1.02])
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.grid()
plt.tight_layout()
# plt.savefig('images/04_08.png', dpi=300)
plt.show()
k3 = list(sbs.subsets_[10])
print(df_wine.columns[1:][k3])
k6 = list(sbs.subsets_[7])
print(df_wine.columns[1:][k6])
knn.fit(X_train_std, y_train)
print('Training accuracy: %0.3f' % knn.score(X_train_std, y_train))
print('Test accuracy : %0.3f' % knn.score(X_test_std, y_test))
knn.fit(X_train_std[:, k3], y_train)
print('Training accuracy: %0.3f' % knn.score(X_train_std[:, k3], y_train))
print('Test accuracy : %0.3f' % knn.score(X_test_std[:, k3], y_test))
knn.fit(X_train_std[:, k6], y_train)
print('Training accuracy: %0.3f' % knn.score(X_train_std[:, k6], y_train))
print('Test accuracy : %0.3f' % knn.score(X_test_std[:, k6], y_test))
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_wine.columns[1:]
forest = RandomForestClassifier(n_estimators=500,
random_state=1)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30,
feat_labels[indices[f]],
importances[indices[f]]))
plt.title('Feature Importance')
plt.bar(range(X_train.shape[1]),
importances[indices],
align='center')
plt.xticks(range(X_train.shape[1]),
feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
#plt.savefig('images/04_09.png', dpi=300)
plt.show()
from sklearn.feature_selection import SelectFromModel
sfm = SelectFromModel(forest, threshold=0.1, prefit=True)
X_selected = sfm.transform(X_train)
print('Number of samples that meet this criterion: %d out of in the training set %d' % (X_selected.shape[0], X_train.shape[0]))
for f in range(X_selected.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30,
feat_labels[indices[f]],
importances[indices[f]]))
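# Hedged follow-up: the selector's boolean mask gives the retained column names
# directly, without re-deriving them from the sorted importances.
print(feat_labels[sfm.get_support()].tolist())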
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: Dealing with missing data
Step3: <br>
Step4: <br>
Step5: <br>
Step6: Here we are using fit and transform methods as a preprocessor, for example from the Min Max Scaler, to map data from it's original form to one better suited for Machine learning.
Step7: This time the fit method from, for example from Logistic Regression, is used with training data and training labels to generate a model.
Step8: <br>
Step9: And we can map these back if required
Step10: <br>
Step11: <br>
Step12: What is the problem with this approach?
Step13: <br>
Step14: <br>
Step15: A visual example
Step16: <br>
Step17: Sparse solutions with L1-regularization
Step18: For regularized models in scikit-learn that support L1 regularization, we can simply set the penalty parameter to 'l1' to obtain a sparse solution
Step19: Applied to the standardized Wine data ...
Step20: This shows the intercept of each of the three models being used.
Step21: Here we can see the total number of weights that have not been brought to zero by using L1 regularization out of the maximum of 39.
Step22: Here we can see all the weights for the three classes and the 13 dimensions in the wine dataset.
Step23: With this information we can now graph how the regularization strength affects the weights.
Step24: Below we can see the effect on the training and test accuracy.
Step25: <br>
Step26: <br>
Step27: This is great for finding discriminative features, with one gotcha: if two or more features are highly correlated, one feature may be ranked highly while the information in the other feature(s) may not be fully captured. Not a problem if model performance is key, but it would be if understanding feature importance is.
Step28: <br>
|
15,245 | <ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform[full] $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
! pip3 install $USER kfp google-cloud-pipeline-components --upgrade
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "python-docs-samples-tests":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
import google.cloud.aiplatform as aip
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
PIPELINE_ROOT = "{}/pipeline_root/beans".format(BUCKET_NAME)
from typing import NamedTuple
import kfp
from google_cloud_pipeline_components import aiplatform as gcc_aip
from kfp.v2 import dsl
from kfp.v2.dsl import (Artifact, ClassificationMetrics, Input, Metrics,
Output, component)
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
@component(
base_image="gcr.io/deeplearning-platform-release/tf2-cpu.2-6:latest",
output_component_file="tabular_eval_component.yaml",
packages_to_install=["google-cloud-aiplatform"],
)
def classification_model_eval_metrics(
project: str,
location: str, # "us-central1",
api_endpoint: str, # "us-central1-aiplatform.googleapis.com",
thresholds_dict_str: str,
model: Input[Artifact],
metrics: Output[Metrics],
metricsc: Output[ClassificationMetrics],
) -> NamedTuple("Outputs", [("dep_decision", str)]): # Return parameter.
import json
import logging
from google.cloud import aiplatform as aip
# Fetch model eval info
def get_eval_info(client, model_name):
from google.protobuf.json_format import MessageToDict
response = client.list_model_evaluations(parent=model_name)
metrics_list = []
metrics_string_list = []
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
logging.info("metric: %s, value: %s", metric, metrics[metric])
metrics_str = json.dumps(metrics)
metrics_list.append(metrics)
metrics_string_list.append(metrics_str)
return (
evaluation.name,
metrics_list,
metrics_string_list,
)
# Use the given metrics threshold(s) to determine whether the model is
# accurate enough to deploy.
def classification_thresholds_check(metrics_dict, thresholds_dict):
for k, v in thresholds_dict.items():
logging.info("k {}, v {}".format(k, v))
if k in ["auRoc", "auPrc"]: # higher is better
if metrics_dict[k] < v: # if under threshold, don't deploy
logging.info("{} < {}; returning False".format(metrics_dict[k], v))
return False
logging.info("threshold checks passed.")
return True
def log_metrics(metrics_list, metricsc):
test_confusion_matrix = metrics_list[0]["confusionMatrix"]
logging.info("rows: %s", test_confusion_matrix["rows"])
# log the ROC curve
fpr = []
tpr = []
thresholds = []
for item in metrics_list[0]["confidenceMetrics"]:
fpr.append(item.get("falsePositiveRate", 0.0))
tpr.append(item.get("recall", 0.0))
thresholds.append(item.get("confidenceThreshold", 0.0))
print(f"fpr: {fpr}")
print(f"tpr: {tpr}")
print(f"thresholds: {thresholds}")
metricsc.log_roc_curve(fpr, tpr, thresholds)
# log the confusion matrix
annotations = []
for item in test_confusion_matrix["annotationSpecs"]:
annotations.append(item["displayName"])
logging.info("confusion matrix annotations: %s", annotations)
metricsc.log_confusion_matrix(
annotations,
test_confusion_matrix["rows"],
)
# log textual metrics info as well
for metric in metrics_list[0].keys():
if metric != "confidenceMetrics":
val_string = json.dumps(metrics_list[0][metric])
metrics.log_metric(metric, val_string)
# metrics.metadata["model_type"] = "AutoML Tabular classification"
logging.getLogger().setLevel(logging.INFO)
aip.init(project=project)
# extract the model resource name from the input Model Artifact
model_resource_path = model.metadata["resourceName"]
logging.info("model path: %s", model_resource_path)
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
client = aip.gapic.ModelServiceClient(client_options=client_options)
eval_name, metrics_list, metrics_str_list = get_eval_info(
client, model_resource_path
)
logging.info("got evaluation name: %s", eval_name)
logging.info("got metrics list: %s", metrics_list)
log_metrics(metrics_list, metricsc)
thresholds_dict = json.loads(thresholds_dict_str)
deploy = classification_thresholds_check(metrics_list[0], thresholds_dict)
if deploy:
dep_decision = "true"
else:
dep_decision = "false"
logging.info("deployment decision is %s", dep_decision)
return (dep_decision,)
DISPLAY_NAME = "automl-beans{}".format(TIMESTAMP)
PIPELINE_NAME = "automl-tabular-beans-training-v2"
MACHINE_TYPE = "n1-standard-4"
@kfp.dsl.pipeline(name=PIPELINE_NAME, pipeline_root=PIPELINE_ROOT)
def pipeline(
bq_source: str = "bq://aju-dev-demos.beans.beans1",
display_name: str = DISPLAY_NAME,
project: str = PROJECT_ID,
gcp_region: str = REGION,
api_endpoint: str = API_ENDPOINT,
thresholds_dict_str: str = '{"auRoc": 0.95}',
):
dataset_create_op = gcc_aip.TabularDatasetCreateOp(
project=project, display_name=display_name, bq_source=bq_source
)
training_op = gcc_aip.AutoMLTabularTrainingJobRunOp(
project=project,
display_name=display_name,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
budget_milli_node_hours=1000,
column_specs={
"Area": "numeric",
"Perimeter": "numeric",
"MajorAxisLength": "numeric",
"MinorAxisLength": "numeric",
"AspectRation": "numeric",
"Eccentricity": "numeric",
"ConvexArea": "numeric",
"EquivDiameter": "numeric",
"Extent": "numeric",
"Solidity": "numeric",
"roundness": "numeric",
"Compactness": "numeric",
"ShapeFactor1": "numeric",
"ShapeFactor2": "numeric",
"ShapeFactor3": "numeric",
"ShapeFactor4": "numeric",
"Class": "categorical",
},
dataset=dataset_create_op.outputs["dataset"],
target_column="Class",
)
model_eval_task = classification_model_eval_metrics(
project,
gcp_region,
api_endpoint,
thresholds_dict_str,
training_op.outputs["model"],
)
with dsl.Condition(
model_eval_task.outputs["dep_decision"] == "true",
name="deploy_decision",
):
endpoint_op = gcc_aip.EndpointCreateOp(
project=project,
location=gcp_region,
display_name="train-automl-beans",
)
gcc_aip.ModelDeployOp(
model=training_op.outputs["model"],
endpoint=endpoint_op.outputs["endpoint"],
dedicated_resources_min_replica_count=1,
dedicated_resources_max_replica_count=1,
dedicated_resources_machine_type=MACHINE_TYPE,
)
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="tabular classification_pipeline.json".replace(" ", "_"),
)
DISPLAY_NAME = "beans_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="tabular classification_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
parameter_values={"project": PROJECT_ID, "display_name": DISPLAY_NAME},
enable_caching=False,
)
job.submit()
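# Hedged optional step: block until the run completes before inspecting results
# (PipelineJob.wait() is assumed to be available in this SDK version).
job.wait()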
! rm tabular_classification_pipeline.json
pipeline_df = aip.get_pipeline_df(pipeline=PIPELINE_NAME)
print(pipeline_df.head(2))
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "tabular" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of google-cloud-pipeline-components library as well.
Step3: Restart the kernel
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.8.
Step5: Before you begin
Step6: Region
Step7: Timestamp
Step8: Authenticate your Google Cloud account
Step9: Create a Cloud Storage bucket
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
Step13: Set service account access for Vertex AI Pipelines
Step14: Set up variables
Step15: Vertex AI constants
Step16: Vertex AI Pipelines constants
Step17: Additional imports.
Step18: Initialize Vertex AI SDK for Python
Step19: Define a metrics evaluation custom component
Step20: Define an AutoML tabular classification pipeline that uses components from google_cloud_pipeline_components
Step21: Compile the pipeline
Step22: Run the pipeline
Step23: Click on the generated link to see your run in the Cloud Console.
Step24: Cleaning up
|
15,246 | <ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem[jax]
import numpy as np
import functools
try:
import jax
import jax.numpy as jnp
import haiku as hk
import optax
from deepchem.models import PINNModel, JaxModel
from deepchem.data import NumpyDataset
from deepchem.models.optimizers import Adam
from jax import jacrev
has_haiku_and_optax = True
except:
has_haiku_and_optax = False
import matplotlib.pyplot as plt
give_size = 10
in_given = np.linspace(-2 * np.pi, 2 * np.pi, give_size)
out_given = np.cos(in_given) + 0.1*np.random.normal(loc=0.0, scale=1, size=give_size)
# red for numpy.sin()
plt.figure(figsize=(13, 7))
plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Supervised Data"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
import matplotlib.pyplot as plt
test = np.expand_dims(np.linspace(-2.5 * np.pi, 2.5 * np.pi, 100), 1)
out_array = np.cos(test)
plt.figure(figsize=(13, 7))
plt.plot(test, out_array, color = 'blue', alpha = 0.5)
plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Actual data" ,"Supervised Data"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
# defining the Haiku model
# A neural network is defined as a function of its weights & operations.
# NN(x) = F(x, W)
# forward function defines the F which describes the mathematical operations like Matrix & dot products, Signmoid functions, etc
# W is the init_params
def f(x):
net = hk.nets.MLP(output_sizes=[256, 128, 1], activation=jax.nn.softplus)
val = net(x)
return val
init_params, forward_fn = hk.transform(f)
rng = jax.random.PRNGKey(500)
params = init_params(rng, np.random.rand(1000, 1))
train_dataset = NumpyDataset(np.expand_dims(in_given, axis=1), np.expand_dims(out_given, axis=1))
rms_loss = lambda pred, tar, w: jnp.mean(optax.l2_loss(pred, tar))
# JaxModel Working
nn_model = JaxModel(
forward_fn,
params,
rms_loss,
batch_size=100,
learning_rate=0.001,
log_frequency=2)
nn_model.fit(train_dataset, nb_epochs=10000, deterministic=True)
dataset_test = NumpyDataset(test)
nn_output = nn_model.predict(dataset_test)
plt.figure(figsize=(13, 7))
plt.plot(test, out_array, color = 'blue', alpha = 0.5)
plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.plot(test, nn_output, color = 'red', marker = "o", alpha = 0.7)
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Actual data", "Vanilla NN", "Supervised Data"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
def create_eval_fn(forward_fn, params):
"""Calls the function to evaluate the model"""
@jax.jit
def eval_model(x, rng=None):
bu = forward_fn(params, rng, x)
return jnp.squeeze(bu)
return eval_model
def gradient_fn(forward_fn, loss_outputs, initial_data):
"""Returns the loss function used to compute gradients for backpropagation"""
boundary_data = initial_data['X0']
boundary_target = initial_data['u0']
@jax.jit
def model_loss(params, target, weights, rng, x_train):
@functools.partial(jax.vmap, in_axes=(None, 0))
def periodic_loss(params, x):
"""Differential equation residual: grad(f(x)) = -sin(x),
so we minimize grad(f(x)) + sin(x)"""
x = jnp.expand_dims(x, 0)
u_x = jacrev(forward_fn, argnums=(2))(params, rng, x)
return u_x + jnp.sin(x)
u_pred = forward_fn(params, rng, boundary_data)
loss_u = jnp.mean((u_pred - boundary_target)**2)
f_pred = periodic_loss(params, x_train)
loss_f = jnp.mean((f_pred**2))
return loss_u + loss_f
return model_loss
initial_data = {
'X0': jnp.expand_dims(in_given, 1),
'u0': jnp.expand_dims(out_given, 1)
}
opt = Adam(learning_rate=1e-3)
pinn_model= PINNModel(
forward_fn=forward_fn,
params=params,
initial_data=initial_data,
batch_size=1000,
optimizer=opt,
grad_fn=gradient_fn,
eval_fn=create_eval_fn,
deterministic=True,
log_frequency=1000)
# defining our training data. We feed 1000 points between [-3*pi, 3*pi] without labels,
# which will be used for the differential loss (regulariser)
X_f = np.expand_dims(np.linspace(-3 * np.pi, 3 * np.pi, 1000), 1)
dataset = NumpyDataset(X_f)
pinn_model.fit(dataset, nb_epochs=3000)
import matplotlib.pyplot as plt
pinn_output = pinn_model.predict(dataset_test)
plt.figure(figsize=(13, 7))
plt.plot(test, out_array, color = 'blue', alpha = 0.5)
plt.scatter(in_given, out_given, color = 'green', marker = "o")
# plt.plot(test, nn_output, color = 'red', marker = "x", alpha = 0.3)
plt.scatter(test, pinn_output, color = 'red', marker = "o", alpha = 0.7)
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Actual data" ,"Supervised Data", "PINN"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
plt.figure(figsize=(13, 7))
# plt.plot(test, out_array, color = 'blue', alpha = 0.5)
# plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.scatter(test, nn_output, color = 'blue', marker = "x", alpha = 0.3)
plt.scatter(test, pinn_output, color = 'red', marker = "o", alpha = 0.7)
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Vanilla NN", "PINN"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given Physical Data
Step2: From simple integration, we can easily solve the differential equation and the solution will be -
Step3: Building a Simple Neural Network Model -
Step4: Fitting a simple Neural Network solution to the Physical Data
Step8: Learning to fit the Data using the underlying Differential equation
Step9: Comparing the results between PINN & Vanilla NN model
|
15,247 | <ASSISTANT_TASK:>
Python Code:
import errno
import os
import shutil
import zipfile
import numpy as np
import pandas as pd
# In[22]:
# TARGETDIR = '../btc/graphs_njp.zip'
# In[23]:
# with open(doc, "rb") as zipsrc:
# zfile = zipfile.ZipFile(zipsrc)
# for member in zfile.infolist():
# target_path = os.path.join(TARGETDIR, member.filename)
# if target_path.endswith('/'): # folder entry, create
# try:
# os.makedirs(target_path)
# except (OSError, IOError) as err:
# # Windows may complain if the folders already exist
# if err.errno != errno.EEXIST:
# raise
# continue
# with open(target_path, 'wb') as outfile, zfile.open(member) as infile:
# shutil.copyfileobj(infile, outfile)
!ls
!unzip bitcoin_dataset.zip
# addresses
addr = pd.read_csv('addresses.txt', header = None, names = ['user:ID', 'addr'], delimiter= '\t')
addr[':LABEL'] = 'address'
addr.to_csv('addresses.csv', index = False)
addr.head()
# blocks
blks = pd.read_csv('blockhash.txt', header = None, names = ['block:ID', 'bhash', 'btime', 'txs' ], delimiter= '\t')
blks[':LABEL'] = 'blockchain'
blks.to_csv('blockhash.csv', index = False)
blks.head()
!rm addresses.txt
!rm blockhash.txt
# transactions + transaction time
txns = pd.read_csv('txhash.txt', header = None, names = ['tx:ID', 'txhash'], delimiter= '\t')
txns2 = pd.read_csv('tx.txt', header = None, names = ['tx:ID', 'block' ,'n_inputs', 'n_outputs'], delimiter= '\t')
# txns = txns.drop(['Unnamed: 0', ':LABEL'], axis = 1)
txnstime = pd.read_csv('txtime.txt', header = None, names = ['tx:ID', 'unixtime'], delimiter= '\t')
txnstime['time'] = pd.to_datetime(txnstime['unixtime'],unit='s')
txns = txns.merge(txnstime, how = 'left', on = 'tx:ID')
txns = txns.merge(txns2, how = 'left', on = 'tx:ID')
txns[':LABEL'] = 'transaction'
txns.head()
txns.to_csv('tx.csv', index = False)
print(txns.columns)
!rm txhash.txt
!rm txtime.txt
!rm tx.txt
# txnsin
txnsin = pd.read_csv('txin.txt', header = None, names = ['tx:ID', 'user' ,'value'], delimiter= '\t')
txnsin[':LABEL'] = 'incoming_payment'
txnsin.to_csv('txin.csv', index = False)
txnsin.head()
# txnsout
txnsout = pd.read_csv('txout.txt', header = None, names = ['tx:ID', 'user' ,'value'], delimiter= '\t')
txnsout[':LABEL'] = 'sent_coins'
txnsout.to_csv('txout.csv', index = False)
txnsout.head()
!rm txin.txt
!rm txout.txt
import numpy as np
import pandas as pd
!ls
rels_txns_to_block = pd.read_csv('tx.csv') #, compression='zip')
rels_txns_to_block.columns
rels_txns_to_block = rels_txns_to_block.drop(['txhash', 'unixtime', 'time', 'n_inputs', 'n_outputs',
':LABEL'], axis = 1)
rels_txns_to_block.head()
rels_txns_to_block[':TYPE'] = 'part_of_block'
rels_txns_to_block.to_csv('relsPaymentsToAddress.csv', index = False)
txnsout.head()
# In[ ]:
file = open("poc_transaction.txt", "w")
fileIn = open("poc_transactionsIn.txt", "w")
fileOut = open("poc_transactionsOut.txt", "w")
fileTransList = open("poc_transactionList.txt","w")
fileAddressList = open ("poc_addressList.txt","w")
fileBlockList = open("poc_blockdata.txt", "w")
file.write(":ID,Transaction_ID,Hash,Time,V_In,V_Out,:LABEL" + "\n")
fileIn.write(":ID,Transaction_ID,Transaction_Hash,Address,Spent,Value,:LABEL" + "\n")
fileOut.write(":ID,Transaction_ID,Transaction_Hash,Type,:LABEL" + "\n")
fileAddressList.write(":ID,AddressID,:LABEL" + "\n")
fileBlockList.write(":ID,BlockID,Hash,Received_Time,Previous_Block_Hash,Transaction_Count,Height,:LABEL" + "\n")
fileBlockTrans = open("poc_relsBlocksTransactions.txt","w")
fileTransOut = open("poc_relsTransactionsOut.txt","w")
fileTransInRels = open("poc_relsTransactionsIn.txt","w")
filePaymentOutAddr = open("poc_relsPaymentsToAddress.txt","w")
filePaymentInAddr = open("poc_relsRedeemedFromAddress.txt","w")
fileBlockTrans.write(":START_ID,:END_ID,:TYPE" + "\n")
fileTransOut.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
fileTransInRels.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
filePaymentOutAddr.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
filePaymentInAddr.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
# tout = 1
# tin = 1
# f = open('blockList.txt', 'r')
# temp = f.read().splitlines()
# for line in temp:
# words = line.split("|")
# s = words[1]
# url = "https://blockchain.info/rawblock/" + str(s)
# print url
# usock = urllib2.urlopen(url)
# data = usock.read()
# result = json.loads(data)
# hash = result['hash']
# block_index = result['block_index']
# height = result['height']
# size = result['size']
# main_chain = result['main_chain']
# prev_block = result['prev_block']
# try:
# received_time = result['received_time']
# except KeyError:
# received_time = 'NA'
# n_tx = result['n_tx']
# fileBlockList.write(str(hash) + ',' + str(block_index) + ',' +str(hash) + ',' + str(received_time) + ',' + str(prev_block) + ',' + str(n_tx) + ',' + str(height) + ",BlockChain" + "\n");
# parent = result["tx"]
# for item in parent:
# tx_index = str(item["tx_index"])
# tx_hash = str(item["hash"])
# file.write(str('trans' + str(item["tx_index"])) + ',' + str(item["tx_index"]) + ',' +str(item["hash"]) + ',' + str(item["time"]) + ',' + str(item["vin_sz"]) + ',' + str(item["vout_sz"]) + ",Transaction"+ "\n");
# fileBlockTrans.write(str(hash) + ',' + str('trans' + str(item["tx_index"])) + ',' + 'PART_OF_BLOCK' + "\n")
# if 'inputs' in item :
# for nn in item["inputs"]:
# # print nn["sequence"]
# if 'prev_out' in nn :
# # print nn["prev_out"]["addr"]
# # print nn["prev_out"]["spent"]
# # print nn["prev_out"]["value"]
# try:
# strAddr = str(nn["prev_out"]["addr"])
# except KeyError:
# strAddr = 'NA'
# fileIn.write('transin_' + str(tin) + ',' + tx_index + ',' + str(tx_hash) + ',' + strAddr + ',' +str(nn["prev_out"]["spent"]) + ',' + str( Decimal(nn["prev_out"]["value"]) / Decimal(100000000.0)) + ",IncomingPayment" + "\n");
# fileTransInRels.write('transin_' + str(tin) + ',' + 'trans' + str(tx_index) + ',' + str(nn["prev_out"]["spent"]) + ','+ str( Decimal(nn["prev_out"]["value"]) / Decimal(100000000.0)) + ',' + 'INCOMING_PAYMENT' + "\n")
# filePaymentInAddr.write(strAddr + ',' + 'transin_' + str(tin) + ',' + str(nn["prev_out"]["spent"]) + ','+ str( Decimal(nn["prev_out"]["value"])/ Decimal(100000000.0)) + ','+ 'REDEEMED' + "\n")
# fileTransList.write('transin_' + str(tin) + ',' + tx_hash + "\n")
# fileAddressList.write(strAddr + ',' + strAddr + ',Address' + "\n")
# tin = tin + 1
# if 'out' in item :
# for xx in item["out"]:
# # print xx["tx_index"]
# # print xx["type"]
# # print xx["addr"]
# # print xx["spent"]
# # print xx["value"]
# try:
# rec = str(xx["addr"])
# fileOut.write('out_' + str(tout) + ',' + str(xx["tx_index"]) + ',' + str(tx_hash) + ',' + str(xx["type"]) + ",OutgoingPayment"+ "\n");
# fileTransOut.write('trans' + str(xx["tx_index"]) + ',' + 'out_' + str(tout) + ',' + str(xx["spent"]) + ','+ str( Decimal(xx["value"]) / Decimal(100000000.0)) + ',' + 'SENT_COINS' + "\n")
# filePaymentOutAddr.write('out_' + str(tout) + ',' + rec + ',' + str(xx["spent"]) + ','+ str( Decimal(xx["value"]) / Decimal(100000000.0)) + ','+ 'WAS_SENT_TO' + "\n")
# fileAddressList.write(str(rec) + ',' + str(rec) + ',Address' + "\n")
# tout = tout+1
# except KeyError:
# rec = 'Unavailable'
usock.close()
file.close()
fileIn.close()
fileOut.close()
fileTransList.close()
fileBlockTrans.close()
fileTransInRels.close()
filePaymentOutAddr.close()
filePaymentInAddr.close()
fileAddressList.close()
fileBlockList.close()
f.close();
print "Done"
#!/usr/bin/env python
import simplejson as json
import httplib
import urllib2
from httplib import HTTPConnection, HTTPS_PORT
import ssl
from decimal import *
file = open("poc_transaction.txt", "w")
fileIn = open("poc_transactionsIn.txt", "w")
fileOut = open("poc_transactionsOut.txt", "w")
fileTransList = open("poc_transactionList.txt","w")
fileBlockTrans = open("poc_relsBlocksTransactions.txt","w")
fileTransOut = open("poc_relsTransactionsOut.txt","w")
fileTransInRels = open("poc_relsTransactionsIn.txt","w")
filePaymentOutAddr = open("poc_relsPaymentsToAddress.txt","w")
filePaymentInAddr = open("poc_relsRedeemedFromAddress.txt","w")
fileAddressList = open ("poc_addressList.txt","w")
fileBlockList = open("poc_blockdata.txt", "w")
file.write(":ID,Transaction_ID,Hash,Time,V_In,V_Out,:LABEL" + "\n")
fileIn.write(":ID,Transaction_ID,Transaction_Hash,Address,Spent,Value,:LABEL" + "\n")
fileOut.write(":ID,Transaction_ID,Transaction_Hash,Type,:LABEL" + "\n")
fileBlockTrans.write(":START_ID,:END_ID,:TYPE" + "\n")
fileTransOut.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
fileTransInRels.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
filePaymentOutAddr.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
filePaymentInAddr.write(":START_ID,:END_ID,Spent,Value,:TYPE" + "\n")
fileAddressList.write(":ID,AddressID,:LABEL" + "\n")
fileBlockList.write(":ID,BlockID,Hash,Received_Time,Previous_Block_Hash,Transaction_Count,Height,:LABEL" + "\n")
tout = 1
tin = 1
f = open('blockList.txt', 'r')
temp = f.read().splitlines()
for line in temp:
words = line.split("|")
s = words[1]
url = "https://blockchain.info/rawblock/" + str(s)
print url
usock = urllib2.urlopen(url)
data = usock.read()
result = json.loads(data)
hash = result['hash']
block_index = result['block_index']
height = result['height']
size = result['size']
main_chain = result['main_chain']
prev_block = result['prev_block']
try:
received_time = result['received_time']
except KeyError:
received_time = 'NA'
n_tx = result['n_tx']
fileBlockList.write(str(hash) + ',' + str(block_index) + ',' +str(hash) + ',' + str(received_time) + ',' + str(prev_block) + ',' + str(n_tx) + ',' + str(height) + ",BlockChain" + "\n");
parent = result["tx"]
for item in parent:
tx_index = str(item["tx_index"])
tx_hash = str(item["hash"])
file.write(str('trans' + str(item["tx_index"])) + ',' + str(item["tx_index"]) + ',' +str(item["hash"]) + ',' + str(item["time"]) + ',' + str(item["vin_sz"]) + ',' + str(item["vout_sz"]) + ",Transaction"+ "\n");
fileBlockTrans.write(str(hash) + ',' + str('trans' + str(item["tx_index"])) + ',' + 'PART_OF_BLOCK' + "\n")
if 'inputs' in item :
for nn in item["inputs"]:
# print nn["sequence"]
if 'prev_out' in nn :
# print nn["prev_out"]["addr"]
# print nn["prev_out"]["spent"]
# print nn["prev_out"]["value"]
try:
strAddr = str(nn["prev_out"]["addr"])
except KeyError:
strAddr = 'NA'
fileIn.write('transin_' + str(tin) + ',' + tx_index + ',' + str(tx_hash) + ',' + strAddr + ',' +str(nn["prev_out"]["spent"]) + ',' + str( Decimal(nn["prev_out"]["value"]) / Decimal(100000000.0)) + ",IncomingPayment" + "\n");
fileTransInRels.write('transin_' + str(tin) + ',' + 'trans' + str(tx_index) + ',' + str(nn["prev_out"]["spent"]) + ','+ str( Decimal(nn["prev_out"]["value"]) / Decimal(100000000.0)) + ',' + 'INCOMING_PAYMENT' + "\n")
filePaymentInAddr.write(strAddr + ',' + 'transin_' + str(tin) + ',' + str(nn["prev_out"]["spent"]) + ','+ str( Decimal(nn["prev_out"]["value"])/ Decimal(100000000.0)) + ','+ 'REDEEMED' + "\n")
fileTransList.write('transin_' + str(tin) + ',' + tx_hash + "\n")
fileAddressList.write(strAddr + ',' + strAddr + ',Address' + "\n")
tin = tin + 1
if 'out' in item :
for xx in item["out"]:
# print xx["tx_index"]
# print xx["type"]
# print xx["addr"]
# print xx["spent"]
# print xx["value"]
try:
rec = str(xx["addr"])
fileOut.write('out_' + str(tout) + ',' + str(xx["tx_index"]) + ',' + str(tx_hash) + ',' + str(xx["type"]) + ",OutgoingPayment"+ "\n");
fileTransOut.write('trans' + str(xx["tx_index"]) + ',' + 'out_' + str(tout) + ',' + str(xx["spent"]) + ','+ str( Decimal(xx["value"]) / Decimal(100000000.0)) + ',' + 'SENT_COINS' + "\n")
filePaymentOutAddr.write('out_' + str(tout) + ',' + rec + ',' + str(xx["spent"]) + ','+ str( Decimal(xx["value"]) / Decimal(100000000.0)) + ','+ 'WAS_SENT_TO' + "\n")
fileAddressList.write(str(rec) + ',' + str(rec) + ',Address' + "\n")
tout = tout+1
except KeyError:
rec = 'Unavailable'
usock.close()
file.close()
fileIn.close()
fileOut.close()
fileTransList.close()
fileBlockTrans.close()
fileTransInRels.close()
filePaymentOutAddr.close()
filePaymentInAddr.close()
fileAddressList.close()
fileBlockList.close()
f.close();
print "Done"
# # NODES
# - Convert txt files to csv
# - http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
# - nodes first
# - https://neo4j.com/docs/operations-manual/3.2/tools/import/file-header-format/#import-tool-id-spaces
# In[10]:
test = pd.read_csv('graph_addresses.txt', header = None, names = ['user:ID', ':LABEL'], delimiter= '\t')
# In[11]:
test.head()
# In[12]:
test2 = test.groupby('user:ID').sum()
# In[13]:
test2.head()
# In[27]:
test.to_csv('graph_addresses.csv')
# In[39]:
test.shape # number of nodes
# # EDGES
# - Specify start, end, and type for each edge
# In[ ]:
test = pd.read_csv('txedgeunique.txt', header = None, delimiter= '\t')
# In[29]:
test1 = pd.read_csv('../btc/au_graph.txt', header = None, names = [':START_ID', 'stop', 'unixtime'], delimiter= '\t')
# In[30]:
test1[':END_ID'] = test1['stop']
# In[35]:
test1 = test1.drop('stop', axis = 1)
# test[':TYPE'] = 'TXN'
# In[36]:
test1.head()
# In[37]:
test1.to_csv('au_graph.csv')
# In[40]:
test1.shape # number of transactions
import pandas
import numpy as np               # used below for np.arange
import matplotlib.pyplot as plt  # used below for the bar chart
from math import log10, floor
from scipy.constants import codata
def most_significant_digit(x):
e = floor(log10(x))
return int(x*10**-e)
def f(x):
return most_significant_digit(abs(x))
# read in the ticker data
tick = pandas.read_csv('./your_ticker_data.csv')
tick_ret = tick.diff()
# count leading digits
data = tick_ret[tick_ret!=0]
counts = data.fillna(method='bfill').apply(f).value_counts().sort_index()  # order by leading digit 1..9 so bars line up with the x-axis labels
total = counts.sum()
# expected number of each leading digit per Benford's law
benford = [total*log10(1 + 1./i) for i in range(1, 10)]
# plot actual vs expected
bins = np.arange(9)
error_config = {'ecolor': '0.3'}
r1 = plt.bar(bins, counts.values, 0.35, alpha=0.4, color='b', error_kw=error_config, label = 'actual')
r2 = plt.bar(bins + 0.35, benford, 0.35, alpha=0.4, color='r', error_kw=error_config, label = 'expected')
plt.xlabel('Most significant digit')
plt.ylabel('Occurence count')
plt.title('Leading digits in BTC-E ticker volume')
plt.xticks(bins + 0.35, bins+1)
plt.legend()
plt.show()
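# Hedged follow-up (an assumption, not in the original script): a chi-square
# goodness-of-fit test of the observed leading-digit counts against the Benford expectation.
from scipy.stats import chisquare
observed = counts.sort_index().values      # occurrence counts ordered by leading digit 1..9
chi2_stat, p_value = chisquare(observed, f_exp=benford)
print('chi-square = %.2f, p-value = %.4f' % (chi2_stat, p_value))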
from neo4j.v1 import GraphDatabase, basic_auth
# In[ ]:
driver = GraphDatabase.driver("bolt://localhost:7687", auth=basic_auth("neo4j", "neo4j"))
session = driver.session()
session.run("CREATE (a:Person {name: {name}, title: {title}})",
{"name": "Arthur", "title": "King"})
result = session.run("MATCH (a:Person) WHERE a.name = {name} "
"RETURN a.name AS name, a.title AS title",
{"name": "Arthur"})
for record in result:
print("%s %s" % (record["title"], record["name"]))
session.close()
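# Hedged sketch (an assumption, not in the original notebook): the same driver/session
# pattern could run a Cypher LOAD CSV over the files generated earlier, e.g. tx.csv,
# provided the CSV is placed where the Neo4j server can read it (its import directory).
load_query = """
LOAD CSV WITH HEADERS FROM 'file:///tx.csv' AS row
CREATE (t:Transaction {id: row['tx:ID'], hash: row.txhash, block: row.block})
"""
session = driver.session()
session.run(load_query)
session.close()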
# In[ ]:
from py2neo import Graph, Path
graph = Graph()
tx = graph.cypher.begin()
for name in ["Alice", "Bob", "Carol"]:
tx.append("CREATE (person:Person {name:{name}}) RETURN person", name=name)
alice, bob, carol = [result.one for result in tx.commit()]
friends = Path(alice, "KNOWS", bob, "KNOWS", carol)
graph.create(friends)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Change the headers
Step2: NODES
Step3: -----------------------------
Step4: Import script - LOAD CSV?? - STOPPED HERE
Step5: Original Blockchain API
Step6: OLD CODE
Step7: benford's law test for ticker data
Step8: Python Drivers
|
15,248 | <ASSISTANT_TASK:>
Python Code:
!yes|azure login
!azure account show
sid_tid = !azure account show|awk -F ':' '/ID/{ print $3}'
sid = sid_tid[0]
tid = sid_tid[1]
out=!azure ad sp create --name simpleazure
cid = out[6].split(":")[1].lstrip()
newout="\n".join(out)
print(newout)
password=""
!azure ad sp set -p $password $cid
!azure role assignment create --objectId $cid -o Owner -c /subscriptions/$sid
from simpleazure import SimpleAzure as saz
import os
os.environ['AZURE_SUBSCRIPTION_ID'] = sid
os.environ['AZURE_CLIENT_SECRET'] = password
os.environ['AZURE_TENANT_ID'] = tid
os.environ['AZURE_CLIENT_ID'] = cid
saz_obj = saz()
url = "https://raw.githubusercontent.com/Azure-Samples/resource-manager-python-template-deployment/master/templates/template.json"
saz_obj.arm.deploy(template = url, param = {"sshKeyData": "ssh-rsa AAAAB3...<skipped>... hroe.lee@simpleazure", 'dnsLabelPrefix':"simpleazure", 'vmName':'simpleazure-first-vm'})
saz_obj.arm.remove_resource_group()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Credentials for Azure Python SDK
Step2: IPython filters the subscription ID and tenant ID with the awk command and stores them in the sid and tid variables.
Step3: Service Principal for Simple Azure
Step4: The Id listed after Service Principal Names is our client id for Simple Azure; the cid variable stores this ID from the previous command.
Step5: Note that '$cid' is a client id obtained from the previous command.
Step6: Have you completed all steps without any issues?
Step7: Credentials via Environment Variables
Step8: Template from Azure-Samples
Step9: Deploy with Template and Parameters
Step10: Termination (deleting resource group)
|
15,249 | <ASSISTANT_TASK:>
Python Code:
import networkx as nx
G = nx.Graph()
G.add_node(1)
G.add_nodes_from([2, 3])
H = nx.path_graph(10)
G.add_nodes_from(H)
H.edges()
G.add_node(H)
G.add_edge(1, 2)
e = (2, 3)
G.add_edge(*e) # unpack edge tuple*
G.add_edges_from([(1, 2), (1, 3)])
G.add_edges_from(H.edges())
G.clear()
G.add_edges_from([(1, 2), (1, 3)])
G.add_node(1)
G.add_edge(1, 2)
G.add_node("spam") # adds node "spam"
G.add_nodes_from("spam") # adds 4 nodes: 's', 'p', 'a', 'm'
G.add_edge(3, 'm')
G.number_of_nodes()
G.number_of_edges()
list(G.nodes())
list(G.edges())
list(G.adj[1]) # or list(G.neighbors(1))
G.degree()[1] # the number of edges incident to 1
G.edges([2, 'm'])
G.degree([2, 3])
G.remove_node(2)
G.remove_nodes_from("spam")
list(G.nodes())
G.remove_edge(1, 3)
G.add_edge(1, 2)
H = nx.DiGraph(G) # create a DiGraph using the connections from G
list(H.edges())
edgelist = [(0, 1), (1, 2), (2, 3)]
H = nx.Graph(edgelist)
G[1] # same as G.adj[1]
G[1][2]
G.edges[1, 2]
G.add_edge(1, 3)
G[1][3]['color'] = "blue"
G.edges[1, 2]['color'] = "red"
FG = nx.Graph()
FG.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75), (2, 4, 1.2), (3, 4, 0.375)])
for n, nbrs in FG.adj.items():
for nbr, eattr in nbrs.items():
wt = eattr['weight']
if wt < 0.5: print('(%d, %d, %.3f)' % (n, nbr, wt))
for (u, v, wt) in FG.edges.data('weight'):
if wt < 0.5: print('(%d, %d, %.3f)' % (u, v, wt))
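# Hedged aside (added for illustration, not part of the original tutorial): these edge
# weights feed directly into weighted algorithms, e.g. Dijkstra shortest paths on FG.
nx.dijkstra_path(FG, 1, 4)          # node sequence of the lightest path from 1 to 4
nx.dijkstra_path_length(FG, 1, 4)   # its total weight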
G = nx.Graph(day="Friday")
G.graph
G.graph['day'] = "Monday"
G.graph
G.add_node(1, time='5pm')
G.add_nodes_from([3], time='2pm')
G.nodes[1]
G.nodes[1]['room'] = 714
G.nodes.data()
G.add_edge(1, 2, weight=4.7 )
G.add_edges_from([(3, 4), (4, 5)], color='red')
G.add_edges_from([(1, 2, {'color': 'blue'}), (2, 3, {'weight': 8})])
G[1][2]['weight'] = 4.7
G.edges[3, 4]['weight'] = 4.2
DG = nx.DiGraph()
DG.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)])
DG.out_degree(1, weight='weight')
DG.degree(1, weight='weight')
list(DG.successors(1))
list(DG.neighbors(1))
H = nx.Graph(G) # convert G to undirected graph
MG = nx.MultiGraph()
MG.add_weighted_edges_from([(1, 2, 0.5), (1, 2, 0.75), (2, 3, 0.5)])
dict(MG.degree(weight='weight'))
GG = nx.Graph()
for n, nbrs in MG.adjacency():
for nbr, edict in nbrs.items():
minvalue = min([d['weight'] for d in edict.values()])
GG.add_edge(n, nbr, weight = minvalue)
nx.shortest_path(GG, 1, 3)
petersen = nx.petersen_graph()
tutte = nx.tutte_graph()
maze = nx.sedgewick_maze_graph()
tet = nx.tetrahedral_graph()
K_5 = nx.complete_graph(5)
K_3_5 = nx.complete_bipartite_graph(3, 5)
barbell = nx.barbell_graph(10, 10)
lollipop = nx.lollipop_graph(10, 20)
er = nx.erdos_renyi_graph(100, 0.15)
ws = nx.watts_strogatz_graph(30, 3, 0.1)
ba = nx.barabasi_albert_graph(100, 5)
red = nx.random_lobster(100, 0.9, 0.9)
nx.write_gml(red, "path.to.file")
mygraph = nx.read_gml("path.to.file")
G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3)])
G.add_node("spam") # adds node "spam"
list(nx.connected_components(G))
sorted(d for n, d in G.degree())
nx.clustering(G)
sp = dict(nx.all_pairs_shortest_path(G))
sp[3]
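# A few more hedged examples of analysis functions on the same small graph
# (added for illustration; not part of the original tutorial).
nx.degree_centrality(G)
nx.density(G)
list(nx.isolates(G))   # 'spam' has no edges, so it shows up here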
# %matplotlib inline
import matplotlib.pyplot as plt
G = nx.petersen_graph()
plt.subplot(121)
nx.draw(G, with_labels=True, font_weight='bold')
plt.subplot(122)
nx.draw_shell(G, nlist=[range(5, 10), range(5)], with_labels=True, font_weight='bold')
plt.show()
options = {
'node_color': 'black',
'node_size': 100,
'width': 3,
}
plt.subplot(221)
nx.draw_random(G, **options)
plt.subplot(222)
nx.draw_circular(G, **options)
plt.subplot(223)
nx.draw_spectral(G, **options)
plt.subplot(224)
nx.draw_shell(G, nlist=[range(5,10), range(5)], **options)
plt.show()
G = nx.dodecahedral_graph()
shells = [[2, 3, 4, 5, 6], [8, 1, 0, 19, 18, 17, 16, 15, 14, 7], [9, 10, 11, 12, 13]]
nx.draw_shell(G, nlist=shells, **options)
# plt.hold is deprecated/removed in recent Matplotlib releases; figures hold by default
plt.show()
nx.draw(G)
plt.savefig("path.png")
from networkx.drawing.nx_pydot import write_dot
pos = nx.nx_agraph.graphviz_layout(G)
nx.draw(G, pos=pos)
write_dot(G, 'file.dot')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By definition, a Graph is a collection of nodes (vertices) along with
Step2: add a list of nodes,
Step3: or add any iterable container of nodes. You can also add nodes along with node
Step4: Note that G now contains the nodes of H as nodes of G.
Step5: The graph G now contains H as a node. This flexibility is very powerful as
Step6: by adding a list of edges,
Step7: or by adding any ebunch of edges. An ebunch is any iterable container of edge-tuples.
Step8: There are no complaints when adding existing nodes or edges. For example,
Step9: we add new nodes/edges and NetworkX quietly ignores any that are
Step10: At this stage the graph G consists of 8 nodes and 3 edges, as can be seen by
Step11: We can examine the nodes and edges. Four basic graph properties facilitate
Step12: One can specify to report the edges and degree from a subset of all nodes
Step13: One can remove nodes and edges from the graph in a similar fashion to adding.
Step14: When creating a graph structure by instantiating one of the graph
Step15: What to use as nodes and edges
Step16: You can get/set the attributes of an edge using subscript notation
Step17: Fast examination of all (node, adjacency) pairs is achieved using
Step18: Convenient access to all edges is achieved with the edges property.
Step19: Adding attributes to graphs, nodes, and edges
Step20: Or you can modify attributes later
Step21: Node attributes
Step22: Note that adding a node to G.nodes does not add it to the graph, use G.add_node() to add new nodes.
Step23: The special attribute weight should be numeric as it is used by
Step24: Some algorithms work only for directed graphs and others are not well
Step25: Multigraphs
Step26: Graph generators and graph operations
Step27: Using a (constructive) generator for a classic graph, e.g.,
Step28: Using a stochastic graph generator, e.g.,
Step29: Reading a graph stored in a file using common graph formats,
Step30: For details on graph formats see Reading and writing graphs
Step31: Some functions with large output iterate over (node, value) 2-tuples.
Step32: See Algorithms for details on graph algorithms
Step33: You may find it useful to interactively test code using ipython -pylab,
Step34: when drawing to an interactive display. Note that you may need to issue a
Step35: command if you are not using matplotlib in interactive mode (see
Step36: You can find additional options via draw_networkx() and
Step37: To save drawings to a file, use, for example
Step38: writes to the file path.png in the local directory. If Graphviz and
|
15,250 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
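# Hedged helper sketch (not part of the original project code): since the mean/std pairs
# are kept in scaled_features, scaled values can be converted back to original units later,
# which is exactly the inversion applied to the predictions at the end of this notebook.
def unscale(values, feature):
    mean, std = scaled_features[feature]
    return values * std + mean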
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
#print("weights_h_t_o: {0}".format(network.weights_hidden_to_output))
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
#print("weights_i_t_h: {0}".format(network.weights_input_to_hidden))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
#print("run output: {0}".format(network.run(inputs)))
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 3300
learning_rate = 0.5
hidden_nodes = 25
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
# cropped y to see more details around the lower loss
_ = plt.ylim(0,2)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
|
15,251 | <ASSISTANT_TASK:>
Python Code:
# imports assumed by this entry (written in Python 2 style: urllib2, StringIO, print statements)
import gzip
import json
import re
import pickle
import urllib2
from StringIO import StringIO
import numpy as np
import pandas as pd
import nltk
from nltk.corpus import stopwords
from sklearn import cross_validation, grid_search, linear_model
from sklearn.base import BaseEstimator, RegressorMixin, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
class HandleGZippedJSON:
def __init__(self, url):
self.url = url
self.json_data = None
def run(self):
request = urllib2.Request(self.url)
request.add_header('Accept-encoding', 'gzip')
opener = urllib2.build_opener()
f = opener.open(request)
c_data = f.read()
c_stream = StringIO(c_data)
gzipper = gzip.GzipFile(fileobj=c_stream)
data = gzipper.read()
output = data.splitlines()
datastr=[]
for lines in output:
try:
r=json.loads(lines)
datastr.append(r)
except ValueError: # includes simplejson.decoder.JSONDecodeError
print 'Decoding JSON has failed'
pass
return datastr
fileurl="http://thedataincubator.s3.amazonaws.com/coursedata/mldata/yelp_train_academic_dataset_review.json.gz"
out=HandleGZippedJSON(fileurl)
xfile=out.run()
df = pd.DataFrame(xfile)
x_train, x_test, y_train, y_test = cross_validation.train_test_split(xfile,df['stars'],test_size=0.2)
print(len(xfile))
print(xfile[0])
class ColumnSelector(TransformerMixin):
def __init__(self,namecol):
import pandas as pd
import numpy as np
self.namecol=namecol
def fit(self, data, y=None):
import pandas as pd
import numpy as np
return self
def transform(self, data):
import pandas as pd
import numpy as np
if type(data) is list:
df = pd.DataFrame(data)
D=df[self.namecol]
elif type(data) is dict:
df = pd.DataFrame(columns=[self.namecol], index=['x'])
df.loc['x'] = pd.Series({self.namecol:data[self.namecol]})
D=df[self.namecol]
return D
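# Hedged usage illustration (hypothetical mini-example, not in the original notebook):
sample_rows = [{'text': 'great food', 'stars': 5}, {'text': 'slow service', 'stars': 2}]
ColumnSelector('text').fit(sample_rows).transform(sample_rows)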
class RidgeRegressor2(BaseEstimator, RegressorMixin):
def __init__(self):
import pandas as pd
import numpy as np
pass
def fit(self, X, y):
import pandas as pd
import numpy as np
from sklearn import datasets, linear_model, utils, preprocessing, cross_validation, neighbors, ensemble
self.ridge_regression = linear_model.Ridge().fit(X, y)
return self
def predict(self, X):
import pandas as pd
import numpy as np
from sklearn import datasets, linear_model, utils, preprocessing, cross_validation, neighbors, ensemble
Xy=self.ridge_regression.predict(X)
if type(Xy) is list:
Xyz=Xy
elif type(Xy) is np.ndarray:
Xyz=[]
for record in Xy:
frecord=float(record)
Xyz.append(frecord)
if len(Xyz)<2:
Xyz=Xyz[0]
return Xyz
### JUST USING ONE-GRAMS ####
mypipeline=Pipeline([
('text_extractor', ColumnSelector('text')),
('hvect', HashingVectorizer(norm='l2',stop_words=nltk.corpus.stopwords.words('english'))),
('ridgefit', RidgeRegressor2())
])
mypipeline.fit(x_train,y_train)
print(mypipeline.score(x_test,y_test))
#### ALSO USING BIGRAMS ####
mypipeline2=Pipeline([
('text_extractor', ColumnSelector('text')),
('hvect', HashingVectorizer(norm='l2', ngram_range=(1, 2), stop_words=nltk.corpus.stopwords.words('english'))),
('ridgefit', RidgeRegressor2())
])
mypipeline2.fit(x_train,y_train)
print(mypipeline2.score(x_test,y_test))
#import and merge data from previous dataset to identify restaurants
fileurl2="http://thedataincubator.s3.amazonaws.com/coursedata/mldata/yelp_train_academic_dataset_business.json.gz"
out=HandleGZippedJSON(fileurl2)
xfile_rests=out.run()
dfrest=pd.DataFrame(xfile_rests)
xrests=list(myrests['business_id'] for myrests in xfile_rests if ('Restaurants' in myrests['categories']) or ('Food' in myrests['categories']))
print(len(xfile))
xfile2 = [review for review in xfile if review['business_id'] in xrests]
print(len(xfile2))
dfonlyrests=pd.DataFrame(xfile2)
reviewtext=dfonlyrests['text']
estopwords=stopwords.words('english')
xonegramst = CountVectorizer(ngram_range=(1,1),stop_words=estopwords)
xonegrams = xonegramst.fit_transform(reviewtext)
xbigramst = CountVectorizer(ngram_range=(2,2),stop_words=estopwords)
xbigrams = xbigramst.fit_transform(reviewtext)
all_onegrams=xonegramst.get_feature_names()
all_bigrams=xbigramst.get_feature_names()
tot_words=xonegrams.sum() #total words in corpus (~50 million)
unique_words=xonegramst.get_feature_names()
tot_unique=len(unique_words) #total unique words in corpus (~322k)
new_tot_words=tot_words+tot_unique
wordcount_list=np.array(xonegrams.sum(axis=0))[0] #array of occurrances of each word in corpus
wordloc=xonegramst.vocabulary_ #location of each unique word in list
wc_list = [wordcount_list[wordloc[key]] for key in all_onegrams] #occurrances of particular unique word
###BIGRAMS###
unique_biwords=xbigramst.get_feature_names()
bi_keys_split = [re.split('\s',key) for key in unique_biwords]
biwordcount_list=np.array(xbigrams.sum(axis=0))[0]
biwordloc=xbigramst.vocabulary_
bi_wc_list=[biwordcount_list[biwordloc[key]] for key in unique_biwords]
def get_probs(xgrams,xgramst):
wordloc=xgramst.vocabulary_
unique_words=xgramst.get_feature_names()
tot_words=xgrams.sum() #total words in corpus (~50 million)
wordcount_list=np.array(xgrams.sum(axis=0))[0]
prob_word={}
for xword in unique_words:
prob_word[xword] = float((wordcount_list[wordloc[xword]]) + 5)
return prob_word
arb_cutoff = 35
biprobs=get_probs(xbigrams,xbigramst)
monoprobs=get_probs(xonegrams,xonegramst)
bigram_prob = [biprobs[b]/(monoprobs[s[0]]*monoprobs[s[1]]) for b,s in zip(unique_biwords,bi_keys_split)]
dfbiprob = pd.DataFrame({'biprob':bigram_prob,'bigram':unique_biwords})
dfbiprob = dfbiprob.sort('biprob',ascending=False)
dfbiprob = dfbiprob[dfbiprob['biprob'] != np.inf]
blist=[]
for x in dfbiprob['bigram']:
if bi_wc_list[biwordloc[x]] > arb_cutoff:
blist.append(x)
print(blist[0:100])
hv = HashingVectorizer(norm='l2',stop_words=nltk.corpus.stopwords.words('english'))
hvcounts = hv.fit_transform(df['text'])
cv = cross_validation.KFold(len(df['stars']), n_folds=10, shuffle=True)
params = {'alpha':np.logspace(-6,-3,10)}
grid = grid_search.GridSearchCV(linear_model.SGDRegressor(),cv=cv,param_grid=params)
grid.fit(hvcounts,df['stars'])
with open('/home/vagrant/miniprojects/questions/nlp1.pkl', 'wb') as handle:
pickle.dump(grid, handle)
mypipeline3=Pipeline([
('text_extractor', ColumnSelector('text')),
('hvect', HashingVectorizer(norm=None, ngram_range=(1, 2), non_negative=True, stop_words=nltk.corpus.stopwords.words('english'))),
('tfidft', TfidfTransformer()),
('svd', TruncatedSVD(n_components=100)),
('normdata', Normalizer(copy=False)),
('compatibility', Compatibility())
])
mypipeline2.fit(xfile,yout)
with open('/home/vagrant/miniprojects/nlp3.pkl', 'wb') as handle2:
pickle.dump(mypipeline2, handle2)
mypipeline2.predict(xfile[0:10])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ANALYSIS
Step2: FOOD BIGRAMS
Step3: CLEANING, PROCESSING THE DATA
Step4: EXTRA CODE
|
15,252 | <ASSISTANT_TASK:>
Python Code:
from devito import *
grid = Grid(shape=(5, 6), extent=(1., 1.))
grid
print(Function.__doc__)
f = Function(name='f', grid=grid)
f
f.data
g = TimeFunction(name='g', grid=grid)
g
g.shape
g.dt
g.dt.evaluate
g.forward
g.backward
g.forward.dt
g.forward.dy
from examples.cfd import init_smooth, plot_field
nt = 100 # Number of timesteps
dt = 0.2 * 2. / 80 # Timestep size (sigma=0.2)
c = 1 # Value for c
# Then we create a grid and our function
grid = Grid(shape=(81, 81), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid)
# We can now set the initial condition and plot it
init_smooth(field=u.data[0], dx=grid.spacing[0], dy=grid.spacing[1])
init_smooth(field=u.data[1], dx=grid.spacing[0], dy=grid.spacing[1])
plot_field(u.data[0])
eq = Eq(u.dt + c * u.dxl + c * u.dyl)
eq
stencil = solve(eq, u.forward)
update = Eq(u.forward, stencil)
update
op = Operator(update, opt='noop')
op(time=nt+1, dt=dt)
plot_field(u.data[0])
print(op.ccode)
u = TimeFunction(name='u', grid=grid, space_order=2)
u.dx2
u.dx2.evaluate
u = TimeFunction(name='u', grid=grid, space_order=4)
u.dx2
u.dx2.evaluate
grid_3d = Grid(shape=(5, 6, 7), extent=(1., 1., 1.))
u = TimeFunction(name='u', grid=grid_3d, space_order=2)
u
u = TimeFunction(name='u', grid=grid_3d, space_order=12)
u.laplace
u.dx2 + u.dy2 + u.dz2
u = TimeFunction(name='u', grid=grid, space_order=2)
v = TimeFunction(name='v', grid=grid, space_order=2, time_order=2)
v.dt2 + u.laplace
(v.dt2 + u.laplace).dx2
(v.dt2 + u.laplace).dx2.evaluate
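# Hedged sketch (an illustration, not part of the original tutorial): the same
# Eq/solve/Operator workflow used for convection above also yields a diffusion update;
# the diffusion coefficient 0.5 and the time step are arbitrary assumptions here.
diff_eq = Eq(u.dt, 0.5 * u.laplace)
diff_update = Eq(u.forward, solve(diff_eq, u.forward))
diff_op = Operator(diff_update)
diff_op(time=10, dt=0.0005)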
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From equations to code in a few lines of Python
Step2: Functions and data
Step3: Ok, let's create a function $f(x, y)$ and look at the data Devito has associated with it. Please note that it is important to use explicit keywords, such as name or grid when creating Function objects.
Step4: By default, Devito Function objects use the spatial dimensions (x, y) for 2D grids and (x, y, z) for 3D grids. To solve a PDE over several timesteps a time dimension is also required by our symbolic function. For this Devito provides an additional function type, the TimeFunction, which incorporates the correct dimension along with some other intricacies needed to create a time stepping scheme.
Step5: Since the default time order of a TimeFunction is 1, the shape of f is (2, 5, 6), i.e. Devito has allocated two buffers to represent g(t, x, y) and g(t + dt, x, y)
Step6: Derivatives of symbolic functions
Step7: We may also want to take a look at the stencil Devito will generate based on the chosen discretisation
Step8: There also exist convenient shortcuts to express the forward and backward stencil points, g(t+dt, x, y) and g(t-dt, x, y).
Step9: And of course, there's nothing to stop us taking derivatives on these objects
Step10: A linear convection operator
Step11: Next, we wish to discretise our governing equation so that a functional Operator can be created from it. We begin by simply writing out the equation as a symbolic expression, while using shorthand expressions for the derivatives provided by the Function object. This will create a symbolic object of the dicretised equation.
Step12: We now need to rearrange our equation so that the term $u(t+dt, x, y)$ is on the left-hand side, since it represents the next point in time for our state variable $u$. Devito provides a utility called solve, built on top of SymPy's solve, to rearrange our equation so that it represents a valid state update for $u$. Here, we use solve to create a valid stencil for our update to u(t+dt, x, y)
Step13: The right-hand side of the 'update' equation should be a stencil of the shape
Step14: Note that the real power of Devito is hidden within Operator, it will automatically generate and compile the optimized C code. We can look at this code (noting that this is not a requirement of executing it) via
Step15: Second derivatives and high-order stencils
Step16: We can increase the discretisation arbitrarily if we wish to specify higher order FD stencils
Step17: To implement the diffusion or wave equations, we must take the Laplacian $\nabla^2 u$, which is the sum of the second derivatives in all spatial dimensions. For this, Devito also provides a shorthand expression, which means we do not have to hard-code the problem dimension (2D or 3D) in the code. To change the problem dimension we can create another Grid object and use this to re-define our Function's
Step18: We can re-define our function u with a different space_order argument to change the discretisation order of the stencil expression created. For example, we can derive an expression of the 12th-order Laplacian $\nabla^2 u$
Step19: The same expression could also have been generated explicitly via
Step20: Derivatives of composite expressions
Step21: Which can, depending on the chosen discretisation, lead to fairly complex stencils
|
15,253 | <ASSISTANT_TASK:>
Python Code:
# Load libraries
import numpy as np
from sklearn.datasets import load_iris
# Load iris data
iris = load_iris()
# Create feature matrix
X = iris.data
# Create target vector
y = iris.target
# Remove first 40 observations
X = X[40:,:]
y = y[40:]
# Create binary target vector indicating if class 0
y = np.where((y == 0), 0, 1)
# Look at the imbalanced target vector
y
# Indicies of each class' observations
i_class0 = np.where(y == 0)[0]
i_class1 = np.where(y == 1)[0]
# Number of observations in each class
n_class0 = len(i_class0)
n_class1 = len(i_class1)
# For every observation of class 0, randomly sample from class 1 without replacement
i_class1_downsampled = np.random.choice(i_class1, size=n_class0, replace=False)
# Join together class 0's target vector with the downsampled class 1's target vector
np.hstack((y[i_class0], y[i_class1_downsampled]))
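# Hedged counterpart (an added illustration, not in the original recipe): upsample the
# minority class instead, by sampling class 0 with replacement until it matches class 1.
i_class0_upsampled = np.random.choice(i_class0, size=n_class1, replace=True)
np.hstack((y[i_class0_upsampled], y[i_class1]))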
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Dataset
Step2: Make Iris Dataset Imbalanced
Step3: Downsample Majority Class To Match Minority Class
|
15,254 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
15,255 | <ASSISTANT_TASK:>
Python Code:
from bqplot import (LinearScale, Axis, Figure, OrdinalScale,
                    Bars, Lines, Scatter)
# first, let's create two vectors x and y to plot using a Lines mark
import numpy as np
x = np.linspace(-10, 10, 100)
y = np.sin(x)
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label='X')
yax = Axis(scale=ys, orientation='vertical', label='Y')
# 3. Create a Lines mark by passing in the scales
# note that Lines object is stored in `line` which can be used later to update the plot
line = Lines(x=x, y=y, scales={'x': xs, 'y': ys})
# 4. Create a Figure object by assembling marks and axes
fig = Figure(marks=[line], axes=[xax, yax], title='Simple Line Chart')
# 5. Render the figure using display or just as is
fig
# first, let's create two vectors x and y to plot a bar chart
x = list('ABCDE')
y = np.random.rand(5)
# 1. Create the scales
xs = OrdinalScale() # note the use of ordinal scale to represent categorical data
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label='X', grid_lines='none') # no grid lines needed for x
yax = Axis(scale=ys, orientation='vertical', label='Y', tick_format='.0%') # note the use of tick_format to format ticks
# 3. Create a Bars mark by passing in the scales
# note that Bars object is stored in `bar` object which can be used later to update the plot
bar = Bars(x=x, y=y, scales={'x': xs, 'y': ys}, padding=.2)
# 4. Create a Figure object by assembling marks and axes
Figure(marks=[bar], axes=[xax, yax], title='Simple Bar Chart')
# first, let's create two vectors x and y
import numpy as np
x = np.linspace(-10, 10, 25)
y = 3 * x + 5
y_noise = y + 10 * np.random.randn(25) # add some random noise to y
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label='X')
yax = Axis(scale=ys, orientation='vertical', label='Y')
# 3. Create a Lines and Scatter marks by passing in the scales
# additional attributes (stroke_width, colors etc.) can be passed as attributes to the mark objects as needed
line = Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['green'], stroke_width=3)
scatter = Scatter(x=x, y=y_noise, scales={'x': xs, 'y': ys}, colors=['red'], stroke='black')
# 4. Create a Figure object by assembling marks and axes
# pass both the marks (line and scatter) as a list to the marks attribute
Figure(marks=[line, scatter], axes=[xax, yax], title='Scatter and Line')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For creating other marks (like scatter, pie, bars, etc.), only step 3 needs to be changed. Let's look at a simple example to create a bar chart
Step2: Multiple marks can be rendered in a figure. It's as easy as passing a list of marks when constructing the Figure object
|
15,256 | <ASSISTANT_TASK:>
Python Code:
a = -0.7
j_vals = []
kl_vals = []
mus = np.linspace(0,1,100)
for mu in mus:
j_vals.append(J(mu,p_sig,a)[0])
kl_vals.append(KL(mu,p_sig)[0])
fig = plt.figure(figsize=(15,5))
p_vals = p(mus)
plt.plot(mus, p_vals/p_vals.max(), label="$p(x)$")
#plt.plot(mus, j_vals/np.max(np.abs(j_vals)), label='$J$')
plt.plot(mus, j_vals, label='$J$')
plt.plot(mus, kl_vals/np.max(np.abs(kl_vals)), label='$KL$')
plt.title("Divergences with alpha = {}".format(a))
plt.xlabel('$\mu$')
plt.legend()
plt.show()
dj_vals = []
dkl_vals = []
mus = np.linspace(0,1.0,100)
for mu in mus:
dj_vals.append(dJ_dmu(mu,p_sig,a)[0])
dkl_vals.append(dKL_dmu(mu,p_sig)[0])
fig = plt.figure(figsize=(15,5))
p_vals = p(mus)
plt.plot(mus, p_vals/p_vals.max(), label="$p(x)$")
#plt.plot(mus, dj_vals/np.max(np.abs(dj_vals)), label='$\partial J/\partial \mu_q$')
plt.plot(mus, dj_vals, label='$\partial J/\partial \mu_q$')
plt.plot(mus, dkl_vals/np.max(np.abs(dkl_vals)), label='$\partial KL/\partial \mu_q$')
plt.title("Derivative of Divergences with alpha = {}".format(a))
plt.xlabel('$\mu$')
plt.legend()
plt.show()
a = -0.7
j_optims = []
j_maxErrs = []
kl_optims = []
kl_maxErrs = []
tot_mus_list = [1000,2000,3000,4000,5000]
for tot_mus in tot_mus_list:
print("Operating on {} mus...".format(tot_mus))
dj_vals = []
dj_errs = []
dkl_vals = []
dkl_errs = []
mus = np.linspace(0.4,0.6,tot_mus)
for mu in mus:
j_quad, j_err = dJ_dmu(mu,p_sig,a)
dj_vals.append(j_quad)
dj_errs.append(j_err)
kl_quad, kl_err = dKL_dmu(mu,p_sig)
dkl_vals.append(kl_quad)
dkl_errs.append(kl_err)
j_optims.append(mus[np.argmin(np.abs(dj_vals))])
j_maxErrs.append(np.max(dj_errs))
kl_optims.append(mus[np.argmin(np.abs(dkl_vals))])
kl_maxErrs.append(np.max(dkl_errs))
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(121)
ax1.set_title("Error")
ax1.set_xlabel("Number of $\mu$s in Calculation")
ax1.plot(tot_mus_list, j_maxErrs, label="max $J$-err")
ax1.plot(tot_mus_list, kl_maxErrs, label="max $KL$-err")
ax1.legend()
ax2 = fig.add_subplot(122)
ax2.set_title("$|\max{J}-\min{KL}|$")
ax2.set_xlabel("Number of $\mu$s in Calculation")
ax2.plot(tot_mus_list, np.abs(np.array(j_optims)-np.array(kl_optims)), label="error")
ax2.legend()
j_vals = []
kl_vals = []
alphas = np.linspace(-3,0.999,1000)
for a in alphas:
j_vals.append(J(p_mean,p_sig,a)[0])
kl_vals.append(KL(p_mean,p_sig)[0])
fig = plt.figure(figsize=(15,5))
plt.plot(alphas, j_vals, label='$J$')
plt.plot(alphas, kl_vals, label='$KL$')
plt.title("Divergences vs alpha")
plt.xlabel('alpha')
plt.legend()
plt.show()
dj_vals = []
dkl_vals = []
alphas = np.linspace(-3,0.999,1000)
for a in alphas:
dj_vals.append(dJ_dmu(p_mean,p_sig,a)[0])
dkl_vals.append(dKL_dmu(p_mean,p_sig)[0])
fig = plt.figure(figsize=(15,5))
plt.plot(alphas, dj_vals, label='$\partial J/\partial \mu_q$')
plt.plot(alphas, dkl_vals, label='$\partial KL/\partial \mu_q$')
plt.title("Derivative of Divergences vs alpha")
plt.xlabel('alpha')
plt.legend()
plt.show()
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
mu_min = 0.4
mu_max = 0.6
num_mus = 1000
mus = np.linspace(mu_min, mu_max, num_mus)
sig_min = 0.0001
sig_max = 0.01
num_sigs = 1000
sigmas = np.linspace(sig_min, sig_max, num_sigs)
mu,sigma = np.meshgrid(mus,sigmas)
vals = np.array((mu,sigma))
z = np.ndarray(mu.shape)
for i in range(len(mu[0])):
for j in range(len(sigma[0])):
m,s = vals[:,i,j]
z[i,j] = J(m,s,-0.7)[0]
fig = plt.figure(figsize=(16,12))
ax = fig.gca(projection='3d')
ax.plot_surface(mu, sigma, z, rstride=5, cstride=5, alpha=0.3)
cset = ax.contour(mu, sigma, z, zdir='z', offset=-0.1, cmap=cm.coolwarm)
cset = ax.contour(mu, sigma, z, zdir='x', offset=mu_min, cmap=cm.coolwarm)
cset = ax.contour(mu, sigma, z, zdir='y', offset=sig_max, cmap=cm.coolwarm)
ax.set_xlabel('$\mu$')
ax.set_xlim(mu_min, mu_max)
ax.set_ylabel('$\sigma$')
ax.set_ylim(sig_min, sig_max)
ax.set_zlabel('$f$')
ax.set_zlim(-0.1,100)
plt.show()
z[0,0]
len(np.where(np.isnan(z))[0])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Derivative of Divergences as a function of $\mu_q$
Step2: Finding the Zeros of the Derivatives
Step3: Examining the Divergences as a function of $\alpha$
Step4: Derivatives of Divergences vs alpha
Step5: Cost Surface for the Convolution
|
15,257 | <ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig()
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
print len(twenty_train.data)
print twenty_train.data[0]
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(X_train_counts.shape)
from sklearn.feature_extraction.text import TfidfTransformer
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tf, twenty_train.target)
docs_new = ['God is love', 'OpenGL on the GPU is fast', 'learn python']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tf_transformer.transform(X_new_counts)
predicted = clf.predict(X_new_tfidf)
for doc, category in zip(docs_new, predicted):
print('%r => %s' % (doc, twenty_train.target_names[category]))
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 500)
plt.plot(x, np.sin(x))
# plt.show()
plt.plot(x, np.sin(x*2))
plt.title('A simple chirp')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the fetch_20newsgroups module from the sklearn.datasets library
Step2: The full dataset contains 2257 documents of this kind. We will use these 2257 records to train our model and make it smarter.
Step3: The shape of X_train_counts: 2257 entries, each holding counts over the 35788 vocabulary terms.
Step4: Train the model on the data
Step5: Run a quick check on real examples
Step6: Plot with matplotlib
|
15,258 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set_context('notebook')
# Import radon data
srrs2 = pd.read_csv('../data/srrs2.dat')
srrs2.columns = srrs2.columns.map(str.strip)
srrs_mn = srrs2[srrs2.state=='MN']
srrs_mn['fips'] = srrs_mn.stfips*1000 + srrs_mn.cntyfips
cty = pd.read_csv('../data/cty.dat')
cty_mn = cty[cty.st=='MN'].copy()
cty_mn[ 'fips'] = 1000*cty_mn.stfips + cty_mn.ctfips
srrs_mn = srrs_mn.merge(cty_mn[['fips', 'Uppm']], on='fips')
srrs_mn = srrs_mn.drop_duplicates(subset='idnum')
u = np.log(srrs_mn.Uppm)
n = len(srrs_mn)
srrs_mn.head()
srrs_mn.county = srrs_mn.county.map(str.strip)
mn_counties = srrs_mn.county.unique()
counties = len(mn_counties)
county_lookup = dict(zip(mn_counties, range(len(mn_counties))))
county = srrs_mn['county_code'] = srrs_mn.county.replace(county_lookup).values
radon = srrs_mn.activity
srrs_mn['log_radon'] = log_radon = np.log(radon + 0.1).values
floor_measure = srrs_mn.floor.values
srrs_mn.activity.apply(lambda x: np.log(x+0.1)).hist(bins=25)
from pymc3 import Model, sample, Normal, HalfCauchy, Uniform
floor = srrs_mn.floor.values
log_radon = srrs_mn.log_radon.values
with Model() as pooled_model:
beta = Normal('beta', 0, sd=1e5, shape=2)
sigma = HalfCauchy('sigma', 5)
theta = beta[0] + beta[1]*floor
y = Normal('y', theta, sd=sigma, observed=log_radon)
with pooled_model:
pooled_trace = sample(2000)
b0, m0 = pooled_trace['beta', 1000:].mean(axis=0)
plt.scatter(srrs_mn.floor, np.log(srrs_mn.activity+0.1))
xvals = np.linspace(-0.2, 1.2)
plt.plot(xvals, m0*xvals+b0, 'r--')
with Model() as unpooled_model:
beta0 = Normal('beta0', 0, sd=1e5, shape=counties)
beta1 = Normal('beta1', 0, sd=1e5)
sigma = HalfCauchy('sigma', 5)
theta = beta0[county] + beta1*floor
y = Normal('y', theta, sd=sigma, observed=log_radon)
with unpooled_model:
unpooled_trace = sample(2000)
from pymc3 import forestplot
forestplot(unpooled_trace, varnames=['beta0'])
unpooled_estimates = pd.Series(unpooled_trace['beta0'].mean(axis=0), index=mn_counties)
unpooled_se = pd.Series(unpooled_trace['beta0'].std(axis=0), index=mn_counties)
order = unpooled_estimates.order().index
plt.scatter(range(len(unpooled_estimates)), unpooled_estimates[order])
for i, m, se in zip(range(len(unpooled_estimates)), unpooled_estimates[order], unpooled_se[order]):
plt.plot([i,i], [m-se, m+se], 'b-')
plt.xlim(-1,86); plt.ylim(-1,4)
plt.ylabel('Radon estimate');plt.xlabel('Ordered county');
sample_counties = ('LAC QUI PARLE', 'AITKIN', 'KOOCHICHING',
'DOUGLAS', 'CLAY', 'STEARNS', 'RAMSEY', 'ST LOUIS')
fig, axes = plt.subplots(2, 4, figsize=(12, 6), sharey=True, sharex=True)
axes = axes.ravel()
m = unpooled_trace['beta1'].mean()
for i,c in enumerate(sample_counties):
y = srrs_mn.log_radon[srrs_mn.county==c]
x = srrs_mn.floor[srrs_mn.county==c]
axes[i].scatter(x + np.random.randn(len(x))*0.01, y, alpha=0.4)
# No pooling model
b = unpooled_estimates[c]
# Plot both models and data
xvals = np.linspace(-0.2, 1.2)
axes[i].plot(xvals, m*xvals+b)
axes[i].plot(xvals, m0*xvals+b0, 'r--')
axes[i].set_xticks([0,1])
axes[i].set_xticklabels(['basement', 'floor'])
axes[i].set_ylim(-1, 3)
axes[i].set_title(c)
if not i%2:
axes[i].set_ylabel('log radon level')
with Model() as partial_pooling:
# Priors
mu_a = Normal('mu_a', mu=0., sd=1e5)
sigma_a = HalfCauchy('sigma_a', 5)
# Random intercepts
a = Normal('a', mu=mu_a, sd=sigma_a, shape=counties)
# Model error
sigma_y = HalfCauchy('sigma_y',5)
# Expected value
y_hat = a[county]
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)
with partial_pooling:
partial_pooling_trace = sample(2000)
sample_trace = partial_pooling_trace['a', 1000:]
fig, axes = plt.subplots(1, 2, figsize=(14,6), sharex=True, sharey=True)
samples, counties = sample_trace.shape
jitter = np.random.normal(scale=0.1, size=counties)
n_county = srrs_mn.groupby('county')['idnum'].count()
unpooled_means = srrs_mn.groupby('county')['log_radon'].mean()
unpooled_sd = srrs_mn.groupby('county')['log_radon'].std()
unpooled = pd.DataFrame({'n':n_county, 'm':unpooled_means, 'sd':unpooled_sd})
unpooled['se'] = unpooled.sd/np.sqrt(unpooled.n)
axes[0].plot(unpooled.n + jitter, unpooled.m, 'b.')
for j, row in zip(jitter, unpooled.iterrows()):
name, dat = row
axes[0].plot([dat.n+j,dat.n+j], [dat.m-dat.se, dat.m+dat.se], 'b-')
axes[0].set_xscale('log')
axes[0].hlines(sample_trace.mean(), 0.9, 100, linestyles='--')
samples, counties = sample_trace.shape
means = sample_trace.mean(axis=0)
sd = sample_trace.std(axis=0)
axes[1].scatter(n_county.values + jitter, means)
axes[1].set_xscale('log')
axes[1].set_xlim(1,100)
axes[1].set_ylim(0, 3)
axes[1].hlines(sample_trace.mean(), 0.9, 100, linestyles='--')
for j,n,m,s in zip(jitter, n_county.values, means, sd):
axes[1].plot([n+j]*2, [m-s, m+s], 'b-')
with Model() as varying_intercept:
# Priors
mu_a = Normal('mu_a', mu=0., tau=0.0001)
sigma_a = HalfCauchy('sigma_a', 5)
# Random intercepts
a = Normal('a', mu=mu_a, sd=sigma_a, shape=counties)
# Common slope
b = Normal('b', mu=0., sd=1e5)
# Model error
sd_y = HalfCauchy('sd_y', 5)
# Expected value
y_hat = a[county] + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sd=sd_y, observed=log_radon)
with varying_intercept:
varying_intercept_trace = sample(2000)
from pymc3 import forestplot, traceplot, plot_posterior
plt.figure(figsize=(6,10))
forestplot(varying_intercept_trace[1000:], varnames=['a'])
plot_posterior(varying_intercept_trace[1000:], varnames=['sigma_a', 'b'])
from pymc3 import summary
summary(varying_intercept_trace, varnames=['b'])
xvals = np.arange(2)
bp = varying_intercept_trace['a', 1000:].mean(axis=0)
mp = varying_intercept_trace['b', 1000:].mean()
for bi in bp:
plt.plot(xvals, mp*xvals + bi, 'bo-', alpha=0.4)
plt.xlim(-0.1,1.1);
fig, axes = plt.subplots(2, 4, figsize=(12, 6), sharey=True, sharex=True)
axes = axes.ravel()
for i,c in enumerate(sample_counties):
# Plot county data
y = srrs_mn.log_radon[srrs_mn.county==c]
x = srrs_mn.floor[srrs_mn.county==c]
axes[i].scatter(x + np.random.randn(len(x))*0.01, y, alpha=0.4)
# No pooling model
    m = unpooled_trace['beta1'].mean()
    b = unpooled_estimates[c]
xvals = np.linspace(-0.2, 1.2)
# Unpooled estimate
axes[i].plot(xvals, m*xvals+b)
# Pooled estimate
axes[i].plot(xvals, m0*xvals+b0, 'r--')
# Partial pooling esimate
axes[i].plot(xvals, mp*xvals+bp[county_lookup[c]], 'k:')
axes[i].set_xticks([0,1])
axes[i].set_xticklabels(['basement', 'floor'])
axes[i].set_ylim(-1, 3)
axes[i].set_title(c)
if not i%2:
axes[i].set_ylabel('log radon level')
with Model() as varying_slope:
# Priors
mu_b = Normal('mu_b', mu=0., sd=1e5)
sigma_b = HalfCauchy('sigma_b', 5)
# Common intercepts
a = Normal('a', mu=0., sd=1e5)
# Random slopes
b = Normal('b', mu=mu_b, sd=sigma_b, shape=counties)
# Model error
sigma_y = HalfCauchy('sigma_y',5)
# Expected value
y_hat = a + b[county] * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)
with varying_slope:
varying_slope_trace = sample(2000)
plt.figure(figsize=(6,10))
forestplot(varying_slope_trace[1000:], varnames=['b'])
xvals = np.arange(2)
b = varying_slope_trace['a', 1000:].mean()
m = varying_slope_trace['b', 1000:].mean(axis=0)
for mi in m:
plt.plot(xvals, mi*xvals + b, 'bo-', alpha=0.4)
plt.xlim(-0.2, 1.2);
with Model() as varying_intercept_slope:
# Priors
mu_a = Normal('mu_a', mu=0., sd=1e5)
sigma_a = HalfCauchy('sigma_a', 5)
mu_b = Normal('mu_b', mu=0., sd=1e5)
sigma_b = HalfCauchy('sigma_b', 5)
# Random intercepts
a = Normal('a', mu=mu_a, sd=sigma_a, shape=counties)
# Random slopes
b = Normal('b', mu=mu_b, sd=sigma_b, shape=counties)
# Model error
sigma_y = Uniform('sigma_y', lower=0, upper=100)
# Expected value
y_hat = a[county] + b[county] * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)
with varying_intercept_slope:
varying_intercept_slope_trace = sample(2000)
plt.figure(figsize=(6,16))
forestplot(varying_intercept_slope_trace[1000:], varnames=['a','b'])
xvals = np.arange(2)
b = varying_intercept_slope_trace['a', 1000:].mean(axis=0)
m = varying_intercept_slope_trace['b', 1000:].mean(axis=0)
for bi,mi in zip(b,m):
plt.plot(xvals, mi*xvals + bi, 'bo-', alpha=0.4)
plt.xlim(-0.1, 1.1);
from pymc3 import Deterministic
with Model() as hierarchical_intercept:
# Priors
sigma_a = HalfCauchy('sigma_a', 5)
# County uranium model for slope
gamma_0 = Normal('gamma_0', mu=0., sd=1e5)
gamma_1 = Normal('gamma_1', mu=0., sd=1e5)
# Uranium model for intercept
mu_a = gamma_0 + gamma_1*u
# County variation not explained by uranium
eps_a = Normal('eps_a', mu=0, sd=sigma_a, shape=counties)
a = Deterministic('a', mu_a + eps_a[county])
# Common slope
    b = Normal('b', mu=0., sd=1e5)
# Model error
sigma_y = Uniform('sigma_y', lower=0, upper=100)
# Expected value
y_hat = a + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)
with hierarchical_intercept:
hierarchical_intercept_trace = sample(2000)
a_means = hierarchical_intercept_trace['a', 1000:].mean(axis=0)
plt.scatter(u, a_means)
g0 = hierarchical_intercept_trace['gamma_0'].mean()
g1 = hierarchical_intercept_trace['gamma_1'].mean()
xvals = np.linspace(-1, 0.8)
plt.plot(xvals, g0+g1*xvals, 'k--')
plt.xlim(-1, 0.8)
a_se = hierarchical_intercept_trace['a', 1000:].std(axis=0)
for ui, m, se in zip(u, a_means, a_se):
plt.plot([ui,ui], [m-se, m+se], 'b-')
plt.xlabel('County-level uranium'); plt.ylabel('Intercept estimate')
# Create new variable for mean of floor across counties
xbar = srrs_mn.groupby('county')['floor'].mean().rename(county_lookup).values
with Model() as contextual_effect:
# Priors
sigma_a = HalfCauchy('sigma_a', 5)
# County uranium model for slope
gamma = Normal('gamma', mu=0., sd=1e5, shape=3)
# Uranium model for intercept
mu_a = Deterministic('mu_a', gamma[0] + gamma[1]*u.values + gamma[2]*xbar[county])
# County variation not explained by uranium
eps_a = Normal('eps_a', mu=0, sd=sigma_a, shape=counties)
a = Deterministic('a', mu_a + eps_a[county])
# Common slope
b = Normal('b', mu=0., sd=1e15)
# Model error
sigma_y = Uniform('sigma_y', lower=0, upper=100)
# Expected value
y_hat = a + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)
with contextual_effect:
contextual_effect_trace = sample(2000)
forestplot(contextual_effect_trace[1000:], varnames=['gamma'])
summary(contextual_effect_trace[1000:], varnames=['gamma'])
county_lookup['ST LOUIS']
with Model() as contextual_pred:
# Priors
sigma_a = HalfCauchy('sigma_a', 5)
# County uranium model for slope
gamma = Normal('gamma', mu=0., sd=1e5, shape=3)
# Uranium model for intercept
mu_a = Deterministic('mu_a', gamma[0] + gamma[1]*u.values + gamma[2]*xbar[county])
# County variation not explained by uranium
eps_a = Normal('eps_a', mu=0, sd=sigma_a, shape=counties)
a = Deterministic('a', mu_a + eps_a[county])
# Common slope
b = Normal('b', mu=0., sd=1e15)
# Model error
sigma_y = Uniform('sigma_y', lower=0, upper=100)
# Expected value
y_hat = a + b * floor_measure
# Data likelihood
y_like = Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)
# St Louis county prediction
stl_pred = Normal('stl_pred', mu=a[69] + b, sd=sigma_y)
with contextual_pred:
contextual_pred_trace = sample(2000)
plot_posterior(contextual_pred_trace[1000:], varnames=['stl_pred'])
# Write your answer here
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, obtain the county-level predictor, uranium, by combining two variables.
Step2: Use the merge method to combine home- and county-level information in a single DataFrame.
Step3: We also need a lookup table (dict) for each unique county, for indexing.
Step4: Finally, create local copies of variables.
Step5: Distribution of radon levels in MN (log scale)
Step6: Conventional approaches
Step7: Estimates of county radon levels for the unpooled model
Step8: We can plot the ordered estimates to identify counties with high radon levels
Step9: Here are visual comparisons between the pooled and unpooled estimates for a subset of counties representing a range of sample sizes.
Step10: Neither of these models is satisfactory
Step11: Notice the difference between the unpooled and partially-pooled estimates, particularly at smaller sample sizes. The former are both more extreme and more imprecise.
Step12: We can fit the above model using MCMC.
Step13: The estimate for the floor coefficient is approximately -0.66, which can be interpreted as houses without basements having about half ($\exp(-0.66) = 0.52$) the radon levels of those with basements, after accounting for county.
Step14: It is easy to show that the partial pooling model provides more objectively reasonable estimates than either the pooled or unpooled models, at least for counties with small sample sizes.
Step15: Varying slope model
Step16: Varying intercept and slope model
Step17: Adding group-level predictors
Step18: The standard errors on the intercepts are narrower than for the partial-pooling model without a county-level covariate.
Step19: So, we might infer from this that counties with higher proportions of houses without basements tend to have higher baseline levels of radon. Perhaps this is related to the soil type, which in turn might influence what type of structures are built.
Step20: That is,
Step21: Exercise
|
15,259 | <ASSISTANT_TASK:>
Python Code:
import tohu
from tohu.v4.primitive_generators import *
from tohu.v4.derived_generators import *
from tohu.v4.dispatch_generators import *
from tohu.v4.custom_generator import *
from tohu.v4.utils import print_generated_sequence, make_dummy_tuples
print(f'Tohu version: {tohu.__version__}')
class QuuxGenerator(CustomGenerator):
aa = Integer(100, 200)
bb = HashDigest(length=6)
cc = FakerGenerator(method='name')
g = QuuxGenerator()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
class SomeGeneratorWithExplicitItemsName(CustomGenerator):
__tohu_items_name__ = 'Foobar'
aa = Integer(100, 200)
bb = HashDigest(length=6)
cc = FakerGenerator(method='name')
g = SomeGeneratorWithExplicitItemsName()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
class QuuxGenerator(CustomGenerator):
aa = Integer(100, 200)
def __init__(self, faker_method):
self.bb = FakerGenerator(method=faker_method)
# Note: the call to super().__init__() needs to be at the end,
# and it needs to be passed the same arguments as the __init__()
# method from which it is called (here: `faker_method`).
super().__init__(faker_method)
g1 = QuuxGenerator(faker_method='first_name')
g2 = QuuxGenerator(faker_method='city')
print_generated_sequence(g1, num=10, sep='\n', seed=12345); print()
print_generated_sequence(g2, num=10, sep='\n', seed=12345)
some_tuples = make_dummy_tuples('abcdefghijklmnopqrstuvwxyz')
#some_tuples[:5]
class QuuxGenerator(CustomGenerator):
aa = SelectOne(some_tuples)
bb = GetAttribute(aa, 'x')
cc = GetAttribute(aa, 'y')
g = QuuxGenerator()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
def square(x):
return x * x
def add(x, y):
return x + y
class QuuxGenerator(CustomGenerator):
aa = Integer(0, 20)
bb = Integer(0, 20)
cc = Apply(add, aa, Apply(square, bb))
g = QuuxGenerator()
print_generated_sequence(g, num=10, sep='\n', seed=12345)
df = g.generate(num=100, seed=12345).to_df()
print(list(df['aa'][:20]))
print(list(df['bb'][:20]))
print(list(df['cc'][:20]))
all(df['aa'] + df['bb']**2 == df['cc'])
class QuuxGenerator(CustomGenerator):
name = FakerGenerator(method="name")
tag = SelectOne(['a', 'bb', 'ccc'])
g = QuuxGenerator()
quux_items = g.generate(num=100, seed=12345)
quux_items.to_df().head(5)
tag_lookup = {
'a': [1, 2, 3, 4, 5],
'bb': [10, 20, 30, 40, 50],
'ccc': [100, 200, 300, 400, 500],
}
class FoobarGenerator(CustomGenerator):
some_quux = SelectOne(quux_items)
number = SelectOneDerived(Lookup(GetAttribute(some_quux, 'tag'), tag_lookup))
h = FoobarGenerator()
h_items = h.generate(10000, seed=12345)
df = h_items.to_df(fields={'name': 'some_quux.name', 'tag': 'some_quux.tag', 'number': 'number'})
df.head()
print(df.query('tag == "a"')['number'].isin([1, 2, 3, 4, 5]).all())
print(df.query('tag == "bb"')['number'].isin([10, 20, 30, 40, 50]).all())
print(df.query('tag == "ccc"')['number'].isin([100, 200, 300, 400, 500]).all())
df.query('tag == "a"').head(5)
df.query('tag == "bb"').head(5)
df.query('tag == "ccc"').head(5)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Custom generator without __init__ method
Step2: Explicitly setting the name of generated items
Step3: The generated sequence is the same as above, but the name of the items has changed from Quux to Foobar.
Step4: Custom generator with __init__ method
Step5: Custom generator containing derived generators
Step6: Example
Step7: Example
Step8: Example
|
15,260 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
#load data from scikit-learn library
dataset = load_boston()
#load data as DataFrame
houses = pd.DataFrame(dataset.data, columns=dataset.feature_names)
#add target data to DataFrame
houses['target'] = dataset.target
#print first 5 entries of data
print houses.head()
print dataset['DESCR']
# Create a datset of correlations between house features
corrmat = houses.corr()
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(9, 6))
# Draw the heatmap using seaborn
sns.set_context("notebook", font_scale=0.7, rc={"lines.linewidth": 1.5})
sns.heatmap(corrmat, annot=True, square=True)
f.tight_layout()
sns.jointplot(houses['target'], houses['RM'], kind='hex')
sns.jointplot(houses['target'], houses['LSTAT'], kind='hex')
# convert housing data to numpy format
houses_array = houses.as_matrix().astype(float)
# split data into feature and target sets
X = houses_array[:, :-1]
y = houses_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# split data into training and test sets
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
# helper variables
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = 1
# Hyper-parameters
batch_size = 50
num_hidden_1 = 10
num_hidden_2 = 5
learning_rate = 0.0001
training_epochs = 900
dropout_keep_prob = 0.4  # probability of keeping each neuron active (1.0 = no dropout)
# variable to control the resolution at which the training results are stored
display_step = 1
def accuracy(predictions, targets):
error = np.absolute(predictions.reshape(-1) - targets)
return np.mean(error)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
'''First we create a variable to store our graph'''
graph = tf.Graph()
'''Next we build our neural network within this graph variable'''
with graph.as_default():
'''Our training data will come in as x feature data and
y target data. We need to create tensorflow placeholders
to capture this data as it comes in'''
x = tf.placeholder(tf.float32, shape=(None, num_features))
_y = tf.placeholder(tf.float32, shape=(None))
'''Another placeholder stores the hyperparameter
that controls dropout'''
keep_prob = tf.placeholder(tf.float32)
'''Finally, we convert the test and train feature data sets
to tensorflow constants so we can use them to generate
predictions on both data sets'''
tf_X_test = tf.constant(X_test, dtype=tf.float32)
tf_X_train = tf.constant(X_train, dtype=tf.float32)
'''Next we create the parameter variables for the model.
Each layer of the neural network needs it's own weight
and bias variables which will be tuned during training.
The sizes of the parameter variables are determined by
the number of neurons in each layer.'''
W_fc1 = weight_variable([num_features, num_hidden_1])
b_fc1 = bias_variable([num_hidden_1])
W_fc2 = weight_variable([num_hidden_1, num_hidden_2])
b_fc2 = bias_variable([num_hidden_2])
W_fc3 = weight_variable([num_hidden_2, num_outputs])
b_fc3 = bias_variable([num_outputs])
'''Next, we define the forward computation of the model.
We do this by defining a function model() which takes in
a set of input data, and performs computations through
the network until it generates the output.'''
def model(data, keep):
# computing first hidden layer from input, using relu activation function
fc1 = tf.nn.sigmoid(tf.matmul(data, W_fc1) + b_fc1)
# adding dropout to first hidden layer
fc1_drop = tf.nn.dropout(fc1, keep)
# computing second hidden layer from first hidden layer, using relu activation function
fc2 = tf.nn.sigmoid(tf.matmul(fc1_drop, W_fc2) + b_fc2)
# adding dropout to second hidden layer
fc2_drop = tf.nn.dropout(fc2, keep)
# computing output layer from second hidden layer
# the output is a single neuron which is directly interpreted as the prediction of the target value
fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3
# the output is returned from the function
return fc3
'''Next we define a few calls to the model() function which
will return predictions for the current batch input data (x),
as well as the entire test and train feature set'''
prediction = model(x, keep_prob)
test_prediction = model(tf_X_test, 1.0)
train_prediction = model(tf_X_train, 1.0)
'''Finally, we define the loss and optimization functions
which control how the model is trained.
For the loss we will use the basic mean square error (MSE) function,
which tries to minimize the MSE between the predicted values and the
real values (_y) of the input dataset.
For the optimization function we will use basic Gradient Descent (SGD)
which will minimize the loss using the specified learning rate.'''
loss = tf.reduce_mean(tf.square(tf.sub(prediction, _y)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
'''We also create a saver variable which will allow us to
save our trained model for later use'''
saver = tf.train.Saver()
# create an array to store the results of the optimization at each epoch
results = []
'''First we open a session of Tensorflow using our graph as the base.
While this session is active all the parameter values will be stored,
and each step of training will be using the same model.'''
with tf.Session(graph=graph) as session:
'''After we start a new session we first need to
initialize the values of all the variables.'''
tf.initialize_all_variables().run()
print('Initialized')
'''Now we iterate through each training epoch based on the hyper-parameter set above.
Each epoch represents a single pass through all the training data.
The total number of training steps is determined by the number of epochs and
the size of mini-batches relative to the size of the entire training set.'''
for epoch in range(training_epochs):
'''At the beginning of each epoch, we create a set of shuffled indexes
so that we are using the training data in a different order each time'''
indexes = range(num_samples)
random.shuffle(indexes)
'''Next we step through each mini-batch in the training set'''
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
'''We subset the feature and target training sets to create each mini-batch'''
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
'''Then, we create a 'feed dictionary' that will feed this data,
along with any other hyper-parameters such as the dropout probability,
to the model'''
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
'''Finally, we call the session's run() function, which will feed in
the current training data, and execute portions of the graph as necessary
to return the data we ask for.
The first argument of the run() function is a list specifying the
model variables we want it to compute and return from the function.
The most important is 'optimizer' which triggers all calculations necessary
to perform one training step. We also include 'loss' and 'prediction'
because we want these as ouputs from the function so we can keep
track of the training process.
The second argument specifies the feed dictionary that contains
all the data we want to pass into the model at each training step.'''
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
'''At the end of each epoch, we will calcule the error of predictions
on the full training and test data set. We will then store the epoch number,
along with the mini-batch, training, and test accuracies to the 'results' array
so we can visualize the training process later. How often we save the data to
this array is specified by the display_step variable created above'''
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
'''Once training is complete, we will save the trained model so that we can use it later'''
save_path = saver.save(session, "model_houses.ckpt")
print("Model saved in file: %s" % save_path)
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Minimum test loss:", np.min(df["test_acc"])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
Step2: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
Step3: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
Step4: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail
Step5: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
Step6: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include
Step7: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
Step8: Now we are ready to build our neural network model in Tensorflow.
Step9: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we
Step10: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps.
|
15,261 | <ASSISTANT_TASK:>
Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../../style/custom.css'
HTML(open(css_file, "r").read())
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
from mpl_toolkits.axes_grid1 import make_axes_locatable
# Definition of modelling parameters
# ----------------------------------
# Define model discretization
nx = 500 # number of grid points in x-direction
nz = 174 # number of grid points in z-direction
dx = 20.0 # spatial grid point distance in x-direction (m)
dz = dx # spatial grid point distance in z-direction (m)
# Define xmax, zmax
xmax = nx * dx
zmax = nz * dz
# Define maximum recording time
tmax = 6.0 # maximum wave propagation time (s)
# Define source and receiver position
xsrc = 2000.0 # x-source position (m)
zsrc = 40.0 # z-source position (m)
xrec = 8000.0 # x-receiver position (m)
zrec = 40.0 # z-source position (m)
f0 = 10 # dominant frequency of the source (Hz)
print("f0 = ", f0, " Hz")
t0 = 4.0/f0 # source time shift (s)
isnap = 2 # snapshot interval (timesteps)
# Calculate monochromatic wavefields for discrete frequency freq
freq = 5.0 # discrete frequency (Hz)
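# Background (standard approach assumed here): the monochromatic wavefield at
# angular frequency w = 2*pi*freq is accumulated by an on-the-fly DFT of the
# time-domain pressure field,
#     P(x, z, w) = sum_n p(x, z, n*dt) * exp(-1j * w * n * dt) * dt,
# so only its real and imaginary parts need to be stored during time stepping.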
@jit(nopython=True) # use JIT for C-performance
def update_d2px_d2pz_3pt(p, dx, dz, nx, nz, d2px, d2pz):
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2
return d2px, d2pz
# Define simple absorbing boundary frame based on wavefield damping
# according to Cerjan et al., 1985, Geophysics, 50, 705-708
def absorb(nx,nz):
FW = 60 # thickness of absorbing frame (gridpoints)
a = 0.0053
coeff = np.zeros(FW)
# define coefficients in absorbing frame
for i in range(FW):
coeff[i] = np.exp(-(a**2 * (FW-i)**2))
# initialize array of absorbing coefficients
absorb_coeff = np.ones((nx,nz))
# compute coefficients for left grid boundaries (x-direction)
zb=0
for i in range(FW):
ze = nz - i - 1
for j in range(zb,ze):
absorb_coeff[i,j] = coeff[i]
# compute coefficients for right grid boundaries (x-direction)
zb=0
for i in range(FW):
ii = nx - i - 1
ze = nz - i - 1
for j in range(zb,ze):
absorb_coeff[ii,j] = coeff[i]
# compute coefficients for bottom grid boundaries (z-direction)
xb=0
for j in range(FW):
jj = nz - j - 1
xb = j
xe = nx - j
for i in range(xb,xe):
absorb_coeff[i,jj] = coeff[j]
return absorb_coeff
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
def FD_2D_acoustic_JIT(vp,dt,dx,dz,f0,xsrc,zsrc,op,freq):
# calculate number of time steps nt
# ---------------------------------
nt = (int)(tmax/dt)
# locate source on Cartesian FD grid
# ----------------------------------
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in x-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# define clip value: 0.1 * absolute maximum value of source wavelet
clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2
# Define absorbing boundary frame
# -------------------------------
absorb_coeff = absorb(nx,nz)
# Define squared vp-model
# -----------------------
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# INITIALIZE ARRAYS FOR REAL AND IMAGINARY PARTS OF MONOCHROMATIC WAVEFIELDS HERE!
# --------------------------------------------------------------------------------
# real part of the monochromatic wavefield
# imaginary part of the monochromatic wavefield
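    # Editor's sketch (hedged): one possible way to fill in the step above; the
    # names pr/pi are assumptions, not necessarily the original solution.
    pr = np.zeros((nx, nz))  # real part of the monochromatic wavefield
    pi = np.zeros((nx, nz))  # imaginary part of the monochromatic wavefield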
    # Initialize animation of pressure wavefield
# -----------------------------------------
fig = plt.figure(figsize=(7,3)) # define figure size
extent = [0.0,xmax,zmax,0.0] # define model extension
# Plot Vp-model
image = plt.imshow((vp.T)/1000, cmap=plt.cm.gray, interpolation='nearest',
extent=extent)
# Plot pressure wavefield movie
image1 = plt.imshow(p.T, animated=True, cmap="RdBu", alpha=.75, extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Pressure wavefield')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
plt.ion()
plt.show(block=False)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
if(op==3):
d2px, d2pz = update_d2px_d2pz_3pt(p, dx, dz, nx, nz, d2px, d2pz)
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Apply absorbing boundary frame
# ------------------------------
p *= absorb_coeff
pnew *= absorb_coeff
# Remap Time Levels
# -----------------
pold, p = p, pnew
# Calculate frequency domain wavefield at discrete frequency freq by DFT
# ----------------------------------------------------------------------
# time
# real part
# imaginary part
        # Estimate real and imaginary part of pressure wavefield p by DFT
# --------------------------------------------------------------
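        # Editor's sketch (hedged), consistent with the pr/pi arrays assumed above:
        # accumulate the DFT of p at the single discrete frequency freq.
        time_it = it * dt
        pr += p * np.cos(2.0 * np.pi * freq * time_it) * dt
        pi -= p * np.sin(2.0 * np.pi * freq * time_it) * dt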
# display pressure snapshots
if (it % isnap) == 0:
image1.set_data(p.T)
fig.canvas.draw()
# Finalize computation of DFT
# Return real and imaginary parts of the monochromatic wavefield
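    # Editor's sketch (hedged): with the dt scaling already applied in the loop
    # above, simply return the accumulated fields, matching how the function is
    # called below (e.g. p_hom_re, p_hom_im = FD_2D_acoustic_JIT(...)).
    return pr, pi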
# Run 2D acoustic FD modelling with 3-point spatial operator
# ----------------------------------------------------------
%matplotlib notebook
op = 3 # define spatial FD operator (3-point)
# define homogeneous model with vp = 2500 m/s
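# Editor's sketch (hedged): a homogeneous model, 2500 m/s as stated in the comment above.
vp_hom = 2500.0 * np.ones((nx, nz))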
# Define time step
dt = dx / (np.sqrt(2) * np.max(vp_hom))# time step (s)
p_hom_re, p_hom_im = FD_2D_acoustic_JIT(vp_hom,dt,dx,dz,f0,xsrc,zsrc,op,freq)
%matplotlib notebook
# Plot real and imaginary parts of monochromatic wavefields
clip_seis = 5e-10
extent_seis = [0.0,xmax/1000,zmax/1000,0.0]
ax = plt.subplot(211)
plt.imshow(p_hom_re.T, cmap=plt.cm.RdBu, aspect=1, vmin=-clip_seis,
vmax=clip_seis, extent=extent_seis)
plt.title('Real part of monochromatic wavefield')
#plt.xlabel('x [km]')
ax.set_xticks([])
plt.ylabel('z [km]')
plt.subplot(212)
plt.imshow(p_hom_im.T, cmap=plt.cm.RdBu, aspect=1, vmin=-clip_seis,
vmax=clip_seis, extent=extent_seis)
plt.title('Imaginary part of monochromatic wavefield')
plt.xlabel('x [km]')
plt.ylabel('z [km]')
plt.tight_layout()
plt.show()
# Import FATT result for Marmousi-2 Vp model
# ------------------------------------------
# Define model filename
name_vp = "../marmousi-2/marmousi_II_fatt.vp"
# Open file and write binary data to vp
f = open(name_vp)
data_type = np.dtype ('float32').newbyteorder ('<')
vp_fatt = np.fromfile (f, dtype=data_type)
# Reshape (1 x nx*nz) vector to (nx x nz) matrix
vp_fatt = vp_fatt.reshape(nx,nz)
# Plot Marmousi-2 vp-model
# ------------------------
%matplotlib notebook
extent = [0, xmax/1000, zmax/1000, 0]
fig = plt.figure(figsize=(7,3)) # define figure size
image = plt.imshow((vp_fatt.T)/1000, cmap=plt.cm.viridis, interpolation='nearest',
extent=extent)
cbar = plt.colorbar(aspect=12, pad=0.02)
cbar.set_label('Vp [km/s]', labelpad=10)
plt.title('Marmousi-2 FATT model')
plt.xlabel('x [km]')
plt.ylabel('z [km]')
plt.show()
# Run 2D acoustic FD modelling with 3-point spatial operator
# ----------------------------------------------------------
%matplotlib notebook
op = 3 # define spatial FD operator (3-point)
# Define time step by CFL criterion
dt = dx / (np.sqrt(2) * np.max(vp_fatt))# time step (s)
p_fatt_re, p_fatt_im = FD_2D_acoustic_JIT(vp_fatt,dt,dx,dz,f0,xsrc,zsrc,op,freq)
%matplotlib notebook
# Plot real and imaginary parts of monochromatic wavefields
clip_seis = 5e-9
extent_seis = [0.0,xmax/1000,zmax/1000,0.0]
ax = plt.subplot(211)
plt.imshow(p_fatt_re.T, cmap=plt.cm.RdBu, aspect=1, vmin=-clip_seis,
vmax=clip_seis, extent=extent_seis)
plt.title('Real part of monochromatic wavefield')
#plt.xlabel('x [km]')
ax.set_xticks([])
plt.ylabel('z [km]')
plt.subplot(212)
plt.imshow(p_fatt_im.T, cmap=plt.cm.RdBu, aspect=1, vmin=-clip_seis,
vmax=clip_seis, extent=extent_seis)
plt.title('Imaginary part of monochromatic wavefield')
plt.xlabel('x [km]')
plt.ylabel('z [km]')
plt.tight_layout()
plt.show()
# COMPUTE RECEIVER GREEN'S FUNCTION FOR HOMOGENEOUS MODEL HERE!
# -------------------------------------------------------------
%matplotlib notebook
op = 3 # define spatial FD operator (3-point)
# Define time step
dt = dx / (np.sqrt(2) * np.max(vp_hom))# time step (s)
g_hom_re, g_hom_im = FD_2D_acoustic_JIT(vp_hom,dt,dx,dz,f0,xsrc,zsrc,op,freq)
%matplotlib notebook
# COMPUTE SENSITIVITY KERNEL FOR HOMOGENEOUS MODEL HERE!
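# Editor's sketch (hedged): one simple way to assemble a kernel from the monochromatic
# source wavefield (p_hom_re/im) and receiver Green's function (g_hom_re/im) is the
# zero-lag product Re(p * g); the original course code may apply additional omega/vp
# scaling, so the clip value below may need adjusting.
K_hom = p_hom_re * g_hom_re - p_hom_im * g_hom_im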
clip = 4e-18
extent = [0.0,xmax/1000,zmax/1000,0.0] # define model extension
fig = plt.figure(figsize=(7,3)) # define figure size
# Plot Vp-model
image = plt.imshow((vp_hom.T)/1000, cmap=plt.cm.gray, interpolation='nearest',
extent=extent)
# Plot Sensitivity Kernel
image1 = plt.imshow(K_hom.T, cmap="RdBu", alpha=.75, extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Sensitivity Kernel (homogeneous model)')
plt.xlabel('x [km]')
plt.ylabel('z [km]')
plt.show()
# COMPUTE RECEIVER GREEN'S FUNCTION FOR MARMOUSI-2 FATT MODEL HERE!
# -----------------------------------------------------------------
%matplotlib notebook
op = 3 # define spatial FD operator (3-point)
# Define time step
dt = dx / (np.sqrt(2) * np.max(vp_fatt))# time step (s)
g_fatt_re, g_fatt_im = FD_2D_acoustic_JIT(vp_fatt,dt,dx,dz,f0,xsrc,zsrc,op,freq)
%matplotlib notebook
# COMPUTE SENSITIVITY KERNEL FOR Marmousi-2 FATT MODEL HERE!
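# Editor's sketch (hedged): same construction as in the homogeneous case; the name
# K_fatt is an assumption, and the plot cell below was adjusted to use it.
K_fatt = p_fatt_re * g_fatt_re - p_fatt_im * g_fatt_im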
clip = 8e-17
extent = [0.0,xmax/1000,zmax/1000,0.0] # define model extension
fig = plt.figure(figsize=(7,3)) # define figure size
# Plot Vp-model
image = plt.imshow((vp_fatt.T)/1000, cmap=plt.cm.gray, interpolation='nearest',
extent=extent)
# Plot Sensitivity Kernel
image1 = plt.imshow(K_fatt.T, cmap="RdBu", alpha=.5, extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Sensitivity Kernel (Marmousi-2 FATT model)')
plt.xlabel('x [km]')
plt.ylabel('z [km]')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computation of Sensitivity Kernels by 2D acoustic FD modelling
Step2: As always, we start with the definition of the basic modelling parameters ...
Step3: ... define a JIT-ed function for the spatial FD approximation ...
Step4: ... initialize the absorbing boundary frame ...
Step5: In the 2D FD acoustic modelling code, we implement the DFT of the time-domain wavefields by initializing the real and imaginary parts of the pressure wavefields, calculating the trigonometric factors for the DFT within the time loop, applying the DFT to the pressure wavefield p for the discrete frequency freq, and finally returning the real and imaginary parts of the frequency-domain wavefields ...
Step6: Modelling monochromatic frequency domain wavefields for a homogeneous acoustic medium
Step7: The time domain wavefields seem to be correct. Let's take a look at the frequency domain wavefield
Step8: Modelling monochromatic frequency domain wavefields for the Marmousi-2 FATT model
Step9: ... and run the time-domain modelling code
Step10: Sensitivity Kernels
|
15,262 | <ASSISTANT_TASK:>
Python Code:
import numpy
from matplotlib import pyplot
%matplotlib notebook
def RHS(U, dx):
    """RHS term.

    Parameters
    ----------
    U : array
        contains [phi, phi_t, phi_x] at each point
    dx : double
        grid spacing

    Returns
    -------
    dUdt : array
        contains the required time derivatives
    """
#
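    # Editor's sketch (hedged, not the original solution): first-order system for
    # the wave equation with unit wave speed, using central differences in x.
    dUdt = numpy.zeros_like(U)
    dUdt[0, 1:-1] = U[1, 1:-1]                            # d(phi)/dt   = phi_t
    dUdt[1, 1:-1] = (U[2, 2:] - U[2, :-2]) / (2.0 * dx)   # d(phi_t)/dt = d(phi_x)/dx
    dUdt[2, 1:-1] = (U[1, 2:] - U[1, :-2]) / (2.0 * dx)   # d(phi_x)/dt = d(phi_t)/dx
    return dUdt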
def apply_boundaries(dUdt):
    """Periodic boundaries"""
#
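    # Editor's sketch (hedged): copy interior values into the ghost cells at each
    # end so that the update term is periodic.
    dUdt[:, 0] = dUdt[:, -2]
    dUdt[:, -1] = dUdt[:, 1]
    return dUdt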
def grid(Npoints):
    """Npoints is the number of interior points"""
dx = 2.0 / Npoints
return dx, numpy.linspace(-1.0-dx/2.0, 1.0+dx/2.0, Npoints+2)
def RK2_step(U, RHS, apply_boundaries, dt, dx):
    """RK2 method"""
#
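    # Editor's sketch (hedged): standard midpoint RK2 step built from the RHS and
    # boundary functions passed in.
    U_half = U + 0.5 * dt * apply_boundaries(RHS(U, dx))
    return U + dt * apply_boundaries(RHS(U_half, dx))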
def initial_data(x):
    """Set the initial data. x are the coordinates. U (phi, phi_t, phi_x) are the variables."""
U = numpy.zeros((3, len(x)))
U[0, :] = numpy.exp(-20.0 * x**2)
U[2, :] = -40.0*x*numpy.exp(-20.0 * x**2)
return U
Npoints = 50
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(1.0 / dt)
for n in range(Nsteps):
#
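    # Editor's sketch (hedged): advance the solution by one RK2 step per iteration.
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)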
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=1$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
Npoints = 50
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
for n in range(Nsteps):
#
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
def error_norms(U, U_initial):
    """Error norms (1, 2, infinity)"""
N = len(U)
error_1 = numpy.sum(numpy.abs(U - U_initial))/N
error_2 = numpy.sqrt(numpy.sum((U - U_initial)**2)/N)
error_inf = numpy.max(numpy.abs(U - U_initial))
return error_1, error_2, error_inf
Npoints_all = 50 * 2**(numpy.arange(0, 6))
dxs = numpy.zeros((len(Npoints_all,)))
wave_errors = numpy.zeros((3, len(Npoints_all)))
for i, Npoints in enumerate(Npoints_all):
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
for n in range(Nsteps):
#
dxs[i] = dx
wave_errors[:, i] = error_norms(U[0, :], U0[0, :])
pyplot.figure()
pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r"${\cal E}_1$")
pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r"${\cal E}_2$")
pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r"${\cal E}_{\infty}$")
pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**4, 'k-', label=r"$p=4$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error norm")
pyplot.legend(loc="lower right")
pyplot.show()
def initial_data_asymmetric(x):
    """Set the initial data. x are the coordinates. U (phi, phi_t, phi_x) are the variables."""
U = numpy.zeros((3, len(x)))
U[0, :] = numpy.sin(numpy.pi*x)*(1-x)**2*(1+x)**3
U[2, :] = numpy.pi*numpy.cos(numpy.pi*x)*(1-x)**2*(1+x)**3 + numpy.sin(numpy.pi*x)*(2.0*(1-x)*(1+x)**3 + 3.0*(1-x)**2*(1+x)**2)
return U
Npoints_all = 50 * 2**(numpy.arange(0, 6))
dxs = numpy.zeros((len(Npoints_all,)))
wave_errors = numpy.zeros((3, len(Npoints_all)))
for i, Npoints in enumerate(Npoints_all):
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data_asymmetric(x)
U = initial_data_asymmetric(x)
Nsteps = int(2.0 / dt)
for n in range(Nsteps):
#
dxs[i] = dx
wave_errors[:, i] = error_norms(U[0, :], U0[0, :])
pyplot.figure()
pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r"${\cal E}_1$")
pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r"${\cal E}_2$")
pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r"${\cal E}_{\infty}$")
pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**2, 'k-', label=r"$p=2$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error norm")
pyplot.legend(loc="lower right")
pyplot.show()
Npoints = 50
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0/dt)
for n in range(Nsteps):
#
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
Npoints = 200
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0/dt)
for n in range(Nsteps):
#
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
Npoints = 400
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0/dt)
for n in range(Nsteps):
#
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We start by implementing the right-hand-side of the evolution
Step4: We see that this doesn't give us the update term at the edges of the domain. We'll enforce that the domain is periodic as a simple boundary condition. Usually this would be an outgoing wave type boundary condition, but anything that fixes the update term at the boundary is fine.
Step6: Then we fix the grid. To work with the periodic domain we need to stagger the grid away from the boundaries. We'll fix the domain to be $x \in [-1, 1]$
Step8: We take, with some changes in notation, the RK2 method from earlier. This will take only one step.
Step10: There are only two things we need to fix. One is the timestep. For now, we'll set it to $\Delta t = \Delta x / 4$. The second is the initial data. We will choose the initial data to be a time symmetric gaussian,
Step11: Now we can evolve
Step12: We can see the expected behaviour
Step14: Convergence
Step16: This fourth order convergence is an artefact of the initial data and boundary conditions, which are perfectly symmetric. If we change the initial data to make it asymmetric, we'll get something much closer to second order
Step17: Courant limits
Step18: The result doesn't look too bad, but the numerical approximation is actually bigger than the correct solution. What happens as we increase resolution?
Step19: The bulk of the solution looks good, but there's small oscillations at the edges. Increase resolution a bit further
|
15,263 | <ASSISTANT_TASK:>
Python Code:
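# Editor's note (hedged): the cells below assume the standard-library and plotting
# imports shown here, plus a VP-tree implementation providing VpNode and VpSearch;
# that module is not shown in this notebook, so the import below is a placeholder.
import math
import random
import time
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
# from vptree import VpNode, VpSearch  # hypothetical import path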
def find_similar_pt():
rn = lambda: random.randint(0, 10000)
aset = [(rn(), rn()) for i in range(40000)]
q = (rn(), rn())
rad = 9990
distance = lambda a, b: math.sqrt(sum([((x-y)**2) for x, y in zip(a, b)]))
s = time.time()
print("creating vptree...")
root = VpNode(aset, distance=distance)
print("vptree created", time.time() - s)
s = time.time()
print("searching...")
se = VpSearch(root, q, rad, 30)
#out = se.search()
out = se.knn()
for k, v in sorted(se.stat.items()):
print(k, v)
print("number of resultes: %s" % len(out))
print("vptree search done, searching time", time.time() - s)
projx = lambda x: map(lambda y: y[0], x)
projy = lambda x: map(lambda y: y[1], x)
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].scatter(list(projx(aset)), list(projy(aset)), s = 20, alpha=0.1)
ax[0].scatter([q[0]], [q[1]], s = 40, color='g')
ax[0].scatter(list(projx(out)), list(projy(out)), s = 10, color='r')
ax[0].annotate("query", xy=q)
ax[1].scatter([q[0]], [q[1]], s = 40, color='g')
ax[1].scatter(list(projx(out)), list(projy(out)), s = 10, color='r')
plt.show()
find_similar_pt()
def tsmaker(m, s, j):
"returns metadata and a time series in the shape of a jittered normal"
t = np.arange(0.0, 1.0, 0.01)
v = norm.pdf(t, m, s) + j*np.random.randn(100)
return (t, v)
mus = np.random.uniform(low=0.0, high=1.0, size=50)
sigs = np.random.uniform(low=0.05, high=0.4, size=50)
jits = np.random.uniform(low=0.05, high=0.2, size=50)
ts_set = [tsmaker(m, s, j) for i, m, s, j in zip(range(50), mus, sigs, jits)]
ts_set[0][1]
def find_similar_ts():
rn = lambda: random.randint(0, 10000)
aset = [tsmaker(m, s, j) for i, m, s, j in zip(range(50), mus, sigs, jits)]
q = tsmaker(mus[1], sigs[1], jits[1])
rad = 9990
distance = lambda a, b: math.sqrt(sum([((x-y)**2) for x, y in zip(a[1], b[1])]))
s = time.time()
print("creating vptree...")
root = VpNode(aset, distance=distance)
print("vptree created", time.time() - s)
s = time.time()
print("searching...")
se = VpSearch(root, q, rad, 5)
#out = se.search()
out = se.knn()
for k, v in sorted(se.stat.items()):
print(k, v)
print("number of resultes: %s s" % len(out))
print("vptree search done", time.time() - s)
plt.plot(q[1], label='original timeseries', linewidth=2)
plt.plot(out[0][1], label='similar_1')
plt.plot(out[1][1], label='similar_2')
plt.plot(out[2][1], label='similar_3')
plt.legend()
plt.show()
find_similar_ts()
find_similar_ts()
find_similar_ts()
def levenshtein(a,b):
"Calculates the Levenshtein distance between a and b."
n, m = len(a), len(b)
if n > m:
# Make sure n <= m, to use O(min(n,m)) space
a,b = b,a
n,m = m,n
current = range(n+1)
for i in range(1,m+1):
previous, current = current, [i]+[0]*n
for j in range(1,n+1):
add, delete = previous[j]+1, current[j-1]+1
change = previous[j-1]
if a[j-1] != b[i-1]:
change = change + 1
current[j] = min(add, delete, change)
return current[n]
def find_similar_ts(file_name):
f = open(file_name)
next(f)
aset = [w[:-1] for w in f]
rad = 1
distance = levenshtein
s = time.time()
print("\ninput set %s points" % len(aset))
print("creating tree...")
root = VpNode(aset, distance=distance)
print("created: %s nodes" % VpNode.ids)
print("done in s: %s" % (time.time() - s))
print("searching...")
while True:
q = input(">>")
s = time.time()
se = VpSearch(root, q, rad, 10)
out = se.knn()
print(se.stat)
print("founded %s results:" % len(out))
count = 1
print("\n".join(out))
print("done in s: %s" % (time.time() - s))
find_similar_ts('wordsEn.txt')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we find the top 30 closest points to the query point in a set of 40,000 tuples. The graph below shows the full point set with the query and the returned neighbours highlighted.
Step2: 2. VPTREE on timeseries
Step3: 3. VPTREE on text corpus
Step4: Note
|
15,264 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'Name': ['Name1', 'Name2', 'Name3'],
'2001': [2, 1, 0],
'2002': [5, 4, 5],
'2003': [0, 2, 0],
'2004': [0, 0, 0],
'2005': [4, 4, 0],
'2006': [6, 0, 2]})
def g(df):
cols = list(df)[1:]
cols = cols[::-1]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
s += df.loc[idx, col]
cnt += 1
df.loc[idx, col] = s / (max(cnt, 1))
return df
df = g(df.copy())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,265 | <ASSISTANT_TASK:>
Python Code:
# Import all the libraries we'll be using
import numpy as np
import statsmodels.api as sm
from statsmodels import regression, stats
import statsmodels
import matplotlib.pyplot as plt
residuals = np.random.normal(0, 1, 100)
_, pvalue, _, _ = statsmodels.stats.stattools.jarque_bera(residuals)
print pvalue
residuals = np.random.poisson(size = 100)
_, pvalue, _, _ = statsmodels.stats.stattools.jarque_bera(residuals)
print pvalue
# Artificially create dataset with constant variance around a line
xs = np.arange(100)
y1 = xs + 3*np.random.randn(100)
# Get results of linear regression
slr1 = regression.linear_model.OLS(y1, sm.add_constant(xs)).fit()
# Construct the fit line
fit1 = slr1.params[0] + slr1.params[1]*xs
# Plot data and regression line
plt.scatter(xs, y1)
plt.plot(xs, fit1)
plt.title('Homoskedastic errors');
plt.legend(['Predicted', 'Observed'])
plt.xlabel('X')
plt.ylabel('Y');
# Artificially create dataset with changing variance around a line
y2 = xs*(1 + .5*np.random.randn(100))
# Perform linear regression
slr2 = regression.linear_model.OLS(y2, sm.add_constant(xs)).fit()
fit2 = slr2.params[0] + slr2.params[1]*xs
# Plot data and regression line
plt.scatter(xs, y2)
plt.plot(xs, fit2)
plt.title('Heteroskedastic errors')
plt.legend(['Predicted', 'Observed'])
plt.xlabel('X')
plt.ylabel('Y')
# Print summary of regression results
slr2.summary()
residuals1 = y1-fit1
residuals2 = y2-fit2
xs_with_constant = sm.add_constant(xs)
_, jb_pvalue1, _, _ = statsmodels.stats.stattools.jarque_bera(residuals1)
_, jb_pvalue2, _, _ = statsmodels.stats.stattools.jarque_bera(residuals2)
print "p-value for residuals1 being normal", jb_pvalue1
print "p-value for residuals2 being normal", jb_pvalue2
_, pvalue1, _, _ = stats.diagnostic.het_breushpagan(residuals1, xs_with_constant)
_, pvalue2, _, _ = stats.diagnostic.het_breushpagan(residuals2, xs_with_constant)
print "p-value for residuals1 being heteroskedastic", pvalue1
print "p-value for residuals2 being heteroskedastic", pvalue2
print slr2.summary()
print slr2.get_robustcov_results().summary()
# Load pricing data for an asset
start = '2014-01-01'
end = '2015-01-01'
y = get_pricing('DAL', fields='price', start_date=start, end_date=end)
x = np.arange(len(y))
# Regress pricing data against time
model = regression.linear_model.OLS(y, sm.add_constant(x)).fit()
# Construct the fit line
prediction = model.params[0] + model.params[1]*x
# Plot pricing data and regression line
plt.plot(x,y)
plt.plot(x, prediction, color='r')
plt.legend(['DAL Price', 'Regression Line'])
plt.xlabel('Time')
plt.ylabel('Price')
# Print summary of regression results
model.summary()
_, prices_qstats, prices_qstat_pvalues = statsmodels.tsa.stattools.acf(y, qstat=True)
_, residuals_qstats, residuals_qstat_pvalues = statsmodels.tsa.stattools.acf(y-prediction, qstat=True)
print 'Prices autocorrelation p-values', prices_qstat_pvalues
print 'Residuals autocorrelation p-values', residuals_qstat_pvalues
_, jb_pvalue, _, _ = statsmodels.stats.stattools.jarque_bera(y-prediction)
print 'Jarque-Bera p-value that residuals are normally distributed', jb_pvalue
from math import sqrt
# Find the covariance matrix of the coefficients
cov_mat = stats.sandwich_covariance.cov_hac(model)
# Print the standard errors of each coefficient from the original model and from the adjustment
print 'Old standard errors:', model.bse[0], model.bse[1]
print 'Adjusted standard errors:', sqrt(cov_mat[0,0]), sqrt(cov_mat[1,1])
# Load pricing data for asset and two market indices
start = '2014-01-01'
end = '2015-01-01'
b1 = get_pricing('SPY', fields='price', start_date=start, end_date=end)
b2 = get_pricing('MDY', fields='price', start_date=start, end_date=end)
a = get_pricing('HPQ', fields='price', start_date=start, end_date=end)
# Run multiple linear regression
mlr = regression.linear_model.OLS(a, sm.add_constant(np.column_stack((b1,b2)))).fit()
# Construct fit curve using dependent variables and estimated coefficients
mlr_prediction = mlr.params[0] + mlr.params[1]*b1 + mlr.params[2]*b2
# Print regression statistics
print 'R-squared:', mlr.rsquared_adj
print 't-statistics of coefficients:\n', mlr.tvalues
# Plot asset and model
a.plot()
mlr_prediction.plot()
plt.legend(['Asset', 'Model']);
plt.ylabel('Price')
# Perform linear regression
slr = regression.linear_model.OLS(a, sm.add_constant(b1)).fit()
slr_prediction = slr.params[0] + slr.params[1]*b1
# Print fit statistics
print 'R-squared:', slr.rsquared_adj
print 't-statistics of coefficients:\n', slr.tvalues
# Plot asset and model
a.plot()
slr_prediction.plot()
plt.ylabel('Price')
plt.legend(['Asset', 'Model']);
from scipy.stats import pearsonr
# Construct Anscombe's arrays
x1 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
x2 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
x3 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]
# Perform linear regressions on the datasets
slr1 = regression.linear_model.OLS(y1, sm.add_constant(x1)).fit()
slr2 = regression.linear_model.OLS(y2, sm.add_constant(x2)).fit()
slr3 = regression.linear_model.OLS(y3, sm.add_constant(x3)).fit()
slr4 = regression.linear_model.OLS(y4, sm.add_constant(x4)).fit()
# Print regression coefficients, Pearson r, and R-squared for the 4 datasets
print 'Cofficients:', slr1.params, slr2.params, slr3.params, slr4.params
print 'Pearson r:', pearsonr(x1, y1)[0], pearsonr(x2, y2)[0], pearsonr(x3, y3)[0], pearsonr(x4, y4)[0]
print 'R-squared:', slr1.rsquared, slr2.rsquared, slr3.rsquared, slr4.rsquared
# Plot the 4 datasets with their regression lines
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2)
xs = np.arange(20)
ax1.plot(slr1.params[0] + slr1.params[1]*xs, 'r')
ax1.scatter(x1, y1)
ax1.set_xlabel('x1')
ax1.set_ylabel('y1')
ax2.plot(slr2.params[0] + slr2.params[1]*xs, 'r')
ax2.scatter(x2, y2)
ax2.set_xlabel('x2')
ax2.set_ylabel('y2')
ax3.plot(slr3.params[0] + slr3.params[1]*xs, 'r')
ax3.scatter(x3, y3)
ax3.set_xlabel('x3')
ax3.set_ylabel('y3')
ax4.plot(slr4.params[0] + slr4.params[1]*xs, 'r')
ax4.scatter(x4,y4)
ax4.set_xlabel('x4')
ax4.set_ylabel('y4');
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Heteroskedasticity
Step2: Testing for Heteroskedasticity
Step3: Correcting for Heteroskedasticity
Step4: Serial correlation of errors
Step5: Testing for Autocorrelation
Step6: Newey-West
Step7: Multicollinearity
Step8: Example
|
15,266 | <ASSISTANT_TASK:>
Python Code:
%%capture
!python -m pip install abraia
import os
if not os.getenv('ABRAIA_KEY'):
#@markdown <a href="https://abraia.me/console/settings" target="_blank">Get your ABRAIA_KEY</a>
abraia_key = '' #@param {type: "string"}
%env ABRAIA_KEY=$abraia_key
from abraia import Abraia
abraia = Abraia()
%%capture
#@markdown <a href="https://abraia.me/console/gallery" target="_blank">Upload and manage your images</a>
url = 'http://upload.wikimedia.org/wikipedia/commons/1/13/Usain_Bolt_16082009_Berlin.JPG'
abraia.upload_file(url, 'usain.jpg')
import pandas as pd
folder = ''
files, folders = abraia.list_files(folder)
pd.DataFrame(files)
import matplotlib.pyplot as plt
img = abraia.load_image('usain.jpg')
plt.figure()
plt.title('Image')
plt.imshow(img)
plt.axis('off')
plt.show()
import json
meta = abraia.load_metadata('usain.jpg')
abraia.save_file('usain.json', json.dumps(meta))
meta
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: List files
Step2: Plot an image
Step3: Image metadata
|
15,267 | <ASSISTANT_TASK:>
Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-submission')
MY_LAST_NAME = "ki" # e.gl MY_LAST_NAME = "schulz"
#-------------------------------------------------
from dkrz_forms import form_handler, form_widgets, checks
form_info = form_widgets.check_pwd(MY_LAST_NAME)
sfg = form_handler.init_form(form_info)
sf = sfg.sub.entity_out.form_info
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
sf.institute_id = "..." # example: sf.institute_id = "AWI"
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transferred to the data center ... "
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub.valid_submission = res['valid_submission']
form_handler.DictTable(res)
form_handler.save_form(sf,"..my comment..") # edit my comment info
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start submission procedure
Step2: please provide information on the contact person for this CORDEX data submission request
Step3: Requested general information
Step4: institute_id
Step5: model_id
Step6: experiment_id and time_period
Step7: Example file name
Step8: information on the grid_mapping
Step9: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
Step10: Please provide information on quality check performed on the data you plan to submit
Step11: Terms of use
Step12: Information on directory structure and data access path
Step13: Give the path where the data reside, for example
Step14: Exclude variable list
Step15: Uniqueness of tracking_id and creation_date
Step16: Variable list
Step17: Check your submission form
Step18: Save your form
Step19: officially submit your form
|
15,268 | <ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 500, activation='ReLU')
net = tflearn.fully_connected(net, 250, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
|
15,269 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
modelname = 'swiex1'
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
nlay = 1
nrow = 1
ncol = 50
delr = 5.
delc = 1.
nper, perlen, nstp = 1, 400., 200
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol,
delr=delr, delc=delc,
top=0, botm=-40.0,
steady=True, nper=nper, perlen=perlen, nstp=nstp)
ibound = np.ones((nrow, ncol))
ibound[0, -1] = -1
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=0.0)
lpf = flopy.modflow.ModflowLpf(ml, hk=2., laytyp=0, layavg=0)
wel = flopy.modflow.ModflowWel(ml, stress_period_data = {0:[[0, 0, 0, 1]]} )
spd = {}
for istp in range(49, nstp+1, 50):
spd[(0, istp)] = ['save head', 'print budget']
spd[(0, istp+1)] = []
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml)
z = np.zeros((nrow, ncol), np.float)
z[0, 16:24] = np.arange(-2.5, -40, -5)
z[0, 24:] = -40
z = [z] # zeta needs to be a list (one array per surface)
isource = np.ones((nrow, ncol), np.int)
isource[0, 0] = 2
#
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1,
toeslope=0.2, tipslope=0.2, nu=[0, 0.025],
zeta=z, ssz=0.2, isource=isource,
nsolver=1, iswizt=55)
ml.write_input()
ml.run_model(silent=True)
# read model heads
hfile = flopy.utils.HeadFile(os.path.join(ml.model_ws, modelname+'.hds'))
head = hfile.get_alldata()
# read model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
plt.figure(figsize=(16,6))
# define x-values of xcells and plot interface
x = np.arange(0, ncol*delr, delr) + delr/2.
label = ['SWI2','_','_','_'] # labels with an underscore are not added to legend
for i in range(4):
zt = np.ma.masked_outside(zeta[i,0,0,:], -39.99999, -0.00001)
plt.plot(x, zt, 'r-', lw=1,
zorder=10, label=label[i])
# Data for the Wilson - Sa da Costa solution
k = 2.0
n = 0.2
nu = 0.025
H = 40.0
tzero = H * n / (k * nu) / 4.0
Ltoe = np.zeros(4)
v = 0.125
t = np.arange(100, 500, 100)
label = ['Wilson and Sa Da Costa (1982)','_','_','_'] # labels with an underscore are not added to legend
for i in range(4):
Ltoe[i] = H * np.sqrt(k * nu * (t[i] + tzero) / n / H)
plt.plot([100 - Ltoe[i] + v * t[i], 100 + Ltoe[i] + v * t[i]], [0, -40], '0.75',
lw=8, zorder=0, label=label[i])
# Scale figure and add legend
plt.axis('scaled')
plt.xlim(0, 250)
plt.ylim(-40, 0)
plt.legend(loc='best');
fig = plt.figure(figsize=(16, 3))
ax = fig.add_subplot(1, 1, 1)
modelxsect = flopy.plot.ModelCrossSection(model=ml, line={'Row': 0})
label = ['SWI2','_','_','_']
for k in range(zeta.shape[0]):
modelxsect.plot_surface(zeta[k, :, :, :], masked_values=[0, -40.],
color='red', lw=1, label=label[k])
linecollection = modelxsect.plot_grid()
ax.set_title('ModelCrossSection.plot_surface()')
# Data for the Wilson - Sa da Costa solution
k = 2.0
n = 0.2
nu = 0.025
H = 40.0
tzero = H * n / (k * nu) / 4.0
Ltoe = np.zeros(4)
v = 0.125
t = np.arange(100, 500, 100)
label = ['Wilson and Sa Da Costa (1982)','_','_','_'] # labels with an underscore are not added to legend
for i in range(4):
Ltoe[i] = H * np.sqrt(k * nu * (t[i] + tzero) / n / H)
ax.plot([100 - Ltoe[i] + v * t[i], 100 + Ltoe[i] + v * t[i]], [0, -40], 'blue',
lw=1, zorder=0, label=label[i])
# Scale figure and add legend
ax.axis('scaled')
ax.set_xlim(0, 250)
ax.set_ylim(-40, 0)
ax.legend(loc='best');
fig = plt.figure(figsize=(16, 3))
ax = fig.add_subplot(1, 1, 1)
modelxsect = flopy.plot.ModelCrossSection(model=ml, line={'Row': 0})
modelxsect.plot_fill_between(zeta[3, :, :, :])
linecollection = modelxsect.plot_grid()
ax.set_title('ModelCrossSection.plot_fill_between()');
X, Y = np.meshgrid(x, [0, -40])
zc = flopy.plot.SwiConcentration(model=ml)
conc = zc.calc_conc(zeta={0:zeta[3,:,:,:]}) / 0.025
print(conc[0, 0, :])
v = np.vstack((conc[0, 0, :], conc[0, 0, :]))
plt.imshow(v, extent=[0, 250, -40, 0], cmap='Reds')
cb = plt.colorbar(orientation='horizontal')
cb.set_label('percent seawater');
plt.contour(X, Y, v, [0.25, 0.5, 0.75], linewidths=[2, 1.5, 1], colors='black');
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model.
Step2: Define the number of layers, rows and columns, and the cell size along the rows (delr) and along the columns (delc). Then create a discretization file. Specify the top and bottom of the aquifer. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There is one stress period with a length of 400 days and 200 steps (so one step is 2 days).
Step3: All cells are active (ibound=1), while the last cell is fixed head (ibound=-1). The starting values of the head are not important, as the heads are computed every time with a steady run.
Step4: Define the hydraulic conductivity. The aquifer is confined (laytyp=0) and the intercell hydraulic conductivity is the harmonic mean (layavg=0).
Step5: Inflow on the right side of the model is 1 m$^3$/d (layer 0, row 0, column 0, discharge 1)
Step6: Define the output control to save heads and interface every 50 steps, and define the pcg solver with default arguments.
Step7: The initial interface is straight. The isource is one everywhere (inflow and outflow is fresh (zone 1)) except for the first cell (index=0) which has saltwater inflow (zone 2).
Step8: Write the MODFLOW input files and run the model
Step9: Load the head and zeta data from the file
Step10: Make a graph and add the solution of Wilson and Sa da Costa
Step11: Use ModelCrossSection plotting class and plot_surface() method to plot zeta surfaces.
Step12: Use ModelCrossSection plotting class and plot_fill_between() method to fill between zeta surfaces.
Step13: Convert zeta surfaces to relative seawater concentrations
|
15,270 | <ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import numpy
mat = numpy.random.random((15, 15))
mat = mat + mat.T
adja = (mat >= 1.4).astype(int)
for i in range(adja.shape[0]):
adja[i ,i] = 0
adja
import networkx
import matplotlib.pyplot as plt
fix, ax = plt.subplots(1, 1,figsize=(4,4))
G = networkx.from_numpy_matrix(adja)
networkx.draw(G, with_labels=True, ax=ax)
degres = adja.sum(axis=1)
degres
distrib = {}
for d in degres:
if d in distrib:
distrib[d] += 1
else:
distrib[d] = 1
distrib
adjan = adja.copy()
conne = numpy.zeros(adja.shape)
for i in range(1, adja.shape[0]):
conne += adjan
adjan = adjan @ adja
(conne > 0).astype(int)
mat = numpy.random.random((15, 15))
mat = mat + mat.T
adja = (mat >= 1.45).astype(int)
for i in range(adja.shape[0]):
adja[i ,i] = 0
fix, ax = plt.subplots(1, 1, figsize=(4, 4))
G = networkx.from_numpy_matrix(adja)
networkx.draw(G, with_labels=True, ax=ax)
C = numpy.arange(adja.shape[0])
maj = 1
while maj > 0:
maj = 0
for i in range(adja.shape[0]):
for j in range(i + 1, adja.shape[1]):
if adja[i, j] > 0 and C[i] != C[j]:
maj += 1
C[i] = C[j] = min(C[i], C[j])
C
set(C)
print("Il y a %r composantes connexes." % len(set(C)))
def distribution_to_degree_list(hist):
N = int(hist.sum())
deg = numpy.zeros(N, dtype=numpy.int32)
p = 0
for i, nh in enumerate(hist):
for n in range(nh):
deg[p] = i
p += 1
return deg
dist = numpy.array(numpy.array([0, 4, 3, 2]))
distribution_to_degree_list(dist)
import warnings
from tqdm import tqdm # pour visualiser la progression de l'algorithme
def random_graph(distribution_degree):
degrees = distribution_to_degree_list(distribution_degree)
current = numpy.zeros(degrees.shape[0], dtype=numpy.int32)
expected = degrees.sum()
adja = numpy.zeros((degrees.shape[0], degrees.shape[0]), dtype=numpy.int32)
nb = 0
# tqdm: une boucle qui affiche l'avancement dans un notebook
# on évite la boucle infinie en limitant le nombre d'itération
loop = tqdm(range(expected * 5))
for n_iter in loop:
loop.set_description("sum=%r expected=%r" % (nb, expected))
nodes = [i for i, (c, d) in enumerate(zip(current, degrees))
if c < d]
if len(nodes) == 1:
i, j = 0, 0
elif len(nodes) == 2:
di, dj = 0, 0
i, j = nodes[di], nodes[dj]
else:
di, dj = numpy.random.randint(0, len(nodes), 2)
i, j = nodes[di], nodes[dj]
if i == j or adja[i, j] == 1:
# arc déjà créé ou impossible
continue
current[i] += 1
current[j] += 1
adja[i, j] = 1
adja[j, i] = 1
nb += 2
if nb >= expected:
# Tous les noeuds ont le degré souhaité.
loop.set_description("sum=%r expected=%r" % (nb, expected))
break
if nb < expected:
warnings.warn("Graphe incomplet\ndegrees=%r\ncurrent=%r" % (degrees, current))
return adja
adja = random_graph(numpy.array([0, 5, 3, 2]))
adja
adja = random_graph(numpy.array([0, 4, 3, 2]))
adja
adja.sum(axis=1)
from collections import Counter
Counter(adja.sum(axis=1))
def random_graph_remove(distribution_degree):
degrees = distribution_to_degree_list(distribution_degree)
current = numpy.zeros(degrees.shape[0], dtype=numpy.int32)
expected = degrees.sum()
adja = numpy.zeros((degrees.shape[0], degrees.shape[0]), dtype=numpy.int32)
nb = 0
loop = tqdm(range(expected * 5))
last_added = 0
n_removed = 0
edges = {i: [] for i in range(current.shape[0])}
for n_iter in loop:
loop.set_description("sum=%r expected=%r n_removed=%r" % (nb, expected, n_removed))
nodes = [i for i, (c, d) in enumerate(zip(current, degrees))
if c < d]
if len(nodes) > 1:
di, dj = numpy.random.randint(0, len(nodes), 2)
i, j = nodes[di], nodes[dj]
else:
i = j = 0
if i == j or adja[i, j] == 1:
if last_added + 5 < n_iter:
# on supprime un arc
nodes = [i for i, c in enumerate(current) if c > 0]
di = (0 if len(nodes) <= 1 else
numpy.random.randint(0, len(nodes)))
i = nodes[di]
dh = (0 if len(edges[i]) <= 1 else
numpy.random.randint(0, len(edges[i])))
j = edges[i][dh]
adja[i, j] = 0
adja[j, i] = 0
edges[i].remove(j)
edges[j].remove(i)
current[i] -= 1
current[j] -= 1
nb -= 2
n_removed += 2
continue
current[i] += 1
current[j] += 1
adja[i, j] = 1
adja[j, i] = 1
nb += 2
last_added = n_iter
edges[i].append(j)
edges[j].append(i)
if nb >= expected:
# Tous les noeuds ont le degré souhaité.
loop.set_description("sum=%r expected=%r n_removed=%r" % (nb, expected, n_removed))
break
if nb < expected:
warnings.warn("Graphe incomplet\ndegrees=%r\ncurrent=%r" % (degrees, current))
return adja
adja = random_graph_remove(numpy.array([0, 4, 3, 2]))
adja
Counter(adja.sum(axis=1))
def distribution_degree_realisable(distribution):
degrees = -numpy.array(sorted(-distribution_to_degree_list(distribution)))
if degrees.sum() % 2 != 0:
return False
sumdi = 0
for i in range(degrees.shape[0] - 1):
sumdi += degrees[i]
mindi = numpy.minimum(degrees[i+1:], i + 1).sum()
if sumdi >= i * (i + 1) + mindi:
return False
return True
distribution_degree_realisable(numpy.array([0, 2, 0, 0, 0, 0, 0, 0, 0, 1]))
distribution_degree_realisable(numpy.array([0, 4, 3, 2]))
fix, ax = plt.subplots(1, 1, figsize=(4, 4))
G = networkx.from_numpy_matrix(adja)
networkx.draw(G, with_labels=True, ax=ax)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphe aléatoire - matrice d'adjacence aléatoire
Step2: En le visualisant...
Step3: Vocabulaire lié aux graphes
Step4: D'après les remarques précédentes, $\sum A_{pq} > 0$ s'il existe un chemin reliant les noeuds $p, q$, donc s'il font partie de la même composante connexe. Et 0 si les deux noeuds font partie de deux composantes connexes distinctes.
Step5: Génération d'un graphe aléatoire
Step6: Etape 2
Step7: On remarque que la somme des degrés ne peut être impaire car chaque arc est connecté à deux noeuds.
Step8: On regarde la distribution des degrés
Step9: L'algorithme ne semble pas aboutir à un graphe qui répond au critère souhaité. Il existe deux cas pour lesquels l'algorithme reste bloqué. On note $A_t$ l'ensemble des noeuds à l'itération t dont les degrés sont inférieurs au degré souhaité.
Step10: Il est possible que cet algorithme aboutisse au résultat souhaité même avec beaucoup de temps. Est-ce la stratégie ou le fait que la distribution des noeuds ne soit pas réalisable.
|
15,271 | <ASSISTANT_TASK:>
Python Code:
%load_ext rpy2.ipython
%%R
mean = 1500
sd = 300
d = 1800
(d-mean)/sd
%%R
mean = 1500
sd = 300
point = 2100
LT = T
pnorm(point,mean=mean,sd=sd,lower.tail=LT)
%%R
mean = 1500
sd = 300
percentile = 0.4
LT = T
qnorm(percentile,mean=mean,sd=sd,lower.tail=LT)
%%R
mean = 70
sd = 3.3
lower = 69
upper = 74
LT = T
# IN RANGE
# 1 - pnorm(upper,mean=mean,sd=sd,lower.tail=!LT) - pnorm(lower,mean=mean,sd=sd,lower.tail=LT)
# OUT RANGE
# pnorm(upper,mean=mean,sd=sd,lower.tail=!LT) + pnorm(lower,mean=mean,sd=sd,lower.tail=LT)
%%R
k = 8
n = 10
p = 0.3
dbinom(k,size=n,p=p)
%%R
k_success = 8
trials = 8
choose(trials,k_success)
import numpy as np
n = 40
p = 0.35
sd = round(np.sqrt(n*p*(1-p)),2)
print 'Expected Value of point success is', n*p
print 'standard deviation is', sd
print 'variance is' , sd**2
import numpy as np
n = 2500
p = 0.7
sd = round(np.sqrt(n*p*(1-p)),2)
print 'Expected Value of point success is', n*p
print 'standard deviation is', sd
print 'variance is' , sd**2
p = 0.01
n = 300
assert (n*p >= 10 and n*(1-p) >= 10), 'Not large enough to take advantage of normal approximation'
%%R
val = 59
mu = 80
sd = 8
tail = T
# Add 0.5 to val if observe small range of observation
ifsmall = (-0.5+!tail)
pnorm (val,mean=mu,sd=sd,lower.tail=tail)
%%R
#95% = 1.96
#99% = 2.58
n = 50
mu = 3.2
s = 1.74
# z = 1.96
CL = 0.95
z = round(qnorm((1-CL)/2,lower.tail=F),digits=2)
SE = s/sqrt(n)
ME = z*SE
c(mu-ME,mu+ME)
%%R
CL = 0.9
ME = 4
sd = 18
#CONFIDENCE LEVEL
#95% = 1.96
#99% = 2.58
#90% = 1.65
#####
z_star = round(qnorm((1-CL)/2,lower.tail=F),digits=2)
#z_star = 1.65
((z_star*sd)/ME)**2
%%R
CL = 0.9
ME = 4
sd = 18
#CONFIDENCE LEVEL
#95% = 1.96
#99% = 2.58
#90% = 1.65
#####
z_star = round(qnorm((1-CL)/2,lower.tail=F),digits=2)
#z_star = 1.65
((z_star*sd)/ME)**2
%%R
xbar = 118.2
mu = 100
sd = 6.5
n = 36
SE = round(sd/sqrt(n),digits=2)
pnorm(xbar, mean=mu,sd=SE, lower.tail=xbar < mu) * 2
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Associated means the variables are dependent. Disjoint events can't happen at the same time
Step2: To calculate the percentiles
Step3: Pnorm ranges
Step4: The standard normal distribution has mean 0 and sd 1; it is the default for pnorm and qnorm in R.
Step5: Requirements
Step6: Screenshot taken from Coursera video, 12
Step7: To check whether the data is sufficiently large,
Step8: For the binomial distribution, and for taking the probability of more than one event
Step9: Because there's no exact value in the binomial, we have to adjust by 0.5 (continuity correction).
Step10: Sample Size for ME
Step11: Hypothesis Testing
|
15,272 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose values
        are the probabilities of those characters.
    """
    # Count each character, then normalise by the string length so the values
    # are probabilities (this also makes the asserts below pass).
    n = len(s)
    counts = {}
    for c in s:
        counts[c] = counts.get(c, 0) + 1
    probs = {c: counts[c] / n for c in counts}
    return probs
print(char_probs('hibebe'))
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    # Shannon entropy in bits; zero-probability terms contribute nothing.
    return -sum(p * np.log2(p) for p in d.values() if p > 0)
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
# YOUR CODE HERE
raise NotImplementedError()
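# Editor's sketch (hedged) of one possible answer to the cell above:
# interact(lambda s: print(entropy(char_probs(s))), s='type a string here')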
assert True # use this for grading the pi digits histogram
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Character counting and entropy
Step4: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as $H = -\sum_i P_i \log_2(P_i)$.
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
|
15,273 | <ASSISTANT_TASK:>
Python Code:
import urllib.request
urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/mlai.py','mlai.py')
urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/teaching_plots.py','teaching_plots.py')
urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/gp_tutorial.py','gp_tutorial.py')
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22})
%pip install --upgrade git+https://github.com/sods/ods
import pods
import numpy as np
import pods
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
offset = y.mean()
scale = np.sqrt(y.var())
import matplotlib.pyplot as plt
import teaching_plots as plot
import mlai
xlim = (1875,2030)
ylim = (2.5, 6.5)
yhat = (y-offset)/scale
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
_ = ax.plot(x, y, 'r.',markersize=10)
ax.set_xlabel('year', fontsize=20)
ax.set_ylabel('pace min/km', fontsize=20)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(filename='olympic-marathon.svg',
directory='./datasets')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <!--setupplotcode{import seaborn as sns
Step2: from the command prompt where you can access your python installation.
Step3: Olympic Marathon Data
|
15,274 | <ASSISTANT_TASK:>
Python Code:
from default import *
import os
lexsub = LexSub(os.path.join('data','glove.6B.100d.magnitude'))
output = []
with open(os.path.join('data','input','dev.txt')) as f:
for line in f:
fields = line.strip().split('\t')
output.append(" ".join(lexsub.substitutes(int(fields[0].strip()), fields[1].strip().split())))
print("\n".join(output[:10]))
from lexsub_check import precision
with open(os.path.join('data','reference','dev.out'), 'rt') as refh:
ref_data = [str(x).strip() for x in refh.read().splitlines()]
print("Score={:.2f}".format(100*precision(ref_data, output)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the default solution on dev
Step2: Evaluate the default output
|
15,275 | <ASSISTANT_TASK:>
Python Code:
# Requirements:
# - sox + soundfile + our vowel_discri package
# - ABXpy.distances (could be made independent from ABXpy)
import soundfile
import os
import os.path as path
import pandas as pd
import numpy as np
import seaborn
from ast import literal_eval as make_tuple
import wave
import subprocess
import shutil
# To get vowel_discri from github repo, run:
# export PYTHONPATH=:/path/2/vowel-discri/repository:`$PYTHONPATH`
# in the terminal before launching jupyter notebook
import vowel_discri.mfcc as mfcc_lib
%matplotlib inline
# load raw data, this should be changed when we make the repository public
# so that the script can be used by anybody to reproduce our results (doing wget to download the files directly
# from the OSF server)
root = '/Users/admin/Documents/PhD/Data/vowel_discri/osf'
wav_dir = path.join(root, 'Stimuli', 'final_stimuli')
xp_designs_path = path.join('/Users/admin/Documents/PhD/Data/vowel_discri', 'xp_designs.txt')
filename = {'ES': 'InPhonDb_native_nonnative - vowels.csv',
'F1F2_natural': path.join('F1F2_measures', 'formants_natural.txt'),
'F1F2_synthetic': path.join('F1F2_measures', 'formants_synthetic.txt'),
'stim_groups': 'Stimulus name mapping - mapping.csv'}
sep = {'ES': ",", 'F1F2_natural': "\t",'F1F2_synthetic': "\t", 'stim_groups': ","}
data = {}
for key in filename:
data[key] = pd.read_csv(path.join(root, filename[key]), sep=sep[key])
def add_kuhl82_es1_design(xp_designs):
# Kuhl82 first effect size has a special trial structure
es_id = 7
study_id = 'Kuhl1982'
bases = [('kuhl82_p1_a_monotone.wav',), ('kuhl82_p1_a_rise_fall.wav',)]
deviants = [('kuhl82_p1_i_monotone.wav',), ('kuhl82_p1_i_rise_fall.wav',)]
counterbalanced = True
trial = 1
for base, deviant in zip(bases, deviants):
xp_designs['study_id'].append(study_id)
xp_designs['ES_id'].append(es_id)
xp_designs['trial_id'].append(trial)
xp_designs['base_stims'].append(base)
xp_designs['deviant_stims'].append(deviant)
trial = trial+1
if counterbalanced:
xp_designs['study_id'].append(study_id)
xp_designs['ES_id'].append(es_id)
xp_designs['trial_id'].append(trial)
xp_designs['base_stims'].append(deviant)
xp_designs['deviant_stims'].append(base)
trial = trial+1
return xp_designs
# collect effect size ids
pair2ESid = {}
pair2studyid = {}
for pair, df in data['ES'].groupby('stimulus_pair'):
if not(pair.strip == ''):
pair2ESid[pair] = list(df['effect_ID'])
pair2studyid[pair] = list(df['study_ID'])
# get list of stimuli wavs
wavs = os.listdir(wav_dir)
wavs = [wav for wav in wavs if wav[-4:] == '.wav']
# function to determine if a stim is base or deviant
# this is ad hoc, as stim could play different roles
# in different studies, but works here in combination
# with 'not_counterbalanced' below
is_base = {
'liu16_p1': lambda name: '_i_' in name,
'mazuka13_p1': lambda name: '_e_' in name,
'mazuka13_p2': lambda name: '_u_' in name,
'mazuka13_p3': lambda name: '_ue_' in name,
'mugitani09_p1': lambda name: '_aa_' in name,
'mugitani09_p2': lambda name: '_aa_' in name,
'phan08_p1': lambda name: '_aimw_' in name,
'phan08_p2': lambda name: '_ais_' in name,
'phan08_p3': lambda name: '_aimw_' in name,
'phan08_p4': lambda name: '_V_' in name,
'polka96_p1': lambda name: '_u_' in name,
'sebastian09_p1': lambda name: '_o_' in name,
'sebastian09_p2': lambda name: '_e_' in name,
'sundara11_p1': lambda name: '_eh_' in name,
'tsuji17_p1': lambda name: '_ei_' in name,
'tsuji17_p2': lambda name: '_oi_' in name,
'tsuji17_p3': lambda name: '_a_' in name,
'versteegh15_p1': lambda name: '_ei_' in name,
'figueras10_p1': lambda name: '_o_' in name,
'cardillo10_p1': lambda name: '_u' in name,
'kuhl79_p1': lambda name: '_a_' in name,
'kuhl82_p1': lambda name: '_a_' in name,
'swoboda76_p1': lambda name: '_i_' in name,
'tsao04_p1': lambda name: '_u' in name,
'marean92_p1': lambda name: '_a_' in name}
for i in range(1, 10):
is_base['eilers84_p{}'.format(i)] = lambda name: '300' in name
for i in range(1, 5):
is_base['trainor02_p{}'.format(i)] = lambda name: '_i_' in name
# list of effect size for which base/deviant role was counterbalanced
ll = ['tsao04_p1', 'cardillo10_p1', 'marean92_p1'] +\
['eilers84_p{}'.format(i) for i in range(1, 10)]
not_counterbalanced = [es_id for pair in ll for es_id in pair2ESid[pair]]
# instantiate xp_design structure, study_ID is included for human readability
cols = ['study_id', 'ES_id', 'trial_id', 'base_stims', 'deviant_stims']
xp_designs = {col : [] for col in cols}
for pair in pair2ESid:
if pair == 'sebastian09_p2':
# we ignore this one because we are not sure what the experimental stimuli were
continue
pair_wavs = [wav for wav in wavs if wav[:len(pair)] == pair]
bases = tuple([wav for wav in pair_wavs if is_base[pair](wav)])
deviants = tuple([wav for wav in pair_wavs if not(is_base[pair](wav))])
for ES_id, study_id in zip(pair2ESid[pair], pair2studyid[pair]):
if ES_id == 7:
# Kuhl82 first effect size has a special trial structure
assert pair == 'kuhl82_p1'
xp_designs = add_kuhl82_es1_design(xp_designs)
else:
xp_designs['study_id'].append(study_id)
xp_designs['ES_id'].append(ES_id)
xp_designs['trial_id'].append(1)
xp_designs['base_stims'].append(bases)
xp_designs['deviant_stims'].append(deviants)
if not(ES_id in not_counterbalanced):
# counterbalance base/deviant role
xp_designs['study_id'].append(study_id)
xp_designs['ES_id'].append(ES_id)
xp_designs['trial_id'].append(2)
xp_designs['base_stims'].append(deviants)
xp_designs['deviant_stims'].append(bases)
xp_designs = pd.DataFrame(xp_designs)
# make sure column order is nice
xp_designs = xp_designs[cols]
# make sure row order is nice
xp_designs = xp_designs.sort_values(["study_id", "ES_id"])
xp_designs.to_csv(xp_designs_path, sep=" ")
def read_xp_designs(xp_designs_path):
xp_designs = pd.read_csv(xp_designs_path, sep=" ")
del xp_designs['Unnamed: 0']
xp_designs['base_stims'] = [make_tuple(e) for e in xp_designs['base_stims']]
xp_designs['deviant_stims'] = [make_tuple(e) for e in xp_designs['deviant_stims']]
return xp_designs
xp_designs = read_xp_designs(xp_designs_path)
std_wav_dir = path.join(root, 'Stimuli', 'standardized_final_stimuli')
standardize_fs = True # If true resample all wavefiles to standard_fs Hz sampling rate
standard_fs = 16000
base_wavs = [f for e in xp_designs['base_stims'] for f in e]
deviant_wavs = [f for e in xp_designs['deviant_stims'] for f in e]
wavs = base_wavs + deviant_wavs
for wav in wavs:
fpath = path.join(wav_dir, wav)
std_fpath = path.join(std_wav_dir, wav)
tmp_out = os.path.join(std_wav_dir, '__tmp__.wav') # to avoid shennanigans with self-replacement of a file
with wave.open(fpath, 'rb') as fh:
params = fh.getparams()
assert params.comptype == 'NONE', wav
assert params.compname == 'not compressed', wav
assert params.sampwidth == 2, wav # 2 bytes == 16 bits
try:
assert params.nchannels == 1, wav
except AssertionError:
assert params.nchannels == 2
# fuse channels with sox
out = subprocess.run(["sox", fpath, tmp_out, "channels", "1"])
assert out.returncode == 0, out
shutil.move(tmp_out, std_fpath)
fpath = std_fpath
if standardize_fs and params.framerate != standard_fs:
out = subprocess.run(["sox", fpath, "-r", str(standard_fs), tmp_out])
assert out.returncode == 0, out
shutil.move(tmp_out, std_fpath)
fpath = std_fpath
if fpath != std_fpath:
shutil.copy(fpath, std_fpath)
if path.exists(tmp_out):
os.remove(tmp_out)
# Get MFCCs for each wav
mfcc = {}
for wav in wavs:
fpath = os.path.join(std_wav_dir, wav)
data, fs = soundfile.read(fpath)
mfcc[wav] = mfcc_lib.mfcc(data, fs).T # take the transpose to be in dtw.dtw expected format
# Get F1F2 for each wav
def read_F1F2(filename, sep):
f1f2_n = pd.read_csv(path.join(root, filename['F1F2_natural']), sep=sep['F1F2_natural'])
f1f2_s = pd.read_csv(path.join(root, filename['F1F2_synthetic']), sep=sep['F1F2_synthetic'])
f1f2_all = pd.concat([f1f2_n, f1f2_s])
f1f2 = {}
missing_f1f2 = []
for wav in wavs:
line = f1f2_all[f1f2_all['Filename'] == wav[:-4]]
if len(line) == 0:
missing_f1f2.append(wav)
else:
assert len(line) == 1
f1f2[wav] = np.array([line.f1early.iloc[0], line.f2early.iloc[0],
line.f1mid.iloc[0], line.f2mid.iloc[0],
line.f1late.iloc[0], line.f2late.iloc[0]])
return f1f2, missing_f1f2
f1f2, missing_f1f2 = read_F1F2(filename, sep)
import ABXpy.distances.metrics.dtw as dtw
# cosine in ABXpy is actually the angular distance
import ABXpy.distances.metrics.cosine as cos
def dtw_cosine(x, y):
return dtw.dtw(x, y, cos.cosine_distance, normalized=True)
def rms_euclidean(x, y):
d = np.sum((x[:2]-y[:2])**2) + \
np.sum((x[2:4]-y[2:4])**2) + \
np.sum((x[4:]-y[4:])**2)
d = np.sqrt(d/3.)
return d
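# simulate_trial returns one discriminability score per trial: the mean across-category
# (base vs deviant) squared distance minus the mean within-category squared distance,
# floored at zero and square-rooted.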
def simulate_trial(df, dis, feats):
assert len(df) == 1, df
xs = df['base_stims'].iloc[0]
ys = df['deviant_stims'].iloc[0]
# assumes counterbalancing here, or at least ignore possible asymetries
# there is probably a more df-style way of doing this, possibly by renaming
# columns and doing an external merge
D_across = 0
for x in xs:
for y in ys:
D_across = D_across + dis(feats[x], feats[y])**2
D_within_x = 0
for x1 in xs:
for x2 in xs:
D_within_x = D_within_x + dis(feats[x1], feats[x2])**2
D_within_y = 0
for y1 in ys:
for y2 in ys:
D_within_y = D_within_y + dis(feats[y1], feats[y2])**2
m = len(xs)
n = len(ys)
D_across = D_across / float(m*n)
D_within = .5 * (D_within_x / float(m*m) + D_within_y / float(n*n))
D = np.sqrt(max(0, D_across-D_within))
return pd.Series({'D': D}, index=['D'])
def simulate_xp(xp_design, dis, feats):
by_trial = xp_design.groupby('trial_id').apply(lambda df: simulate_trial(df, dis, feats))
by_trial.reset_index(level=by_trial.index.names, inplace=True)
return by_trial.groupby('trial_id', as_index=False).mean()
def simulate_xps(xp_designs, dis, feats):
res = xp_designs.groupby(['study_id', 'ES_id']).apply(lambda df: simulate_xp(df, dis, feats))
res.reset_index(level=res.index.names, inplace=True)
return res
mfcc_regs = simulate_xps(xp_designs, dtw_cosine, mfcc)
# provisory f1f2_regs (exclusions based on missing_f1f2)
excluded = ['Eilers1984', 'Marean1992', 'Mazuka2013', 'Phan2008', 'Swoboda1976', 'Trainor2002', 'Tsuji2017',
'Versteegh2015']
xp_designs2 = xp_designs[[not(e in excluded) for e in xp_designs['study_id']]]
f1f2_regs = simulate_xps(xp_designs2, rms_euclidean, f1f2)
mfcc_regs_restricted = simulate_xps(xp_designs2, dtw_cosine, mfcc)
df = pd.DataFrame({'ES_id': data['ES']['effect_ID'], 'ES': data['ES']['g_calc']})
res_df = pd.merge(mfcc_regs, df, on='ES_id')
res_df_f1f2 = pd.merge(f1f2_regs, df, on='ES_id')
res_df_mfcc_restricted = pd.merge(mfcc_regs_restricted, df, on='ES_id')
del res_df_mfcc_restricted['level_2']
del res_df_f1f2['level_2']
# F1F2
g = seaborn.regplot(data=res_df_f1f2, x='D', y='ES')
g.axes.grid()
arr = np.column_stack([res_df_f1f2['D'], res_df_f1f2['ES']]).T
print(np.corrcoef(arr))
# MFCC (on same studies as available for F1F2)
g = seaborn.regplot(data=res_df_mfcc_restricted, x='D', y='ES')
g.axes.grid()
arr = np.column_stack([res_df_mfcc_restricted['D'], res_df_mfcc_restricted['ES']]).T
print(np.corrcoef(arr))
# Correlation MFCC vs F1F2 predictions
data_both = pd.merge(res_df_mfcc_restricted, res_df_f1f2, on=['study_id', 'ES_id', 'trial_id', 'ES'],
suffixes=['_MFCC', '_F1F2'])
assert len(data_both) == len(res_df_mfcc_restricted)
assert len(data_both) == len(res_df_f1f2)
g = seaborn.regplot(data=data_both, x='D_MFCC', y='D_F1F2')
g.axes.grid()
arr = np.column_stack([data_both['D_MFCC'], data_both['D_F1F2']]).T
print(np.corrcoef(arr))
# MFCC on all studies, but that include some pure duration contrasts we don't want to look at.
g = seaborn.regplot(data=res_df, x='D', y='ES')
g.axes.grid()
arr = np.column_stack([res_df['D'], res_df['ES']]).T
print(np.corrcoef(arr))
# MFCC on all studies but the ones with duration contrasts (not completely sure the filtering is perfect here...)
g = seaborn.regplot(data=res_df[(res_df['D']>=.005)], x='D', y='ES')
g.axes.grid()
arr = np.column_stack([res_df[res_df['D']>.005]['D'], res_df[res_df['D']>.005]['ES']]).T
print(np.corrcoef(arr))
# MFCC without duration contrasts and without the studies with the largest differences
g = seaborn.regplot(data=res_df[(res_df['D']<=.06) & (res_df['D']>=.005)], x='D', y='ES')
g.axes.grid()
arr = np.column_stack([res_df[(res_df['D']<=.06) & (res_df['D']>=.005)]['D'], res_df[(res_df['D']<=.06) & (res_df['D']>=.005)]['ES']]).T
print(np.corrcoef(arr))
with open(path.join(root, 'missing_f1f2.txt'), 'w') as fh:
fh.write(str(missing_f1f2))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load relevant data
Step2: Create XP designs
Step3: Standardize wavefiles
Step4: Get MFCC and F1F2 figure for each wavefile
Step5: Define dissimilarity measures
Step6: Simulate experiments
Step7: Assemble results
Step8: Quick look at the results and exploratory analyses
Step9: TODO
|
15,276 | <ASSISTANT_TASK:>
Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
import numpy as np
import hashlib
# Test functions
def hashstr(str1):
    """Implements the secure hash of a string"""
return hashlib.sha1(str1).hexdigest()
def test_arrayequal(x1, x2, err_msg, ok_msg='Test passed'):
    """
    Test if all elements in arrays x1 and x2 are the same item by item
    :param x1: First array for the comparison
    :param x2: Second array for the comparison
    :param err_msg: Display message if both arrays are not the same
    :param ok_msg: Display message if arrays are the same (optional)
    """
try:
np.testing.assert_array_equal(x1, x2)
print(ok_msg)
except:
print(err_msg)
def test_strequal(str1, str2, err_msg, ok_msg='Test passed'):
    """
    Test if str1 and str2 are the same string
    :param str1: First string for the comparison
    :param str2: Second string for the comparison
    :param err_msg: Display message if both strings are not the same
    :param ok_msg: Display message if strings are the same (optional)
    """
try:
np.testing.assert_string_equal(str1, str2)
print(ok_msg)
except:
print(err_msg)
def test_hashedequal(str1, str2, err_msg, ok_msg='Test passed'):
    """
    Test if hashed(str1) and str2 are the same string
    :param str1: First string for the comparison
        str1 will be hashed for the comparison
    :param str2: Second string for the comparison
    :param err_msg: Display message if both strings are not the same
    :param ok_msg: Display message if strings are the same (optional)
    """
try:
np.testing.assert_string_equal(hashstr(str1), str2)
print(ok_msg)
except:
print(err_msg)
x = [5, 4, 3, 4]
print(type(x[0]))
# Create a list of floats containing the same elements as in x
# x_f = list(map(<FILL IN>))
x_f = list(map(float, x))
test_arrayequal(x, x_f, 'Elements of both lists are not the same')
if ((type(x[-2])==int) & (type(x_f[-2])==float)):
print('Test passed')
else:
print('Type conversion incorrect')
# Numpy arrays can be created from numeric lists or using different numpy methods
y = np.arange(8)+1
x = np.array(x_f)
# Check the different data types involved
print('Variable x_f is of type', type(x_f))
print('Variable x is of type ', type(x))
print('Variable y is of type', type(y))
# Print the shapes of the numpy arrays
print('Variable y has dimension', y.shape)
print('Variable x has dimension', x.shape)
#Complete the following exercises
# Convert x into a variable x_matrix, of type `numpy.matrixlib.defmatrix.matrix` using command
# np.matrix(). The resulting matrix should be of dimensions 4x1
# x_matrix = <FILL IN>
x_matrix = np.matrix(x).T
# Convert x into a variable x_array, of type `ndarray`, and shape (4,1)
# x_array = <FILL IN>
x_array = x[:,np.newaxis]
# Reshape array y into a numpy array of shape (4,2) using command np.reshape()
# y = <FILL IN>
y = y.reshape((4,2))
test_strequal(str(type(x_matrix)), "<class 'numpy.matrixlib.defmatrix.matrix'>", 'x_matrix is not defined as a matrix')
test_hashedequal(x_matrix.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_matrix')
test_strequal(str(type(x_array)), "<class 'numpy.ndarray'>", 'x_array is not defined as numpy ndarray')
test_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array')
test_strequal(str(type(y)), "<class 'numpy.ndarray'>", 'y is not defined as a numpy ndarray')
test_hashedequal(y.tostring(), '0b61a85386775357e0710800497771a34fdc8ae5', 'Incorrect variable y')
print('Applying flatten() to matrix x_matrix (of type matrix)')
print('x_matrix.flatten():', x_matrix.flatten())
print('Its type:', type(x_matrix.flatten()))
print('Its dimensions:', x_matrix.flatten().shape)
print('\nApplying flatten() to matrix y (of type ndarray)')
print('y.flatten():', y.flatten())
print('Its type:', type(y.flatten()))
print('Its dimensions:', y.flatten().shape)
print('\nApplying tolist() to x_matrix (of type matrix) and to the 2D vector y (of type ndarray)')
print('x_matrix.tolist():', x_matrix.tolist())
print('y.tolist():', y.tolist())
# Try to run the following command on variable x_matrix, and check what happens
print(x_array**2)
print('Remember that the shape of x_array is', x_array.shape)
print('Remember that the shape of y is', y.shape)
# Complete the following exercises. You can print the partial results to visualize them
# Multiply the 2-D array `y` by 2
# y_by2 = <FILL IN>
y_by2 = y * 2
# Multiply each of the columns in `y` by the column vector x_array
# z_4_2 = <FILL IN>
z_4_2 = x_array * y
# Obtain the matrix product of the transpose of x_array and y
# x_by_y = <FILL IN>
x_by_y = x_array.T.dot(y)
# Repeat the previous calculation, this time using x_matrix (of type numpy matrix) instead of x_array
# Note that in this case you do not need to use method dot()
# x_by_y2 = <FILL IN>
x_by_y2 = x_matrix.T * y
# Multiply vector x_array by its transpose to obtain a 4 x 4 matrix
#x_4_4 = <FILL IN>
x_4_4 = x_array.dot(x_array.T)
# Multiply the transpose of vector x_array by vector x_array. The result is the squared-norm of the vector
#x_norm2 = <FILL IN>
x_norm2 = x_array.T.dot(x_array)
test_hashedequal(y_by2.tostring(),'1b54af8620657d5b8da424ca6be8d58b6627bf9a','Incorrect result for variable y_by2')
test_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2')
test_hashedequal(x_by_y.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y')
test_hashedequal(x_by_y2.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y2')
test_hashedequal(x_4_4.tostring(),'832c97cc2d69298287838350b0bae66deec58b03','Incorrect result for variable x_4_4')
test_hashedequal(x_norm2.tostring(),'33b80b953557002511474aa340441d5b0728bbaf','Incorrect result for variable x_norm2')
print(z_4_2.shape)
print(np.mean(z_4_2))
print(np.mean(z_4_2,axis=0))
print(np.mean(z_4_2,axis=1))
# Previous check that you are working with the right matrices
test_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2')
test_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array')
# Vertically stack matrix z_4_2 with itself
# ex1_res = <FILL IN>
ex1_res = np.vstack((z_4_2,z_4_2))
# Horizontally stack matrix z_4_2 and vector x_array
# ex2_res = <FILL IN>
ex2_res = np.hstack((z_4_2,x_array))
# Horizontally stack a column vector of ones with the result of the first exercise (variable ex1_res)
# X = <FILL IN>
X = np.hstack((np.ones((8,1)),ex1_res))
test_hashedequal(ex1_res.tostring(),'e740ea91c885cdae95499eaf53ec6f1429943d9c','Wrong value for variable ex1_res')
test_hashedequal(ex2_res.tostring(),'d5f18a630b2380fcae912f449b2a87766528e0f2','Wrong value for variable ex2_res')
test_hashedequal(X.tostring(),'bdf94b49c2b7c6ae71a916beb647236918ead39f','Wrong value for variable X')
# Keep last row of matrix X
# X_sub1 = <FILL IN>
X_sub1 = X[-1,]
# Keep first column of the three first rows of X
# X_sub2 = <FILL IN>
X_sub2 = X[:3,0]
# Keep first two columns of the three first rows of X
# X_sub3 = <FILL IN>
X_sub3 = X[:3,:2]
# Invert the order of the rows of X
# X_sub4 = <FILL IN>
X_sub4 = X[::-1,:]
test_hashedequal(X_sub1.tostring(),'51fb613567c9ef5fc33e7190c60ff37e0cd56706','Wrong value for variable X_sub1')
test_hashedequal(X_sub2.tostring(),'12a72e95677fc01de6b7bfb7f62d772d0bdb5b87','Wrong value for variable X_sub2')
test_hashedequal(X_sub3.tostring(),'f45247c6c31f9bcccfcb2a8dec9d288ea41e6acc','Wrong value for variable X_sub3')
test_hashedequal(X_sub4.tostring(),'1fd985c087ba518c6d040799e49a967e4b1d433a','Wrong value for variable X_sub4')
X_col2 = X[:,1]
X_row3 = X[2,]
print('Matrix X is\n', X)
print('Second column of matrix X:', X_col2, '; Dimensions:', X_col2.shape)
print('Third row of matrix X:', X_row3, '; Dimensions:', X_row3.shape)
X_col2 = X[:,1:2]
X_row3 = X[2:3,]
print('Second column of matrix X:', X_col2, '; Dimensions:', X_col2.shape)
print('Third row of matrix X:', X_row3, '; Dimensions:', X_row3.shape)
print(X.shape)
print(X.dot(X.T))
print(X.T.dot(X))
print(np.linalg.inv(X.T.dot(X)))
#print np.linalg.inv(X.dot(X.T))
test_hashedequal(X.tostring(),'bdf94b49c2b7c6ae71a916beb647236918ead39f','Wrong value for variable X')
# Obtain matrix Z using concatenation functions
# Z = np.hstack(<FILL IN>)
Z = np.hstack((X,np.log(X[:,1:])))
test_hashedequal(Z.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
def log_transform(x):
# return <FILL IN>
return np.hstack((x,np.log(x[1]),np.log(x[2])))
Z_map = np.array(list(map(log_transform,X)))
test_hashedequal(Z_map.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
# Z_lambda = np.array(list(map(lambda x: <FILL IN>,X)))
Z_lambda = np.array(list(map(lambda x: np.hstack((x,np.log(x[1]),np.log(x[2]))),X)))
test_hashedequal(Z_lambda.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
# Calculate variable Z_poly, using any method that you want
# Z_poly = <FILL IN>
Z_poly = np.array(list(map(lambda x: np.array([x[1]**k for k in range(4)]),X)))
test_hashedequal(Z_poly.tostring(),'7e025512fcee1c1db317a1a30f01a0d4b5e46e67','Wrong variable Z_poly')
w_log = np.array([3.3, 0.5, -2.4, 3.7, -2.9])
w_poly = np.array([3.2, 4.5, -3.2, 0.7])
# f_log = <FILL IN>
f_log = Z.dot(w_log)
# f_poly = <FILL IN>
f_poly = Z_poly.dot(w_poly)
test_hashedequal(f_log.tostring(),'d5801dfbd603f6db7010b9ef80fa48e351c0b38b','Incorrect evaluation of the logarithmic model')
test_hashedequal(f_poly.tostring(),'32abdcc0e32e76500947d0691cfa9917113d7019','Incorrect evaluation of the polynomial model')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Exercises about Numpy
Step5: This notebook reviews some of the Python modules that make it possible to work with data structures in an easy and efficient manner. We will review Numpy arrays and matrices, and some of the common operations which are needed when working with these data structures in Machine Learning.
Step6: Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of numeric type.
Step7: Some other useful Numpy methods are
Step8: 2. Products and powers of numpy arrays and matrices
Step9: 3. Numpy methods that can be carried out along different dimensions
Step10: Other numpy methods where you can specify the axis along which a certain operation should be carried out are
Step11: 5. Slicing
Step12: Extracting columns and rows from multidimensional arrays
Step13: If you want the extracted row or column to remain a 2-D row or column vector, it is important to specify an interval instead of a single value, even if that interval consists of just one value (a short sketch follows this list).
Step14: 6. Matrix inversion
Step15: 7. Exercises
Step16: 7.1. Non-linear transformations
Step17: Repeat the previous exercise, this time using the map() method together with function log_transform(). This function needs to be defined in such a way that guarantees that variable Z_map is the same as the previously computed variable Z.
Step18: Repeat the previous exercise once more. This time, define a lambda function for the task.
Step19: 7.2. Polynomial transformations
Step20: 7.3. Model evaluation
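A minimal sketch of Step13's point about interval slicing (plain NumPy; the array M and the printed shapes are illustrative only, not part of the exercises):
import numpy as np
M = np.arange(12).reshape(4, 3)
print(M[2, :].shape)    # (3,)   -> indexing with a single value collapses to 1-D
print(M[2:3, :].shape)  # (1, 3) -> a one-element interval keeps the row 2-D
print(M[:, 1:2].shape)  # (4, 1) -> likewise for a column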
|
15,277 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import skrf as rf
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
rf.stylely()
capas = rf.read_all('S_Matrices/', f_unit='MHz')
capas_set = rf.NetworkSet(capas)
f_band = '35-65MHz'
f = capas_set[0].f
omega = 2*np.pi*f
D_cylinders = [110, 100, 90, 80, 77, 76]
print(f'{len(capas)} s-parameter files representing {len(D_cylinders)} values of D_cyclinders')
# check the data
capas_set[f_band].plot_s_re(m=1, n=0)
idx = (f > 40e6) & (f < 60e6)
# dummy transmission line to match port 3
coax = rf.media.Coaxial(frequency=capas_set[0].frequency)
def Z1(omega, R, C, L):
'''
Resistor R in series with a capacitance C and an inductance L
R in Ohm, C in pF and L in nH
'''
return (R +1j*(- 1/(C*1e-12*omega) + L*1e-9*omega))
def Z1_re(omega, R, C, L):
return np.real(Z1(omega, R, C, L))
def Z1_im(omega, R, C, L):
return np.imag(Z1(omega, R, C, L))
def Z3(omega, Cshunt):
    'shunt capacitance. C_shunt in pF'
return -1j/(Cshunt*1e-12*omega)
def Z3_im(omega, Cshunt):
return np.imag(Z3(omega, Cshunt))
Z1s, Z2s, Z3s = {}, {}, {}
Cs, Rs, Ls, C_shunts = [], [], [], []
# For each network in the network set, extract the equivalent network parameters
for (ntw_name, ntw) in capas.items():
# connect the port 3 (voltage probe) to match load to make a 2-port network
# (since equivalent T network is only for a 2-ports network)
ntw = rf.connect(ntw, 2, coax.match(), 0)
# extract impedances
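    # for a series-Z1 / shunt-Z3 T network with Z2 = 0, the ABCD parameters give
    # A12 (B) = Z1 and A21 (C) = 1/Z3, hence the two extractions below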
Z3s[ntw_name] = 1/ntw.a[:,1,0]
Z1s[ntw_name] = ntw.a[:,0,1]
Z2s[ntw_name] = 0
# fit the equivalent impedances values to the equivalent circuit elements R,l,Cs
(C_shunt,), cov = curve_fit(Z3_im, omega[idx], np.imag(Z3s[ntw_name][idx]))
(_, C, L), cov = curve_fit(Z1_im, omega[idx], np.imag(Z1s[ntw_name][idx]))
(R, _, _), cov = curve_fit(Z1_re, omega[idx], np.real(Z1s[ntw_name][idx]))
print(f'{ntw_name}: C={C:.1f} pF, L={L:.1f} nH, C_shunt={C_shunt:.1f} pF, R={R:0.1e} Ohm')
Cs.append(C)
Rs.append(R)
Ls.append(L)
C_shunts.append(C_shunt)
fig, ax = plt.subplots()
ax.plot(D_cylinders, Cs, '.', ms=10)
ax.set_xlabel('D_cylinders')
ax.set_ylabel('Capacitance [pF]')
R_eq = np.mean(Rs)
L_eq = np.mean(Ls)
C_shunt_eq = np.mean(C_shunts)
def equivalent_capa_ntw(omega, C, R=R_eq, L=L_eq, C_shunt=C_shunt_eq):
_Z1 = Z1(omega, R, C, L)
_Z3 = Z3(omega, Cshunt=C_shunt)
A11 = 1 + _Z1/_Z3
A12 = _Z1
A21 = 1/_Z3
A22 = np.ones_like(omega)
ntw = rf.Network()
ntw.f_unit = 'Hz'
ntw.f = omega/(2*np.pi)
ntw.s = rf.a2s(np.array([[A11, A21], [A12, A22]]).T)
return ntw
eq_capa = equivalent_capa_ntw(omega, C=100, C_shunt=50)
fig, ax = plt.subplots()
capas_set[-2].plot_s_db(m=0, n=0, ls='--', ax=ax)
ax.plot(omega/(2*np.pi), eq_capa.s_db[:,0,0])
# L expressed in nH and C in pF to make the fit to work!
def ZZ(omega, R, C, L):
return R + 1j*(L*1e-9*omega - 1/(C*1e-12*omega))
def re_Z(omega, R, C, L):
return np.real( ZZ(omega, R, C, L) )
def im_Z(omega, R, C, L):
return np.imag( ZZ(omega, R, C, L) )
fix, (ax1, ax2) = plt.subplots(2,1, sharex=True)
# dummy transmission line to match port 3
coax = rf.media.Coaxial(frequency=capas_set[0].frequency)
for (ntw_name, ntw) in capas.items():
ntw = rf.connect(ntw, 2, coax.match(), 0)
Z = ntw.a[idx,0,1]
(R,_,_), pcov = curve_fit(re_Z, omega[idx], np.real(Z))
(_,C,L), pcov = curve_fit(im_Z, omega[idx], np.imag(Z))
print(f'C={C:0.1f} pF, L={L:0.1f} nH, R={R:0.1e} Ohm')
ax1.plot(f[idx]/1e6, np.real(Z))
media = rf.media.DefinedGammaZ0(frequency=ntw.frequency, z0=50)
ntw2 = media.capacitor(C*1e-12) ** media.inductor(L*1e-9) ** media.resistor(1e-1)
ax1.plot(f[idx]/1e6, ntw2.a_re[idx,0,1], ls='--')
ax2.plot(f[idx]/1e6, np.imag(Z))
ax2.plot(f[idx]/1e6, ntw2.a_im[idx,0,1], ls='--')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we import all the scattering parameters of the capacitor simulated at different positions. S-parameters are imported as skrf networks in a NetworkSet object. This NetworkSet will allow interpolating the capacitors at intermediate values of $D_{cylinders}$, by interpolating the scattering parameters between individual Networks.
Step2: Extracting the capacitance values
Step3: T impedance
Step4: The equivalent circuit parameters R,L can be set as constant
Step5: Now to test the performance of the equivalent model, we create a Network from the equivalent circuit and we compare S-parameter values to the ones of the NetworkSet
Step6: Series impedance
|
15,278 | <ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
from sympy import init_printing
from sympy import S
from sympy import sin, cos, tanh, exp, pi, sqrt
from boutdata.mms import x, y, z, t
from boutdata.mms import DDX
import os, sys
# If we add to sys.path, then it must be an absolute path
common_dir = os.path.abspath('./../../../../common')
# Sys path is a list of system paths
sys.path.append(common_dir)
from CELMAPy.MES import get_metric, make_plot, BOUT_print
init_printing()
folder = '../properZ/'
metric = get_metric()
# Initialization
the_vars = {}
# We need Lx
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])
# We make f a tanh function
# NOTE: We get a blow up in S here
# We multiply with cos(6*pi*x/(2*Lx)) in order to give it a modulation, and to get a non-zero value at the boundary
# We multiply with (x/Lx) in order for S not to blow up at rho=0
s = 0.15
c = 50
w = 30
the_vars['f'] = ((1/2) - (1/2)*(tanh(s*(x-(c - (w/2))))))*cos(6*pi*x/(2*Lx))*sin(2*z)
the_vars['S'] = (1/metric.J)*DDX(the_vars['f'], metric=metric)
make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False)
BOUT_print(the_vars, rational=False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize
Step2: Define the variables
Step3: NOTE
Step4: Calculating the solution
Step5: Plot
Step6: Print the variables in BOUT++ format
|
15,279 | <ASSISTANT_TASK:>
Python Code:
# Librería de cálculo simbólico
import sympy as sym
# Para imprimir en formato TeX
from sympy import init_printing; init_printing(use_latex='mathjax')
sym.var('x', real = True)
f = x**2
f
df = sym.diff(f, x)
df
x_c = sym.solve(df, x)
x_c[0]
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
f_num = sym.lambdify([x], f, 'numpy')
x_vec = np.linspace(-5, 5, 100)
plt.plot(x_vec, f_num(x_vec))
plt.xlabel('$x$')
plt.ylabel('$x^2$')
plt.show()
def f(x):
return x**2
df = sym.diff(f(x), x)
df
x_c = sym.solve(df, x)
x_c[0]
def g(x):
return x**3
dg = sym.diff(g(x), x)
x_c = sym.solve(dg, x)
x_c
x_vec = np.linspace(-2,2,100)
plt.figure(figsize=(8,6))
plt.plot(x_vec, g(x_vec))
plt.xlabel('$x$')
plt.ylabel('$x^3$')
plt.show()
f = x**2
#d2f = sym.diff(f, x, x)
d2f = sym.diff(f, x, 2)
d2f
d2f>0
g(x)
d2g = sym.diff(g(x), x, 2)
d2g
d2g.subs(x, 0)
f = x**2-6*x
f
df = sym.diff(f, x)
df
x_c = sym.solve(df, x)
x_c
f.subs(x, 0), f.subs(x, 5), f.subs(x, x_c[0])
f_num = sym.lambdify([x], f, 'numpy')
x_vec = np.linspace(0, 5, 100)
plt.figure(figsize=(8,6))
plt.plot(x_vec, f_num(x_vec), 'k', label = '$y=f(x)$')
plt.plot([0], [0], '*r', label = '$(0,0=\max_{0\leq x\leq 5} f(x))$')
plt.plot([3], [-9], '*b', label = '$(3,-9=\min_{0\leq x\leq 5} f(x))$')
plt.legend(loc='best')
plt.xlabel('x')
plt.show()
def h(x):
return x**3-3*x
dh = sym.diff(h(x), x)
x_c = sym.solve(dh, x)
x_c
h(-2.2), h(x_c[0]), h(x_c[1]), h(1.8)
x_vec = np.linspace(-2.2, 1.8, 100)
plt.figure(figsize=(8,6))
plt.plot(x_vec, h(x_vec), 'k', label = '$y=f(x)$')
plt.plot([x_c[0]], [h(x_c[0])], '*r', label = '$\max_{-2.2\leq x\leq 1.8} h(x)$')
plt.plot([-2.2], [h(-2.2)], '*b', label = '$\min_{-2.2\leq x\leq 1.8} h(x)$')
plt.legend(loc='best')
plt.xlabel('x')
plt.show()
sym.var('x y')
x, y
def f(x, y):
return x**2 + y**2
dfx = sym.diff(f(x,y), x)
dfy = sym.diff(f(x,y), y)
dfx, dfy
xy_c = sym.solve([dfx, dfy], [x, y])
xy_c
x_c, y_c = xy_c[x], xy_c[y]
x_c
d2fx = sym.diff(f(x,y), x, 2)
d2fy = sym.diff(f(x,y), y, 2)
dfxy = sym.diff(f(x,y), x, y)
Jf = sym.Matrix([[d2fx, dfxy], [dfxy, d2fy]])
Jf.eigenvals()
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.linspace(-2, 2, 100)
y = x
X, Y = np.meshgrid(x, y)
ax.plot_surface(X, Y, f(X, Y))
ax.plot([x_c], [y_c], [f(x_c,y_c)], '*r')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's look at the graph...
Step2: Another way of doing the above
Step3: The converse of the previous theorem is not true.
Step4: 2. Second-derivative test (a sketch of its statement follows this list)
Step5: Therefore, by the second-derivative test, $f(0)=0$ is a relative minimum (in fact, the global minimum).
Step6: 3. Method for determining the absolute extrema of a continuous function y=f(x) on [a,b]
Step7: We evaluate $f$ at the endpoints and at the critical points
Step8: We conclude that the absolute maximum of $f$ on $\left[0,5\right]$ is $0$, attained at $x=0$, and that the absolute minimum is $-9$, attained at $x=3$.
Step9: Activity
Step10: In several variables...
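Referring back to Step4, a sketch of the standard second-derivative test (basic calculus, not quoted from this notebook): if $f'(c)=0$ and $f''(c)>0$, then $f$ has a relative minimum at $c$; if $f''(c)<0$, a relative maximum; if $f''(c)=0$, the test is inconclusive.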
|
15,280 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text =[]
target_id_text =[]
for sentences in source_text.split('\n'):
sentence_out = [source_vocab_to_int[word] for word in sentences.split()]
source_id_text.append(sentence_out)
for sentences in target_text.split('\n'):
sentence_out = [target_vocab_to_int[word] for word in sentences.split()]
sentence_out.append(target_vocab_to_int['<EOS>'])
target_id_text.append(sentence_out)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    Preprocess target data for decoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
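    # drop the last token of every target sequence and prepend the <GO> id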
go_batch = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']),
tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])],
1)
return go_batch
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
basic_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
multi_cell = tf.contrib.rnn.MultiRNNCell([basic_cell] * num_layers)
dropout = tf.contrib.rnn.DropoutWrapper(multi_cell, keep_prob)
output, state = tf.nn.dynamic_rnn(dropout, rnn_inputs, dtype=tf.float32)
return state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
simple_decoder_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
output, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
simple_decoder_fn_train,
dec_embed_input,
sequence_length,
scope=decoding_scope)
logits = output_fn(tf.nn.dropout(output, keep_prob))
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
simple_decoder_fn_inference = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
vocab_size)
logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, simple_decoder_fn_inference, scope=decoding_scope)
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
basic_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(basic_cell, keep_prob)
multi_cell = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x,
vocab_size,
None,
scope=decoding_scope)
decoding_logits_train = decoding_layer_train(encoder_state,
multi_cell,
dec_embed_input,
sequence_length,
decoding_scope,
output_fn,
keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
decoding_logits_infer = decoding_layer_infer(encoder_state,
multi_cell,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
sequence_length,
vocab_size,
decoding_scope,
output_fn,
keep_prob)
return decoding_logits_train, decoding_logits_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
encoder_embed_input = tf.contrib.layers.embed_sequence(input_data,
source_vocab_size,
enc_embedding_size)
encoder_output = encoding_layer(encoder_embed_input,
rnn_size,
num_layers,
keep_prob)
decoder_input = process_decoding_input(target_data,
target_vocab_to_int,
batch_size)
decoder_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
decoder_embed_input = tf.nn.embedding_lookup(decoder_embed, decoder_input)
t_logits, i_logits = decoding_layer(decoder_embed_input,
decoder_embed,
encoder_output,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob)
return t_logits, i_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
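    # zero-pad the shorter of target/logits along the sequence axis so their shapes match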
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
unknown_word = vocab_to_int['<UNK>']
sentence_lowercase = sentence.lower()
word_ids = [vocab_to_int.get(word, unknown_word) for word in sentence_lowercase.split()]
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
15,281 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q -U tensorflow-text
!pip install -q tensorflow_datasets
import collections
import os
import pathlib
import re
import string
import sys
import tempfile
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
import tensorflow_text as text
import tensorflow as tf
tf.get_logger().setLevel('ERROR')
pwd = pathlib.Path.cwd()
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
for pt, en in train_examples.take(1):
print("Portuguese: ", pt.numpy().decode('utf-8'))
print("English: ", en.numpy().decode('utf-8'))
train_en = train_examples.map(lambda pt, en: en)
train_pt = train_examples.map(lambda pt, en: pt)
from tensorflow_text.tools.wordpiece_vocab import bert_vocab_from_dataset as bert_vocab
bert_tokenizer_params=dict(lower_case=True)
reserved_tokens=["[PAD]", "[UNK]", "[START]", "[END]"]
bert_vocab_args = dict(
# The target vocabulary size
vocab_size = 8000,
# Reserved tokens that must be included in the vocabulary
reserved_tokens=reserved_tokens,
# Arguments for `text.BertTokenizer`
bert_tokenizer_params=bert_tokenizer_params,
# Arguments for `wordpiece_vocab.wordpiece_tokenizer_learner_lib.learn`
learn_params={},
)
%%time
pt_vocab = bert_vocab.bert_vocab_from_dataset(
train_pt.batch(1000).prefetch(2),
**bert_vocab_args
)
print(pt_vocab[:10])
print(pt_vocab[100:110])
print(pt_vocab[1000:1010])
print(pt_vocab[-10:])
def write_vocab_file(filepath, vocab):
with open(filepath, 'w') as f:
for token in vocab:
print(token, file=f)
write_vocab_file('pt_vocab.txt', pt_vocab)
%%time
en_vocab = bert_vocab.bert_vocab_from_dataset(
train_en.batch(1000).prefetch(2),
**bert_vocab_args
)
print(en_vocab[:10])
print(en_vocab[100:110])
print(en_vocab[1000:1010])
print(en_vocab[-10:])
write_vocab_file('en_vocab.txt', en_vocab)
!ls *.txt
pt_tokenizer = text.BertTokenizer('pt_vocab.txt', **bert_tokenizer_params)
en_tokenizer = text.BertTokenizer('en_vocab.txt', **bert_tokenizer_params)
for pt_examples, en_examples in train_examples.batch(3).take(1):
for ex in en_examples:
print(ex.numpy())
# Tokenize the examples -> (batch, word, word-piece)
token_batch = en_tokenizer.tokenize(en_examples)
# Merge the word and word-piece axes -> (batch, tokens)
token_batch = token_batch.merge_dims(-2,-1)
for ex in token_batch.to_list():
print(ex)
# Lookup each token id in the vocabulary.
txt_tokens = tf.gather(en_vocab, token_batch)
# Join with spaces.
tf.strings.reduce_join(txt_tokens, separator=' ', axis=-1)
words = en_tokenizer.detokenize(token_batch)
tf.strings.reduce_join(words, separator=' ', axis=-1)
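# tf.argmax over the boolean match gives the index of "[START]"/"[END]" within the reserved token list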
START = tf.argmax(tf.constant(reserved_tokens) == "[START]")
END = tf.argmax(tf.constant(reserved_tokens) == "[END]")
def add_start_end(ragged):
count = ragged.bounding_shape()[0]
starts = tf.fill([count,1], START)
ends = tf.fill([count,1], END)
return tf.concat([starts, ragged, ends], axis=1)
words = en_tokenizer.detokenize(add_start_end(token_batch))
tf.strings.reduce_join(words, separator=' ', axis=-1)
def cleanup_text(reserved_tokens, token_txt):
# Drop the reserved tokens, except for "[UNK]".
bad_tokens = [re.escape(tok) for tok in reserved_tokens if tok != "[UNK]"]
bad_token_re = "|".join(bad_tokens)
bad_cells = tf.strings.regex_full_match(token_txt, bad_token_re)
result = tf.ragged.boolean_mask(token_txt, ~bad_cells)
# Join them into strings.
result = tf.strings.reduce_join(result, separator=' ', axis=-1)
return result
en_examples.numpy()
token_batch = en_tokenizer.tokenize(en_examples).merge_dims(-2,-1)
words = en_tokenizer.detokenize(token_batch)
words
cleanup_text(reserved_tokens, words).numpy()
class CustomTokenizer(tf.Module):
def __init__(self, reserved_tokens, vocab_path):
self.tokenizer = text.BertTokenizer(vocab_path, lower_case=True)
self._reserved_tokens = reserved_tokens
self._vocab_path = tf.saved_model.Asset(vocab_path)
vocab = pathlib.Path(vocab_path).read_text().splitlines()
self.vocab = tf.Variable(vocab)
## Create the signatures for export:
# Include a tokenize signature for a batch of strings.
self.tokenize.get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string))
# Include `detokenize` and `lookup` signatures for:
# * `Tensors` with shapes [tokens] and [batch, tokens]
# * `RaggedTensors` with shape [batch, tokens]
self.detokenize.get_concrete_function(
tf.TensorSpec(shape=[None, None], dtype=tf.int64))
self.detokenize.get_concrete_function(
tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int64))
self.lookup.get_concrete_function(
tf.TensorSpec(shape=[None, None], dtype=tf.int64))
self.lookup.get_concrete_function(
tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int64))
# These `get_*` methods take no arguments
self.get_vocab_size.get_concrete_function()
self.get_vocab_path.get_concrete_function()
self.get_reserved_tokens.get_concrete_function()
@tf.function
def tokenize(self, strings):
enc = self.tokenizer.tokenize(strings)
# Merge the `word` and `word-piece` axes.
enc = enc.merge_dims(-2,-1)
enc = add_start_end(enc)
return enc
@tf.function
def detokenize(self, tokenized):
words = self.tokenizer.detokenize(tokenized)
return cleanup_text(self._reserved_tokens, words)
@tf.function
def lookup(self, token_ids):
return tf.gather(self.vocab, token_ids)
@tf.function
def get_vocab_size(self):
return tf.shape(self.vocab)[0]
@tf.function
def get_vocab_path(self):
return self._vocab_path
@tf.function
def get_reserved_tokens(self):
return tf.constant(self._reserved_tokens)
tokenizers = tf.Module()
tokenizers.pt = CustomTokenizer(reserved_tokens, 'pt_vocab.txt')
tokenizers.en = CustomTokenizer(reserved_tokens, 'en_vocab.txt')
model_name = 'ted_hrlr_translate_pt_en_converter'
tf.saved_model.save(tokenizers, model_name)
reloaded_tokenizers = tf.saved_model.load(model_name)
reloaded_tokenizers.en.get_vocab_size().numpy()
tokens = reloaded_tokenizers.en.tokenize(['Hello TensorFlow!'])
tokens.numpy()
text_tokens = reloaded_tokenizers.en.lookup(tokens)
text_tokens
round_trip = reloaded_tokenizers.en.detokenize(tokens)
print(round_trip.numpy()[0].decode('utf-8'))
!zip -r {model_name}.zip {model_name}
!du -h *.zip
pt_lookup = tf.lookup.StaticVocabularyTable(
num_oov_buckets=1,
initializer=tf.lookup.TextFileInitializer(
filename='pt_vocab.txt',
key_dtype=tf.string,
key_index = tf.lookup.TextFileIndex.WHOLE_LINE,
value_dtype = tf.int64,
value_index=tf.lookup.TextFileIndex.LINE_NUMBER))
pt_tokenizer = text.BertTokenizer(pt_lookup)
pt_lookup.lookup(tf.constant(['é', 'um', 'uma', 'para', 'não']))
pt_lookup = tf.lookup.StaticVocabularyTable(
num_oov_buckets=1,
initializer=tf.lookup.KeyValueTensorInitializer(
keys=pt_vocab,
values=tf.range(len(pt_vocab), dtype=tf.int64)))
pt_tokenizer = text.BertTokenizer(pt_lookup)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <table class="tfo-notebook-buttons" align="left">
Step2: Download the dataset
Step3: This dataset produces Portuguese/English sentence pairs
Step4: Note a few things about the example sentences above
Step5: Generate the vocabulary
Step6: The bert_vocab.bert_vocab_from_dataset function will generate the vocabulary.
Step7: Here are some slices of the resulting vocabulary.
Step8: Write a vocabulary file
Step9: Use that function to generate a vocabulary from the english data
Step10: Here are the two vocabulary files
Step11: Build the tokenizer
Step12: Now you can use it to encode some text. Take a batch of 3 examples from the english data
Step13: Run it through the BertTokenizer.tokenize method. Initially, this returns a tf.RaggedTensor with axes (batch, word, word-piece)
Step14: If you replace the token IDs with their text representations (using tf.gather) you can see that in the first example the words "searchability" and "serendipity" have been decomposed into "search ##ability" and "s ##ere ##nd ##ip ##ity"
Step15: To re-assemble words from the extracted tokens, use the BertTokenizer.detokenize method
Step16: Note
Step17: Custom detokenization
Step18: Export
Step19: Build a CustomTokenizer for each language
Step20: Export the tokenizers as a saved_model
Step21: Reload the saved_model and test the methods
Step22: Archive it for the translation tutorials
Step23: <a id="algorithm"></a>
Step24: Now you have direct access to the lookup table used in the tokenizer.
Step25: You don't need to use a vocabulary file, tf.lookup has other initializer options. If you have the vocabulary in memory you can use lookup.KeyValueTensorInitializer
|
15,282 | <ASSISTANT_TASK:>
Python Code:
import graphlab as gl
from IPython.display import display
from IPython.display import Image
gl.canvas.set_target('ipynb')
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(2, 2)
plt.text(2, 2, '汉字', fontsize = 300)
plt.show()
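# The task description below asks how to uninstall a package. A minimal sketch,
# assuming pip is the installer (the package name is only an example):
# !pip uninstall -y graphlab-create
# or, in a conda-managed environment:
# !conda remove --yes graphlab-create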
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
 Step1: How to uninstall a package
|
15,283 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.PlateCarree(central_longitude=180))
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.LambertConformal())
ax.coastlines(resolution='110m')
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.LambertCylindrical())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Mercator())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Miller())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Mollweide())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Orthographic())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Robinson())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Stereographic())
ax.coastlines(resolution='110m')
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.TransverseMercator())
ax.coastlines(resolution='110m')
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.RotatedPole(pole_latitude=37.5, pole_longitude=177.5))
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.OSGB())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.EuroPP())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Geostationary())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.Gnomonic())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.NorthPolarStereo())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
# note:
# the coastlines method can use several resolutions (110m, 50m, 10m)
# for convenience here we use 110m
# see :
# ax.coastlines?
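# for example (assumption: the higher-resolution Natural Earth data is
# downloaded on first use):
# ax.coastlines(resolution='50m')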
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.OSNI())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
plt.figure(figsize=(12, 12))
ax = plt.axes(projection=ccrs.SouthPolarStereo())
ax.coastlines(resolution='110m')
ax.stock_img()
ax.gridlines();
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PlateCarree
Step2: set the central longitude to $180^\circ$
 Step3: LambertConformal
Step4: LambertCylindrical
Step5: Mercator
Step6: Miller
Step7: Mollweide
Step8: Orthographic
Step9: Robinson
Step10: Stereographic
Step11: TransverseMercator
 Step12: InterruptedGoodeHomolosine
Step13: RotatedPole
Step14: OSGB
Step15: EuroPP
Step16: Geostationary
Step17: Gnomonic
Step18: NorthPolarStereo
Step19: OSNI
Step20: SouthPolarStereo
|
15,284 | <ASSISTANT_TASK:>
Python Code:
# Run this cell, but please don't change it.
import numpy as np
import math
from datascience import *
# These lines set up the plotting functionality and formatting.
import matplotlib
matplotlib.use('Agg', warn=False)
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('project1.ok')
# Run this cell, but please don't change it.
districts = Map.read_geojson('water_districts.geojson')
zips = Map.read_geojson('ca_zips.geojson.gz')
usage_raw = Table.read_table('water_usage.csv', dtype={'pwsid': str})
income_raw = Table.read_table('ca_income_by_zip.csv', dtype={'ZIP': str}).drop('STATEFIPS', 'STATE', 'agi_stub')
wd_vs_zip = Table.read_table('wd_vs_zip.csv', dtype={'PWSID': str, 'ZIP': str}).set_format(make_array(2, 3), PercentFormatter)
districts.format(width=400, height=200)
district_table = Table.from_records(districts.features)
district_table.show(3)
# Fill in the next line so the last line draws a map of those two districts.
alameda_and_east_bay = ...
Map(alameda_and_east_bay, height=300, width=300)
_ = tests.grade('q11')
income_raw
income_by_zipcode = ...
income_by_zipcode
_ = tests.grade('q21')
...
...
income_by_zipcode
_ = tests.grade('q22')
income = Table().with_columns(
...
...
...
...
)
income.set_format('total income ($)', NumberFormatter(0)).show(5)
_ = tests.grade('q23')
income = ...
_ = tests.grade('q24')
# Our solution took several lines of code.
average_income = ...
average_income
_ = tests.grade('q25')
avg_total = ...
avg_total
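# Aside (not part of the assignment template): the description below notes that
# averaging income across ZIP codes "requires some care". A toy illustration with
# made-up numbers:
# ZIP A: 100 returns averaging $50,000; ZIP B: 300 returns averaging $90,000
# mean of the two averages: (50000 + 90000) / 2 = 70000
# weighted average (total income / total returns):
# (100*50000 + 300*90000) / (100 + 300) = 80000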
# Write code to make a scatter plot here.
...
# Build and display a table with two rows:
# 1) incomes of returns in ZIP codes with a greater-than-average proportion of farmers
# 2) incomes of returns in other ZIP codes
# Write code to draw a map of only the high-income ZIP codes.
# We have filled in some of it and suggested names for variables
# you might want to define.
zip_features = Table.from_records(zips.features)
high_average_zips = ...
high_zips_with_region = ...
Map(high_zips_with_region.column('feature'), width=400, height=300)
# Run this cell to create the usage table.
usage_raw.set_format(4, NumberFormatter)
max_pop = usage_raw.select(0, 'population').group(0, max).relabeled(1, 'Population')
avg_water = usage_raw.select(0, 'res_gpcd').group(0, np.mean).relabeled(1, 'Water')
usage = max_pop.join('pwsid', avg_water).relabeled(0, 'PWSID')
usage
# We have filled in the call to districts.color(...). Set per_capita_usage
# to an appropriate table so that a map of all the water districts is
# displayed.
per_capita_usage = ...
districts.color(per_capita_usage, key_on='feature.properties.PWSID')
_ = tests.grade('q31')
wd_vs_zip.show(5)
def district_for_zip(zip_code):
zip_code = str(zip_code) # Ensure that the ZIP code is a string, not an integer
districts = ...
at_least_half = ...
if at_least_half:
...
else:
return 'No District'
district_for_zip(94709)
_ = tests.grade('q33')
zip_pwsids = income.apply(district_for_zip, 'ZIP')
income_with_pwsid = income.with_column('PWSID', zip_pwsids).where('PWSID', are.not_equal_to("No District"))
income_with_pwsid.set_format(2, NumberFormatter(0)).show(5)
district_income = ...
district_data = ...
district_data.set_format(make_array('Population', 'Water', 'Income'), NumberFormatter(0))
_ = tests.grade('q34')
bay_districts = Table.read_table('bay_districts.csv')
bay_water_vs_income = ...
top_10 = ...
...
# For your convenience, you can run this cell to run all the tests at once!
import os
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
# Your extensions here (completely optional)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, load the data. Loading may take some time.
Step2: Part 1
Step3: A Map is a collection of regions and other features such as points and markers, each of which has a string id and various properties. You can view the features of the districts map as a table using Table.from_records.
Step4: To display a Map containing only two features from the district_table, call Map on an array containing those two features from the feature column.
Step5: Hint
Step6: Some observations
Step7: Your income_by_zipcode table probably has column names like N1 sum, which looks a little weird.
Step8: Question 2.3.
Step9: Question 2.4. All ZIP codes with less than 100 returns (or some other special conditions) are grouped together into one ZIP code with a special code. Remove the row for that ZIP code from the income table.
Step10: Because each ZIP code has a different number of people, computing the average income across several ZIP codes requires some care. This will come up several times in this project. Here is a simple example
Step11: Question 2.6. Among all California tax returns that include a total income amount, what is the average total income? Express the answer in dollars as an int rounded to the nearest dollar.
Step12: Farming
Step13: Question 2.8. From the graph, can you say whether ZIP codes with more farmers typically have lower or higher average income than ZIP codes with few or no farmers? Can you say how much lower or higher?
Step14: Write your answer here, replacing this text.
Step15: Write your answer here, replacing this text.
Step16: Question 3.1. Draw a map of the water districts, colored by the per capita water usage in each district.
Step17: Question 3.2. Based on the map above, which part of California appears to use more water per person
Step18: Question 3.3. Complete the district_for_zip function that takes a ZIP code as its argument. It returns the PWSID with the largest value of ZIP in District for that zip_code, if that value is at least 50%. Otherwise, it returns the string 'No District'.
Step19: This function can be used to associate each ZIP code in the income table with a PWSID and discard ZIP codes that do not lie (mostly) in a water district.
Step20: Question 3.4. Create a table called district_data with one row per PWSID and the following columns
Step21: Question 3.5. The bay_districts table gives the names of all water districts in the San Francisco Bay Area. Is there an association between water usage and income among Bay Area water districts? Use the tables you have created to compare water usage between the 10 Bay Area water districts with the highest average income and the rest of the Bay Area districts, then describe the association. Do not include any districts in your analysis for which you do not have income information.
Step22: Complete this one-sentence conclusion
Step23: If you want, draw some more maps below.
|
15,285 | <ASSISTANT_TASK:>
Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
# TODO: Implement Function
inputs_real = tf.placeholder(tf.float32, shape=(None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs_real, inputs_z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def discriminator(images, reuse=False):
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
alpha = 0.2
keep_prob=0.8
with tf.variable_scope('discriminator', reuse=reuse):
# using 4 layer network as in DCGAN Paper
# Conv 1
conv1 = tf.layers.conv2d(images, 64, 5, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
lrelu1 = tf.maximum(alpha * conv1, conv1)
# Conv 2
conv2 = tf.layers.conv2d(lrelu1, 128, 5, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm2 = tf.layers.batch_normalization(conv2, training=True)
lrelu2 = tf.maximum(alpha * batch_norm2, batch_norm2)
drop2 = tf.nn.dropout(lrelu2, keep_prob=keep_prob)
# Conv 3
conv3 = tf.layers.conv2d(drop2, 256, 5, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm3 = tf.layers.batch_normalization(conv3, training=True)
lrelu3 = tf.maximum(alpha * batch_norm3, batch_norm3)
drop3 = tf.nn.dropout(lrelu3, keep_prob=keep_prob)
# Conv 4
conv4 = tf.layers.conv2d(drop3, 512, 5, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm4 = tf.layers.batch_normalization(conv4, training=True)
lrelu4 = tf.maximum(alpha * batch_norm4, batch_norm4)
drop4 = tf.nn.dropout(lrelu4, keep_prob=keep_prob)
# Flatten
flat = tf.reshape(drop4, (-1, 7*7*512))
# Logits
logits = tf.layers.dense(flat, 1)
# Output
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
alpha = 0.2
keep_prob=0.8
with tf.variable_scope('generator', reuse=False if is_train==True else True):
# Fully connected
fc1 = tf.layers.dense(z, 7*7*512)
fc1 = tf.reshape(fc1, (-1, 7, 7, 512))
fc1 = tf.maximum(alpha*fc1, fc1)
# Starting Conv Transpose Stack
deconv2 = tf.layers.conv2d_transpose(fc1, 256, 3, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm2 = tf.layers.batch_normalization(deconv2, training=is_train)
lrelu2 = tf.maximum(alpha * batch_norm2, batch_norm2)
drop2 = tf.nn.dropout(lrelu2, keep_prob=keep_prob)
deconv3 = tf.layers.conv2d_transpose(drop2, 128, 3, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm3 = tf.layers.batch_normalization(deconv3, training=is_train)
lrelu3 = tf.maximum(alpha * batch_norm3, batch_norm3)
drop3 = tf.nn.dropout(lrelu3, keep_prob=keep_prob)
deconv4 = tf.layers.conv2d_transpose(drop3, 64, 3, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm4 = tf.layers.batch_normalization(deconv4, training=is_train)
lrelu4 = tf.maximum(alpha * batch_norm4, batch_norm4)
drop4 = tf.nn.dropout(lrelu4, keep_prob=keep_prob)
# Logits
logits = tf.layers.conv2d_transpose(drop4, out_channel_dim, 3, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
# Output
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
g_model = generator(input_z, out_channel_dim)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
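    # Real labels are smoothed to 0.9 (one-sided label smoothing) so the
    # discriminator does not become overconfident on real images.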
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * 0.9)
)
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))
)
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))
)
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope='generator')):
g_train_opt = tf.train.AdamOptimizer(learning_rate = learning_rate,beta1 = beta1).minimize(g_loss, var_list = g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
tf.reset_default_graph()
input_real, input_z, _ = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
batch_images = batch_images * 2
steps += 1
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_opt, feed_dict={input_z: batch_z})
if steps % 100 == 0:
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
_ = show_generator_output(sess, 1, input_z, data_shape[3], data_image_mode)
batch_size = 32
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
batch_size = 64
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
print("Done")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Face Generation
Step3: Explore the Data
Step5: CelebA
Step7: Preprocess the Data
Step10: Input
Step13: Discriminator
Step16: Generator
Step19: Loss
Step22: Optimization
Step25: Neural Network Training
Step27: Train
Step29: MNIST
Step31: CelebA
Step32: Submitting This Project
|
15,286 | <ASSISTANT_TASK:>
Python Code:
import re
from time import time
import string
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pprint import pprint
#Sklearn Imports
from sklearn import metrics
from sklearn.datasets import fetch_20newsgroups
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix, precision_recall_curve, roc_auc_score, auc
from nltk import PorterStemmer
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize
import nltk
nltk.download('stopwords') #download the latest stopwords
all_newsgroups= fetch_20newsgroups()
pprint(list(all_newsgroups.target_names))
cats = ['sci.med' , 'rec.motorcycles']
newsgroups_train = fetch_20newsgroups(subset='train', categories=cats, remove=('headers', 'footers', 'quotes'))
newsgroups_test = fetch_20newsgroups(subset='test', categories=cats, remove=('headers', 'footers', 'quotes'))
print("Categories to classify\n-----------------------")
print(list(newsgroups_train.target_names))
print("TRAIN DATA\n---------------")
print("Data Type:", type(newsgroups_train))
print("%d documents" % len(newsgroups_train.filenames))
print("%d categories" % len(newsgroups_train.target_names))
print("X shape :", newsgroups_train.filenames.shape)
print("Y shape :",newsgroups_train.target.shape)
print("Y head :", newsgroups_train.target[:10])
print("TEST DATA\n---------------")
print("Data Type:", type(newsgroups_test))
print("%d documents" % len(newsgroups_test.filenames))
print("%d categories" % len(newsgroups_test.target_names))
print("X shape :", newsgroups_test.filenames.shape)
print("Y shape :",newsgroups_test.target.shape)
print("Y head :", newsgroups_test.target[:10])
print(newsgroups_train.data[0])
print(newsgroups_test.data[0])
print(type(newsgroups_test.data))
print(type(newsgroups_test.data[0]))
train_labels = newsgroups_train.target #0, 1 array
#print(train_labels)
test_labels = newsgroups_test.target
#print(test_labels)
RE_PREPROCESS = r'\W+|\d+' #the regular expressions that matches all non-characters
#train_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_train.jobDescription.values])
#test_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_test.jobDescription.values])
labels = np.append(train_labels, test_labels)
vectorizer = TfidfVectorizer()
vectors_train = vectorizer.fit_transform(newsgroups_train.data)
vectors_train.shape
vectors_train.nnz / float(vectors_train.shape[0])
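# average number of non-zero tf-idf features per training document; the
# description below quotes roughly 87 non-zero entries out of 18,000+ features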
vectors_test = vectorizer.transform(newsgroups_test.data)
clf = MultinomialNB(alpha=.01)
clf.fit(vectors_train, newsgroups_train.target)
y_true = newsgroups_test.target
y_pred = clf.predict(vectors_test)
metrics.f1_score(y_true, y_pred, average='macro')
cm = confusion_matrix(y_true, y_pred)
def print_cm(cm, labels, hide_zeroes=False, hide_diagonal=False, hide_threshold=None):
    pretty print for confusion matrices
columnwidth = max([len(x) for x in labels] + [5]) # 5 is value length
empty_cell = " " * columnwidth
# Print header
print(" " + empty_cell, end=" ")
for label in labels:
print("%{0}s".format(columnwidth) % label, end=" ")
print()
# Print rows
for i, label1 in enumerate(labels):
print(" %{0}s".format(columnwidth) % label1, end=" ")
for j in range(len(labels)):
cell = "%{0}.1f".format(columnwidth) % cm[i, j]
if hide_zeroes:
cell = cell if float(cm[i, j]) != 0 else empty_cell
if hide_diagonal:
cell = cell if i != j else empty_cell
if hide_threshold:
cell = cell if cm[i, j] > hide_threshold else empty_cell
print(cell, end=" ")
print()
print_cm(cm, labels = ['Automobiles', 'Medical'])
pd.crosstab(y_true, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
def plot_precision_recall(y_true,y_score):
Plot a precision recall curve
Parameters
----------
y_true: ls
ground truth labels
y_score: ls
score output from model
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true,y_score[:,1])
plt.plot(recall_curve, precision_curve)
plt.xlabel('Recall')
plt.ylabel('Precision')
auc_val = auc(recall_curve,precision_curve)
print('AUC-PR: {0:1f}'.format(auc_val))
plt.show()
plt.clf()
y_score = clf.predict_proba(vectors_test)
plot_precision_recall(y_true, y_score)
#Params - NOT tuned
ANALYZER = "word" #unit of features are single words rather then phrases of words
STRIP_ACCENTS = 'unicode'
TOKENIZER = None
MAX_DF = (1.0) # Exclude words that have a frequency greater than the threshold
STOP_WORDS = (stopwords.words('english'), None)
#Params - TUNED
NGRAM_RANGE = ((0,1), (0,2)) #Range for pharases of words
MIN_DF = (0, 0.01) # Exclude words that have a frequency less than the threshold
ALPHA = (0.01, 0.1, 1)
pipeline = Pipeline([('tfidf', TfidfVectorizer()), ('clf', MultinomialNB())])
# uncommenting more parameters will give better exploring power but will
# increase processing time in a combinatorial way
parameters = {
'tfidf__ngram_range':NGRAM_RANGE,
'tfidf__min_df':MIN_DF,
'clf__alpha': ALPHA,
}
def optimize_pipeline(pipeline):
# multiprocessing requires the fork to happen in a __main__ protected
# block
# find the best parameters for both the feature extraction and the
# classifier
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=True)
print("Performing grid search...")
print("pipeline:", [name for name, _ in pipeline.steps])
print("parameters:")
pprint(parameters)
t0 = time()
grid_search.fit(newsgroups_train.data, newsgroups_train.target)
print("done in %0.3fs" % (time() - t0))
print()
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
optimize_pipeline(pipeline)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Dataset
Step2: Create Train and Test Data [from categories-medical and automobiles]
Step3: Explore the data
Step4: Pre-process Data
Step5: Transform Data (Vectorize)
Step6: There are 18000+ features for each document. And on average, 87 out of 18000 features are non-zeros. This is a sparse matrix
Step7: Evaluate Classifier
Step8: Interpretation
Step10: Pretty Print Confusion Matrix
Step12: Interpretation
Step13: Interpretation
|
15,287 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import time
import pylab
import numpy as np
import pandas as pd
import pycupid.locations
people = pd.read_json('/Users/ajmendez/data/okcupid/random.json')
print('Scraping archive found {:,d} random people'.format(len(people)))
locations = people['location'].astype(unicode)#.replace(r'\s+', np.nan, regex=True)
isgood = (locations.str.extract((u'(\u2026)')).isnull()) & (locations.str.len() > 0)
noriginal = len(locations.unique())
unique_locations = locations[isgood].unique()
nlocations = len(unique_locations)
print('There are a total of {:,d} unique locations and {:,d} good ones'.format(noriginal, nlocations))
print(' > missing locations: {:0.1f}%'.format((noriginal-nlocations)*100.0/noriginal))
print(' > missing people: {:0.1f}%'.format((len(locations)-len(np.where(isgood)[0]))*100.0/len(locations)))
# does not seem to pickup the lat/lon notation from the old db
location_map = pd.read_json('/Users/ajmendez/data/okcupid/location_map.json', orient='index')
location_map.columns = ['lat', 'lon']
print('Location cache contains {:,d} locations'.format(len(location_map)))
# load v2:
location_map = pd.read_json('/Users/ajmendez/data/okcupid/locations_v2.json', orient='index')
geonames = pycupid.locations.getGN()
inew = 0
for i, location in enumerate(unique_locations):
if location in location_map.index:
continue
print u'Getting location: {}'.format(location)
try:
loc, (lat, lon) = geonames.geocode(location.encode('utf8'))
except Exception as e:
print u' > Failed: {}'.format(location)
# raise e
# too many loc* names!
location_map.loc[location] = [lat,lon]
inew += 1
# give the API a bit of a break
time.sleep(0.2)
if inew > 1000:
break
print len(location_map)
location_map.to_json('/Users/ajmendez/data/okcupid/locations_v2.json', orient='index')
finished = []
for i, location in enumerate(location_map.index):
if location in finished:
continue
tmp = location_map.loc[location]
isloc = (locations == location)
people.loc[isloc, 'lat'] = tmp['lat']
people.loc[isloc, 'lon'] = tmp['lon']
people.loc[isloc, 'nloc'] = isloc.sum()
finished.append(location)
if (i%1000 == 0):
print i,
# better plots later, this is just a test
people.plot('lon', 'lat', kind='scatter', s=2, lw=0, alpha=0.1)
people.to_csv('/Users/ajmendez/data/okcupid/random_v2.csv', encoding='utf-8')
people = pd.read_csv('/Users/ajmendez/data/okcupid/random_v2.csv')
tmp = people['username'].str.extract((u'(\d+)'))
people['username_number'] = tmp.apply(lambda x: int(x) if isinstance(x, (str, unicode)) else np.nan)
people['username_nlength'] = tmp.apply(lambda x: len(x) if isinstance(x, (str,unicode)) else 0)
people.to_csv('/Users/ajmendez/data/okcupid/random_v3.csv', encoding='utf-8')
names = ['dinosaur', 'saur','saurus', 'dino','jurassic', 'rex', 'sarus',
'pterodactyl', 'archaeopter', 'pteranod', 'pterodact']
people['hasdino'] = people['username'].str.lower().str.extract((u'({})'.format('|'.join(names)))).notnull()
people.to_csv('/Users/ajmendez/data/okcupid/random_v4.csv', encoding='utf-8')
people = pd.read_csv('/Users/ajmendez/data/okcupid/random_v2.csv')
people.to_json('/Users/ajmendez/data/okcupid/random_v2.json', orient='index')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Feature
Step2: Geolocation APIs have hourly limits, so this was originally run using a cron job nightly to build up a large map of locations to (lat/lon)
Step3: User Table
Step4: Feature
Step5: Feature
Step6: Write as json for archive tools
|
15,288 | <ASSISTANT_TASK:>
Python Code:
from lolcrawler_util import read_key, get_summoner_info
api_key = read_key()
name = 'Doublelift'
summoner = get_summoner_info(api_key, name)
usr_id = summoner[name.lower()]['id']
print usr_id
from lolcrawler_util import get_matchlist_by_summoner
matchlist = get_matchlist_by_summoner(usr_id, api_key, rankedQueues='TEAM_BUILDER_DRAFT_RANKED_5x5',
seasons='SEASON2016')
print len(matchlist['matches'])
%matplotlib inline
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt
role_cnt = Counter()
lane_cnt = Counter()
for match in matchlist['matches']:
role_cnt[match['role']] += 1
lane_cnt[match['lane']] += 1
fig = plt.figure()
ax1 = fig.add_subplot(121)
indexes = np.arange(len(role_cnt.keys()))
width = 1
ax1.bar(indexes, role_cnt.values(), width)
plt.xticks(indexes + width * 0.5, role_cnt.keys(), rotation=-30)
plt.xlabel('Role Type')
plt.ylabel('Counts')
plt.title('Role Type of Doublelift in S6')
plt.grid(True)
ax2 = fig.add_subplot(122)
indexes = np.arange(len(lane_cnt.keys()))
width = 1
ax2.bar(indexes, lane_cnt.values(), width)
plt.xticks(indexes + width * 0.5, lane_cnt.keys())
plt.xlabel('Lane Type')
plt.title('Lane Type of Doublelift in S6')
plt.grid(True)
plt.show()
import os
import pickle
path = os.path.join(os.path.dirname('getMatchList.ipynb'), 'data')
f = open(os.path.join(path, 'Doublelift_TEAM_BUILDER_DRAFT_RANKED_5x5_SEASON2016.pickle'), 'r')
doublelift_history = pickle.load(f)
f.close()
path = os.path.join(os.path.dirname('getMatchList.ipynb'), 'data')
f = open(os.path.join(path, 'PraY8D_TEAM_BUILDER_DRAFT_RANKED_5x5_SEASON2016.pickle'), 'r')
pray_history = pickle.load(f)
f.close()
fig = plt.figure(figsize=(12, 15))
ax1 = fig.add_subplot(421)
kda = np.divide(np.array(doublelift_history.kill, dtype=np.float32) +
np.array(doublelift_history.assist, dtype=np.float32),
(np.array(doublelift_history.death, dtype=np.float32)+1))
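# KDA here is (kills + assists) / (deaths + 1); the +1 avoids division by zero
# in games with no deaths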
n, bins, patches = plt.hist(kda, 20, facecolor='green', alpha=0.75)
plt.xlabel('KDA')
plt.ylabel('Counts')
plt.title('Stats of Doublelift in Season 2016')
plt.axis([0, 25, 0, 80])
plt.grid(True)
ax2 = fig.add_subplot(423)
n, bins, patches = plt.hist(doublelift_history.gold, 20, facecolor='green', alpha=0.75)
plt.xlabel('Gold')
plt.ylabel('Counts')
plt.axis([0, 35000, 0, 60])
plt.grid(True)
ax3 = fig.add_subplot(425)
n, bins, patches = plt.hist(doublelift_history.damage, 20, facecolor='green', alpha=0.75)
plt.xlabel('Damage Dealt')
plt.ylabel('Counts')
plt.axis([0, 100000, 0, 60])
plt.grid(True)
ax4 = fig.add_subplot(427)
n, bins, patches = plt.hist(doublelift_history.damage_taken, 20, facecolor='green', alpha=0.75)
plt.xlabel('Damage Taken')
plt.ylabel('Counts')
plt.axis([0, 60000, 0, 60])
plt.grid(True)
ax1 = fig.add_subplot(422)
kda = np.divide(np.array(pray_history.kill, dtype=np.float32)+
np.array(pray_history.assist, dtype=np.float32),
(np.array(pray_history.death, dtype=np.float32)+1))
n, bins, patches = plt.hist(kda, 20, facecolor='blue', alpha=0.75)
plt.xlabel('KDA')
plt.ylabel('Counts')
plt.title('Stats of Pray in Season 2016')
plt.axis([0, 25, 0, 80])
plt.grid(True)
ax2 = fig.add_subplot(424)
n, bins, patches = plt.hist(pray_history.gold, 20, facecolor='blue', alpha=0.75)
plt.xlabel('Gold')
plt.ylabel('Counts')
plt.axis([0, 35000, 0, 60])
plt.grid(True)
ax3 = fig.add_subplot(426)
n, bins, patches = plt.hist(pray_history.damage, 20, facecolor='blue', alpha=0.75)
plt.xlabel('Damage Dealt')
plt.ylabel('Counts')
plt.axis([0, 100000, 0, 60])
plt.grid(True)
ax4 = fig.add_subplot(428)
n, bins, patches = plt.hist(pray_history.damage_taken, 20, facecolor='blue', alpha=0.75)
plt.xlabel('Damage Taken')
plt.ylabel('Counts')
plt.axis([0, 60000, 0, 60])
plt.grid(True)
plt.suptitle('Stats Comparison Between Doublelift and Pray in Season 2016')
plt.show()
fig = plt.figure(figsize=(6, 5))
ax = fig.add_subplot(111)
sc1 = plt.scatter(doublelift_history.gold, doublelift_history.damage, alpha=0.5, color='green')
sc2 = plt.scatter(pray_history.gold, pray_history.damage, alpha=0.5, color='blue')
plt.xlabel('Gold')
plt.ylabel('Damage')
plt.title('Gold to Damage Comparison')
plt.legend((sc1, sc2), ('Doublelift', 'Pray'), scatterpoints=1, loc='upper left')
plt.grid(True)
ax.set_xlim(xmin=0)
ax.set_ylim(ymin=0)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
 Step1: OK, we have his ID now; next we need his match history. Let's find all his matches from the 2016 season in the team builder draft 5v5 ranked queue.
 Step2: 381 games; that's almost the total number of games I've played on the NA server. Now let's try to analyze those matches.
 Step3: OK, he likes to play AD carry in the bottom lane, just as expected.
Step4: Hmm... Looks pretty!
|
15,289 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
%matplotlib inline
from sklearn.datasets import load_digits
from matplotlib import pyplot as plt
import numpy as np
np.random.seed(42) # for reproducibility
digits = load_digits()
X = digits.data
y = digits.target
zeroes = [X[i] for i in range(len(y)) if y[i] == 0] # all 64-dim lists with label '0'
ones = [X[i] for i in range(len(y)) if y[i] == 1] # all 64-dim lists with label '1'
both = zeroes + ones
labels = [0] * len(zeroes) + [1] * len(ones)
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression() # clf is code speak for 'classifier'
clf.fit(X=both, y=labels)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(both, labels, test_size=0.3)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
clf.predict(X_test)
def print_proba_table(prob_list, stride=1):
mnist_classes = [i for i in range(len(prob_list[0]))]
print("Class:", *mnist_classes, sep="\t")
print("index", *["---" for i in range(len(mnist_classes))], sep="\t")
counter = 0
for prob in prob_list[::stride]:
print(counter*stride, *[round(prob[i], 3) for i in range(len(mnist_classes))], sep="\t")
counter += 1
print_proba_table(clf.predict_proba(X_test), stride=4)
from sklearn.decomposition import PCA
pca = PCA(2)
Xproj = pca.fit_transform(X)
plt.scatter(Xproj.T[0], Xproj.T[1], c=y, alpha=0.5)
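# Aside (a sketch, not from the original notebook): the text below also mentions
# non-linear dimensionality reduction; t-SNE is one such alternative to compare
# against the linear PCA projection above.
# from sklearn.manifold import TSNE
# Xproj_tsne = TSNE(n_components=2, random_state=42).fit_transform(X)
# plt.scatter(Xproj_tsne.T[0], Xproj_tsne.T[1], c=y, alpha=0.5)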
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
print_proba_table(clf.predict_proba(X_test), stride=10)
uncertain_indices = []
prob = clf.predict_proba(X_test)
for i in range(len(prob)):
# number of classes with > 0.45 confidence
contender_count = sum([1 if p > 0.45 else 0 for p in prob[i]])
if contender_count == 2:
uncertain_indices.append(i)
f, ax = plt.subplots(5, 3, sharex=False, sharey=True)
f.set_size_inches(9, 15)
predictions = clf.predict(X_test)
for i in range(5):
for j in range(3):
ax[i, j].set_xlabel(r"$\^y = $"+str(predictions[uncertain_indices[3*i + j]])
+ r", $y = $"+str(y_test[uncertain_indices[3*i+j]]), size='large')
ax[i, j].imshow(X_test[uncertain_indices[3*i + j]].reshape(8, 8),
cmap='gray', interpolation='none')
f.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
 Step1: We often use $\boldsymbol{X}$ to represent a dataset of input vectors. The $i^{th}$ input vector in $X$ is notated $X_i$, though oftentimes when iterating through our dataset (like in a summation) we will call our datapoints $x \in X$ and write the $i^{th}$ input vector as $x^{(i)}$. The $j^{th}$ component of the $i^{th}$ input vector is written $x^{(i)}_j$.
Step2: Supervised learning is an area of study within machine learning that entails passing an input vector into a model and outputting a label. Supervised learning is further broken down into classification tasks, in which the label $y$ is taken from some finite set of objects like {red, green blue} or {0, 1, 2, ..., 9} and regression tasks, in which the label $y$ is taken from an infinite set, usually the set of real numbers $\mathbb{R}$. We do this by training our model on $X$, given the correct labels $y$. When we train our model, our model is learning a function that maps from input vectors $x$ to output labels $y$ - hence the name machine learning.
Step3: A lot just happened in those three short lines. Let's step through it line by line
Step4: Amazing! Our predictor was able to predict the labels of the test set with 100% accuracy!
Step5: clf.predict tells us the actual predictions made on the test set.
Step6: clf.predict_proba tells us how confident our predictor is for each label that that is the correct label for the input. The above table, along with the score, tells us that this was a very easy classification task for our predictor.
Step7: Here's a 2D projection of the entire digits dataset using PCA, yikes! By the way, PCA is a linear dimensionality reduction technique, so it gives us a rough idea of what a linear classifier like logistic regression has to deal with. There also exist non-linear dimensionality reduction techniques, which let you project on non-linear manifolds like spheres, instead of linear manifolds like hyperplanes.
Step8: Not so easy now, is it? But is 94.8% accuracy good "enough"? Depends on your application.
 Step9: From this table we can tell that for a good portion of our digits the classifier had very high confidence in the predicted class label, even with 10 different classes to choose from. But for some test digits the predictor spread at least a tenth of a percent of its confidence across four different classes. And from clf.score we know that our predictor got roughly one digit wrong for every 20 digits predicted.
|
15,290 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import sys
print('Python version:')
print(sys.version)
print('Numpy version:')
print(np.__version__)
import sklearn
print('Sklearn version:')
print(sklearn.__version__)
#This is a code cell
#Jupyter allows you to run code within the browser
#try running this cell
x = 5+15+1000
print(x)
print(np.sum(np.arange(1,21)))
x = np.arange(0,2*np.pi,2*np.pi/80.0)
y = np.sin(x)
plt.plot(x,y)
n = 2 # Number of periods
x = np.arange(0,n*2*np.pi,n*2*np.pi/(80.0*n))
y = np.sin(x)
plt.plot(x,y)
l = [] #creating an empty list
print('Empty list:')
print(l)
l.append(5) #appending 5 to the end of the list
print('List containing 5:')
print(l)
l = [1,2,3,'hello','world'] #creating a list containing 5 items
print('List with items:')
print(l)
l.extend([4,5,6]) #appending elements from another list to l
print('List with more items:')
print(l)
print('Printing fourth element in list:')
print(l[3]) #counting starts at 0
print('Printing all elements up until third element in list:')
print(l[:3])
print('Print the last 3 elements in list:')
print(l[-3:])
d = {} #creating empty dictionary
print('Empty dictionary:')
print(d)
d['author'] = 'Shakespeare' #adding an item to the dictionary
print('Dictionary with one element')
print(d)
#adding more items:
d['year'] = 1596
d['title'] = 'The merchant of Venice'
#Accessing items in dictionary:
print_string = d['title'] + ' was written by ' + d['author'] + ' in the year ' + str(d['year'])
print(print_string)
list_of_numbers = [1.,2.,3.,4.,5.,4.,3.,2.,1.]
incremented_list_of_numbers = []
for i in range(len(list_of_numbers)):
number = list_of_numbers[i]
incremented_list_of_numbers.append(number+1)
print('Incremented list:')
print(incremented_list_of_numbers)
#More elegantly
incremented_list_of_numbers2 = []
for number in list_of_numbers:
incremented_list_of_numbers2.append(number+1)
print('Second incremented list:')
print(incremented_list_of_numbers2)
#We can express the for-loop above also so-called in-line:
#Most elegantly
incremented_list_of_numbers3 = [number + 1 for number in list_of_numbers]
print('Third incremented list:')
print(incremented_list_of_numbers3)
#looping over dictionaries
for key in d:
value = d[key]
print(key,value)
# Task #4:
mean_of_numbers = 0.
for i in range(len(list_of_numbers)):
number = list_of_numbers[i]
mean_of_numbers += number
mean_of_numbers = mean_of_numbers/len(list_of_numbers)
print("The mean with the first method is:",mean_of_numbers)
# Task #5:
print([n**2 for n in list_of_numbers])
# Task #6:
print([k for k in d])
# Task #7:
print([d[k] for k in d])
f = open('testdata.txt')
parsed_lines = []
for line in f:
l = line.split(',') #create a list by splitting the string line at every ','
l = [float(x) for x in l] #in-line for-loop that casts strings to floats
parsed_lines.append(l)
plt.plot(parsed_lines[0])
plt.imshow(np.array(parsed_lines).T)
data_matrix = np.array(parsed_lines)
print(data_matrix[:12,:10]) #print the 10 first columns of the 12 first rows
plt.plot(data_matrix[0,:] - data_matrix[-1,:]) #plots the difference between the first and last row
print(data_matrix.shape) #shows the dimensions of the data_matrix, 200 rows, 80 columns
# Task #8
plt.figure()
plt.plot(data_matrix[25,:])
# Task #9
plt.figure()
plt.plot(data_matrix[:,25])
plt.plot(np.mean(data_matrix, axis=0)) #mean row
plt.plot(np.mean(data_matrix, axis=1)) #mean column
# Task #10
plt.figure()
plt.plot(np.max(data_matrix,axis=1))
# Task #11
plt.figure()
plt.plot(np.min(data_matrix,axis=0))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Short Introduction to Python and Jupyter
Step2: Task #3 [10%]
Step3: Plotting
Step4: Task #4 [10%]
Step5: Data structures
Step6: Accessing data in list
Step7: Dictionaries
Step8: Loops
Step9: Quick exercises
Step10: Loading data
Step11: Numpy arrays
Step12: Quick exercises
Step13: Broadcasting over axis
Step14: Quick exercises
|
15,291 | <ASSISTANT_TASK:>
Python Code:
labVersion = 'cs190_week4_v_1_3'
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'bear')] = 0
sampleOHEDictManual[(0,'cat')] = 1
sampleOHEDictManual[(0,'mouse')] = 2
sampleOHEDictManual[(1,'black')] = 3
sampleOHEDictManual[(1,'tabby')] = 4
sampleOHEDictManual[(2,'mouse')] = 5
sampleOHEDictManual[(2,'salmon')] = 6
# TEST One-hot-encoding (1a)
from test_helper import Test
Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],
'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',
"incorrect value for sampleOHEDictManual[(0,'bear')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],
'356a192b7913b04c54574d18c28d46e6395428ab',
"incorrect value for sampleOHEDictManual[(0,'cat')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
'da4b9237bacccdf19c0760cab7aec4a8359010b0',
"incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
'77de68daecd823babbb58edb1c8e14d7106e83bb',
"incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
'1b6453892473a467d07372d45eb05abc2031647a',
"incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
"incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
'c1dfd96eea8cc2b62785275bca38ac261256e278',
"incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
'incorrect number of keys in sampleOHEDictManual')
import numpy as np
from pyspark.mllib.linalg import SparseVector
# TODO: Replace <FILL IN> with appropriate code
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(aDense.size, range(aDense.size), aDense)
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(bDense.size, range(bDense.size), bDense)
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)
# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
'dot product of bDense and w should equal dot product of bSparse and w')
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
sampleOneOHEFeatManual = SparseVector(7, [(2, 1), (3, 1)])
sampleTwoOHEFeatManual = SparseVector(7, [(1, 1), (4, 1), (5,1)])
sampleThreeOHEFeatManual = SparseVector(7, [(0, 1), (3, 1), (6,1)])
# TEST OHE Features as sparse vectors (1c)
Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),
'sampleOneOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),
'sampleTwoOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),
'sampleThreeOHEFeatManual needs to be a SparseVector')
Test.assertEqualsHashed(sampleOneOHEFeatManual,
'ecc00223d141b7bd0913d52377cee2cf5783abd6',
'incorrect value for sampleOneOHEFeatManual')
Test.assertEqualsHashed(sampleTwoOHEFeatManual,
'26b023f4109e3b8ab32241938e2e9b9e9d62720a',
'incorrect value for sampleTwoOHEFeatManual')
Test.assertEqualsHashed(sampleThreeOHEFeatManual,
'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',
'incorrect value for sampleThreeOHEFeatManual')
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
        SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
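    # look up the OHE column index for each observed (featureID, value) pair;
    # every active column gets the value 1.0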
OHE = []
for rawFeat in rawFeats:
OHE.append((OHEDict[rawFeat], 1.0))
return SparseVector(numOHEFeats, OHE)
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = len(sampleOHEDictManual)
# Run oneHotEnoding on sampleOne
sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)
print sampleOneOHEFeat
# TEST Define an OHE Function (1d)
Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,
'sampleOneOHEFeat should equal sampleOneOHEFeatManual')
Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),
'incorrect value for sampleOneOHEFeat')
Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,
numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),
'incorrect definition for oneHotEncoding')
# TODO: Replace <FILL IN> with appropriate code
sampleOHEData = sampleDataRDD.map(lambda x: oneHotEncoding(x, sampleOHEDictManual, len(sampleOHEDictManual)))
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')
Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),
'incorrect OHE for first sample')
Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),
'incorrect OHE for second sample')
Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),
'incorrect OHE for third sample')
# TODO: Replace <FILL IN> with appropriate code
sampleDistinctFeats = (sampleDataRDD
.flatMap(lambda x: x)
.distinct())
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'incorrect value for sampleDistinctFeats')
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDict = (sampleDistinctFeats
.zipWithIndex()
.collectAsMap())
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDict has unexpected keys')
Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
# TODO: Replace <FILL IN> with appropriate code
def createOneHotDict(inputData):
Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
unique integers.
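    # flatten to individual (featureID, value) pairs, keep the distinct ones,
    # number them 0..N-1 with zipWithIndex, and collect the result as a dict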
return (inputData
.flatMap(lambda x: x)
.distinct()
.zipWithIndex()
.collectAsMap())
sampleOHEDictAuto = createOneHotDict(sampleDataRDD)
print sampleOHEDictAuto
# TEST Automated creation of an OHE dictionary (2c)
Test.assertEquals(sorted(sampleOHEDictAuto.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDictAuto has unexpected keys')
Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),
'sampleOHEDictAuto has unexpected values')
# Run this code to view Criteo's agreement
from IPython.lib.display import IFrame
IFrame("http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/",
600, 350)
# TODO: Replace <FILL IN> with appropriate code
# Just replace <FILL IN> with the url for dac_sample.tar.gz
import glob
import os.path
import tarfile
import urllib
import urlparse
# Paste url, url should end with: dac_sample.tar.gz
url = '<FILL IN>'
url = url.strip()
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
inputDir = os.path.split(fileName)[0]
def extractTar(check = False):
# Find the zipped archive and extract the dataset
tars = glob.glob('dac_sample*.tar.gz*')
if check and len(tars) == 0:
return False
if len(tars) > 0:
try:
tarFile = tarfile.open(tars[0])
except tarfile.ReadError:
if not check:
print 'Unable to open tar.gz file. Check your URL.'
return False
tarFile.extract('dac_sample.txt', path=inputDir)
print 'Successfully extracted: dac_sample.txt'
return True
else:
print 'You need to retry the download with the correct url.'
print ('Alternatively, you can upload the dac_sample.tar.gz file to your Jupyter root ' +
'directory')
return False
if os.path.isfile(fileName):
print 'File is already available. Nothing to do.'
elif extractTar(check = True):
print 'tar.gz file was already available.'
elif not url.endswith('dac_sample.tar.gz'):
print 'Check your download url. Are you downloading the Sample dataset?'
else:
# Download the file and store it in the same directory as this notebook
try:
urllib.urlretrieve(url, os.path.basename(urlparse.urlsplit(url).path))
except IOError:
print 'Unable to download and store: {0}'.format(url)
extractTar()
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print rawData.take(1)
# TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
# Cache the data
rawTrainData.cache()
rawValidationData.cache()
rawTestData.cache()
nTrain = rawTrainData.count()
nVal = rawValidationData.count()
nTest = rawTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print rawData.take(1)
# TEST Loading and splitting the data (3a)
Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),
'you must cache the split data')
Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain')
Test.assertEquals(nVal, 10075, 'incorrect value for nVal')
Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
# TODO: Replace <FILL IN> with appropriate code
def parsePoint(point):
"""Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
list: A list of (featureID, value) tuples."""
out = []
values = point.split(',')[1:]
for i in range(len(values)):
out.append((i, values[i]))
return out
parsedTrainFeat = rawTrainData.map(parsePoint)
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)
.distinct()
.map(lambda x: (x[0], 1))
.reduceByKey(lambda x, y: x + y)
.sortByKey()
.collect())
print numCategories[2][1]
# TEST Extract features (3b)
Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')
Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
# TODO: Replace <FILL IN> with appropriate code
ctrOHEDict = createOneHotDict(parsedTrainFeat)
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')
Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
from pyspark.mllib.regression import LabeledPoint
# TODO: Replace <FILL IN> with appropriate code
def parseOHEPoint(point, OHEDict, numOHEFeats):
"""Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
raw features based on the provided OHE dictionary."""
label = point.split(',', 1)[0]
features = parsePoint(point)
OHEfeatures = oneHotEncoding(features, OHEDict, numOHEFeats)
return LabeledPoint(label, OHEfeatures)
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print OHETrainData.take(1)
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')
Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
def bucketFeatByCount(featCount):
"""Bucket the counts by powers of two."""
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda (k, v): k != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print featCountsBuckets
import matplotlib.pyplot as plt
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
"""Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
"""Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0."""
OHE = []
for rawFeat in rawFeats:
if rawFeat in OHEDict:
OHE.append((OHEDict[rawFeat], 1.0))
return SparseVector(numOHEFeats, OHE)
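# Optional sanity check (a sketch, assuming sampleOne and sampleOHEDict from the earlier
# cells are still in scope): both features of sampleOne are in the dictionary, so this
# should print a SparseVector of length 7 with two entries equal to 1.0.
print oneHotEncoding(sampleOne, sampleOHEDict, len(sampleOHEDict))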
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print OHEValidationData.take(1)
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# TODO: Replace <FILL IN> with appropriate code
model0 = LogisticRegressionWithSGD.train(OHETrainData, numIters, stepSize,
regParam = regParam,
regType = regType,
intercept = includeIntercept)
sortedWeights = sorted(model0.weights)
print sortedWeights[:5], model0.intercept
# TEST Logistic regression (4a)
Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')
Test.assertTrue(np.allclose(sortedWeights[0:5],
[-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,
-0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')
# TODO: Replace <FILL IN> with appropriate code
from math import log
def computeLogLoss(p, y):
"""Calculates the value of log loss for a given probability and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon) from it.
Args:
p (float): A probability between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
float: The log loss value."""
epsilon = 10e-12
if y == 1:
logLoss = -log(p + epsilon)
else:
logLoss = -log(1 - p + epsilon)
return logLoss
print computeLogLoss(.5, 1)
print computeLogLoss(.5, 0)
print computeLogLoss(.99, 1)
print computeLogLoss(.99, 0)
print computeLogLoss(.01, 1)
print computeLogLoss(.01, 0)
print computeLogLoss(0, 1)
print computeLogLoss(1, 1)
print computeLogLoss(1, 0)
# TEST Log loss (4b)
Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],
[0.69314718056, 0.0100503358535, 4.60517018599]),
'computeLogLoss is not correct')
Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],
[25.3284360229, 1.00000008275e-11, 25.3284360229]),
'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
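# Quick arithmetic check of the asserted values: -log(0.5) ~= 0.693, and with
# epsilon = 10e-12 = 1e-11 the bounded cases give -log(1e-11) ~= 25.328, which is why
# computeLogLoss(0, 1) and computeLogLoss(1, 0) both evaluate to roughly 25.328 above.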
# TODO: Replace <FILL IN> with appropriate code
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = OHETrainData.map(lambda x: x.label).mean()
print classOneFracTrain
logLossTrBase = OHETrainData.map(lambda x: computeLogLoss(classOneFracTrain, x.label)).mean()
print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase)
# TEST Baseline log loss (4c)
Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')
Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
# TODO: Replace <FILL IN> with appropriate code
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
"""Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
float: A probability between 0 and 1."""
rawPrediction = x.dot(w) + intercept
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
return 1.0/(1.0 + exp(-rawPrediction))
trainingPredictions = OHETrainData.map(lambda x: getP(x.features, model0.weights, model0.intercept))
print trainingPredictions.take(5)
# TEST Predicted probability (4d)
Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),
'incorrect value for trainingPredictions')
# TODO: Replace <FILL IN> with appropriate code
def evaluateResults(model, data):
"""Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Returns:
float: Log loss for the data."""
def evaluateResult(model, point):
# Calculates the log loss for the individual data points given the model.
prob = getP(point.features, model.weights, model.intercept)
return computeLogLoss(prob, point.label)
return data.map(lambda x: evaluateResult(model, x)).mean()
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
# TODO: Replace <FILL IN> with appropriate code
logLossValBase = OHEValidationData.map(lambda x: computeLogLoss(classOneFracTrain, x.label)).mean()
logLossValLR0 = evaluateResults(model0, OHEValidationData)
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')
Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
"""Calculate a feature dictionary for an observation's features based on hashing.
Note:
Use printMapping=True for debug purposes and to better understand how the hashing works.
Args:
numBuckets (int): Number of buckets to use as features.
rawFeats (list of (int, str)): A list of features for an observation. Represented as
(featureID, value) tuples.
printMapping (bool, optional): If true, the mappings of featureString to index will be
printed.
Returns:
dict of int to float: The keys will be integers which represent the buckets that the
features have been hashed to. The value for a given key will contain the count of the
(featureID, value) tuples that have hashed to that key."""
mapping = {}
for ind, category in rawFeats:
featureString = category + str(ind)
mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)
if(printMapping): print mapping
sparseFeatures = defaultdict(float)
for bucket in mapping.values():
sparseFeatures[bucket] += 1.0
return dict(sparseFeatures)
# Reminder of the sample values:
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
# Use four buckets
sampOneFourBuckets = hashFunction(4, sampleOne, True)
sampTwoFourBuckets = hashFunction(4, sampleTwo, True)
sampThreeFourBuckets = hashFunction(4, sampleThree, True)
# Use one hundred buckets
sampOneHundredBuckets = hashFunction(100, sampleOne, True)
sampTwoHundredBuckets = hashFunction(100, sampleTwo, True)
sampThreeHundredBuckets = hashFunction(100, sampleThree, True)
print '\t\t 4 Buckets \t\t\t 100 Buckets'
print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)
print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)
print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)
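# Note: with a small number of buckets, distinct (featureID, value) pairs can hash to the
# same bucket; the bucket's value then counts how many features landed there, which is why
# hashed features are stored as counts (floats) rather than strictly binary indicators.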
# TEST Hash function (5a)
Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')
Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},
'incorrect value for sampThreeHundredBuckets')
# TODO: Replace <FILL IN> with appropriate code
def parseHashPoint(point, numBuckets):
"""Create a LabeledPoint for this observation using hashing.
Args:
point (str): A comma separated string where the first value is the label and the rest are
features.
numBuckets: The number of buckets to hash to.
Returns:
LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
features."""
label = point.split(',', 1)[0]
features = parsePoint(point)
hashfeatures = SparseVector(numBuckets, hashFunction(numBuckets, features))
return LabeledPoint(label, hashfeatures)
numBucketsCTR = 2 ** 15
hashTrainData = rawTrainData.map(lambda x: parseHashPoint(x, numBucketsCTR))
hashTrainData.cache()
hashValidationData = rawValidationData.map(lambda x: parseHashPoint(x, numBucketsCTR))
hashValidationData.cache()
hashTestData = rawTestData.map(lambda x: parseHashPoint(x, numBucketsCTR))
hashTestData.cache()
print hashTrainData.take(1)
# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTrainDataLabelSum = sum(hashTrainData
.map(lambda lp: lp.label)
.take(100))
hashValidationDataFeatureSum = sum(hashValidationData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashValidationDataLabelSum = sum(hashValidationData
.map(lambda lp: lp.label)
.take(100))
hashTestDataFeatureSum = sum(hashTestData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTestDataLabelSum = sum(hashTestData
.map(lambda lp: lp.label)
.take(100))
Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')
# TODO: Replace <FILL IN> with appropriate code
def computeSparsity(data, d, n):
"""Calculates the average sparsity for the features in an RDD of LabeledPoints.
Args:
data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
d (int): The total number of features.
n (int): The number of observations in the RDD.
Returns:
float: The average of the ratio of features in a point to total features."""
return data.map(lambda x: len(x.features.indices) / float(d)).mean()
averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)
print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)
# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
'incorrect value for averageSparsityHash')
numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# TODO: Replace <FILL IN> with appropriate code
stepSizes = [1, 10]
regParams = [1e-6, 1e-3]
for stepSize in stepSizes:
for regParam in regParams:
model = (LogisticRegressionWithSGD
.train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType,
intercept=includeIntercept))
logLossVa = evaluateResults(model, hashValidationData)
print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'
.format(stepSize, regParam, logLossVa))
if (logLossVa < bestLogLoss):
bestModel = model
bestLogLoss = logLossVa
print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, bestLogLoss))
# TEST Logistic model with hashed features (5d)
Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')
from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results. Eliminate the time required to run 36 models
stepSizes = [3, 6, 9, 12, 15, 18]
regParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
logLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321],
[ 0.45188196, 0.45188306, 0.4518941, 0.4520051, 0.45316284, 0.46396068],
[ 0.44886478, 0.44886613, 0.44887974, 0.44902096, 0.4505614, 0.46371153],
[ 0.44706645, 0.4470698, 0.44708102, 0.44724251, 0.44905525, 0.46366507],
[ 0.44588848, 0.44589365, 0.44590568, 0.44606631, 0.44807106, 0.46365589],
[ 0.44508948, 0.44509474, 0.44510274, 0.44525007, 0.44738317, 0.46365405]])
numRows, numCols = len(stepSizes), len(regParams)
logLoss = np.array(logLoss)
logLoss.shape = (numRows, numCols)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7),
hideLabels=True, gridWidth=0.)
ax.set_xticklabels(regParams), ax.set_yticklabels(stepSizes)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Step Size')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(logLoss,interpolation='nearest', aspect='auto',
cmap = colors)
pass
# TODO: Replace <FILL IN> with appropriate code
# Log loss for the best model from (5d)
logLossTest = evaluateResults(bestModel, hashTestData)
# Log loss for the baseline model
logLossTestBaseline = hashTestData.map(lambda x: computeLogLoss(classOneFracTrain, x.label)).mean()
print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTestBaseline, logLossTest))
# TEST Evaluate on the test set (5e)
Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438),
'incorrect value for logLossTestBaseline')
Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: (1b) Sparse vectors
Step3: (1c) OHE features as sparse vectors
Step5: (1d) Define a OHE function
Step6: (1e) Apply OHE to a dataset
Step7: Part 2
Step8: (2b) OHE Dictionary from distinct features
Step10: (2c) Automated creation of an OHE dictionary
Step11: Part 3
Step12: (3a) Loading and splitting the data
Step14: (3b) Extract features
Step15: (3c) Create an OHE dictionary from the dataset
Step17: (3d) Apply OHE to the dataset
Step20: Visualization 1
Step22: (3e) Handling unseen features
Step23: Part 4
Step25: (4b) Log loss
Step26: (4c) Baseline log loss
Step28: (4d) Predicted probability
Step30: (4e) Evaluate the model
Step31: (4f) Validation log loss
Step32: Visualization 2
Step34: Part 5
Step36: (5b) Creating hashed features
Step38: (5c) Sparsity
Step39: (5d) Logistic model with hashed features
Step40: Visualization 3
Step41: (5e) Evaluate on the test set
|
15,292 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext gremlin
import asyncio
import aiogremlin
import networkx as nx
g = nx.scale_free_graph(10)
nx.draw_networkx(g)
@asyncio.coroutine
def stream(gc):
results = []
resp = yield from gc.submit("x + x", bindings={"x": 1})
while True:
result = yield from resp.stream.read()
if result is None:
break
results.append(result)
return results
loop = asyncio.get_event_loop()
gc = aiogremlin.GremlinClient()
results = loop.run_until_complete(stream(gc))
results
loop.run_until_complete(gc.close()) # Explicitly close client!!!
%%gremlin
graph = TinkerFactory.createModern()
g = graph.traversal(standard())
g.V().has('name','marko').out('knows').values('name')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What's a graph?
Step2: Graphs are everywhere these days!
Step4: ipython-gremlin
|
15,293 | <ASSISTANT_TASK:>
Python Code:
%run "../Functions/1. Google form analysis.ipynb"
%run "../Functions/4. User comparison.ipynb"
#getAllResponders()
setAnswerTemporalities(gform)
# small sample
#allData = getAllUserVectorData( getAllUsers( rmdf152 )[:10] )
# complete set
#allData = getAllUserVectorData( getAllUsers( rmdf152 ) )
# subjects who answered the gform
allData = getAllUserVectorData( getAllResponders() )
# 10 subjects who answered the gform
#allData = getAllUserVectorData( getAllResponders()[:10] )
efficiencies = allData.loc['efficiency'].sort_values()
efficiencies.index = range(0, len(allData.columns))
efficiencies.plot(title = 'efficiency')
efficiencies2 = allData.loc['efficiency'].sort_values()
efficiencies2 = efficiencies2[efficiencies2 != 0]
efficiencies2.index = range(0, len(efficiencies2))
efficiencies2 = np.log(efficiencies2)
efficiencies2.plot(title = 'efficiency log')
maxChapter = allData.loc['maxChapter'].sort_values()
maxChapter.index = range(0, len(allData.columns))
maxChapter.plot(title = 'maxChapter')
len(allData.columns)
userIds = getAllResponders()
_source = correctAnswers
# _source is used as correction source, if we want to include answers to these questions
#def getAllUserVectorData( userIds, _source = [] ):
# result
isInitialized = False
allData = []
f = FloatProgress(min=0, max=len(userIds))
display(f)
for userId in userIds:
#print(str(userId))
f.value += 1
if not isInitialized:
isInitialized = True
allData = getUserDataVector(userId, _source = _source)
else:
allData = pd.concat([allData, getUserDataVector(userId, _source = _source)], axis=1)
#print('done')
allData
userId
methods = ['pearson', 'kendall', 'spearman']
_allUserVectorData = allData.T
_method = methods[0]
_title='RedMetrics Correlations'
_abs=True
_clustered=False
_figsize = (20,20)
#def plotAllUserVectorDataCorrelationMatrix(
# _allUserVectorData,
# _method = methods[0],
# _title='RedMetrics Correlations',
# _abs=False,
# _clustered=False,
# _figsize = (20,20)
#):
_progress = FloatProgress(min=0, max=3)
display(_progress)
# computation of correlation matrix
_m = _method
if(not (_method in methods)):
_m = methods[0]
_correlation = _allUserVectorData.astype(float).corr(_m)
_progress.value += 1
if(_abs):
_correlation = _correlation.abs()
_progress.value += 1
# plot
if(_clustered):
sns.clustermap(_correlation,cmap=plt.cm.jet,square=True,figsize=_figsize)
else:
_fig = plt.figure(figsize=_figsize)
_ax = plt.subplot(111)
_ax.set_title(_title)
sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True)
_progress.value += 1
gform['Temporality'].unique()
allData.loc['scoreundefined'].dropna()
getAllUsers(rmdf152)[:10]
len(getAllUsers(rmdf152))
userSessionsRelevantColumns = ['customData.localplayerguid', 'sessionId']
userSessions = rmdf152[rmdf152['type']=='start'].loc[:,userSessionsRelevantColumns]
userSessions = userSessions.rename(index=str, columns={'customData.localplayerguid': 'userId'})
userSessions.head()
#groupedUserSessions = userSessions.groupby('customData.localplayerguid')
#groupedUserSessions.head()
#groupedUserSessions.describe().head()
checkpointsRelevantColumns = ['sessionId', 'customData.localplayerguid', 'type', 'section', 'userTime']
checkpoints = rmdf152.loc[:, checkpointsRelevantColumns]
checkpoints = checkpoints[checkpoints['type']=='reach'].loc[:,['section','sessionId','userTime']]
checkpoints = checkpoints[checkpoints['section'].str.startswith('tutorial', na=False)]
#checkpoints = checkpoints.groupby("sessionId")
#checkpoints = checkpoints.max()
checkpoints.head()
#assembled = userSessions.combine_first(checkpoints)
assembled = pd.merge(userSessions, checkpoints, on='sessionId', how='outer')
assembled.head()
userSections = assembled.drop('sessionId', 1)
userSections.head()
userSections = userSections.dropna()
userSections.head()
checkpoints = userSections.groupby("userId")
checkpoints = checkpoints.max()
checkpoints.head()
#userTimedSections = userSections.groupby("userId").agg({ "userTime": np.min })
#userTimedSections = userSections.groupby("userId")
userTimes = userSections.groupby("userId").agg({ "userTime": [np.min, np.max] })
userTimes["duration"] = pd.to_datetime(userTimes["userTime"]["amax"]) - pd.to_datetime(userTimes["userTime"]["amin"])
userTimes["duration"] = userTimes["duration"].map(lambda x: np.timedelta64(x, 's'))
userTimes = userTimes.sort_values(by=['duration'], ascending=[False])
userTimes.head()
sessionCount = 1
_rmDF = rmdf152
sample = gform
before = False
after = True
gfMode = False
rmMode = True
#def getAllUserVectorDataCustom(before, after, gfMode = False, rmMode = True, sessionCount = 1, _rmDF = rmdf152)
userIds = []
if (before and after):
userIds = getSurveysOfUsersWhoAnsweredBoth(sample, gfMode = gfMode, rmMode = rmMode)
elif before:
if rmMode:
userIds = getRMBefores(sample)
else:
userIds = getGFBefores(sample)
elif after:
if rmMode:
userIds = getRMAfters(sample)
else:
userIds = getGFormAfters(sample)
if(len(userIds) > 0):
userIds = userIds[localplayerguidkey]
allUserVectorData = getAllUserVectorData(userIds, _rmDF = _rmDF)
allUserVectorData = allUserVectorData.T
result = allUserVectorData[allUserVectorData['sessionsCount'] == sessionCount].T
else:
print("no matching user")
result = []
result
getAllUserVectorDataCustom(False, True)
userIdsBoth = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = True)[localplayerguidkey]
allUserVectorData = getAllUserVectorData(userIdsBoth)
allUserVectorData = allUserVectorData.T
allUserVectorData[allUserVectorData['sessionsCount'] == 1]
testUser = "3685a015-fa97-4457-ad73-da1c50210fe1"
def getScoreFromBinarized(binarizedAnswers):
gformIndices = binarizedAnswers.index.map(lambda s: int(s.split(correctionsColumnNameStem)[1]))
return pd.Series(np.dot(binarizedAnswers, np.ones(binarizedAnswers.shape[1])), index=gform.loc[gformIndices, localplayerguidkey])
#allResponders = getAllResponders()
#gf_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = False)
rm_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = False, rmMode = True)
#gfrm_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = True)
sciBinarizedBefore = getAllBinarized(_form = getRMBefores(rm_both))
sciBinarizedAfter = getAllBinarized(_form = getRMAfters(rm_both))
scoresBefore = getScoreFromBinarized(sciBinarizedBefore)
scoresAfter = getScoreFromBinarized(sciBinarizedAfter)
medianBefore = np.median(scoresBefore)
medianAfter = np.median(scoresAfter)
maxScore = sciBinarizedBefore.shape[1]
indicators = pd.DataFrame()
indicators['before'] = scoresBefore
indicators['after'] = scoresAfter
indicators['delta'] = scoresAfter - scoresBefore
indicators['maxPotentialDelta'] = maxScore - scoresBefore
for index in indicators['maxPotentialDelta'].index:
if (indicators.loc[index, 'maxPotentialDelta'] == 0):
indicators.loc[index, 'maxPotentialDelta'] = 1
indicators['relativeBefore'] = scoresBefore / medianBefore
indicators['relativeAfter'] = scoresAfter / medianBefore
indicators['relativeDelta'] = indicators['delta'] / medianBefore
indicators['realizedPotential'] = indicators['delta'] / indicators['maxPotentialDelta']
indicators['increaseRatio'] = indicators['before']
for index in indicators['increaseRatio'].index:
if (indicators.loc[index, 'increaseRatio'] == 0):
indicators.loc[index, 'increaseRatio'] = 1
indicators['increaseRatio'] = indicators['delta'] / indicators['increaseRatio']
indicators
(min(indicators['relativeBefore']), max(indicators['relativeBefore'])),\
(min(indicators['relativeDelta']), max(indicators['relativeDelta'])),\
medianBefore,\
np.median(indicators['relativeBefore']),\
np.median(indicators['relativeDelta'])\
indicatorX = 'relativeBefore'
indicatorY = 'relativeDelta'
def scatterPlotIndicators(indicatorX, indicatorY):
print(indicatorX + ' range: ' + str((min(indicators[indicatorX]), max(indicators[indicatorX]))))
print(indicatorY + ' range: ' + str((min(indicators[indicatorY]), max(indicators[indicatorY]))))
print(indicatorX + ' median: ' + str(np.median(indicators[indicatorX])))
print(indicatorY + ' median: ' + str(np.median(indicators[indicatorY])))
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(indicators[indicatorX], indicators[indicatorY])
plt.xlabel(indicatorX)
plt.ylabel(indicatorY)
# vertical line
plt.plot( [np.median(indicators[indicatorX]), np.median(indicators[indicatorX])],\
[min(indicators[indicatorY]), max(indicators[indicatorY])],\
'k-', lw=2)
# horizontal line
plt.plot( [min(indicators[indicatorX]), max(indicators[indicatorX])],\
[np.median(indicators[indicatorY]), np.median(indicators[indicatorY])],\
'k-', lw=2)
indicators.columns
scatterPlotIndicators('relativeBefore', 'relativeDelta')
scatterPlotIndicators('relativeBefore', 'realizedPotential')
scatterPlotIndicators('relativeBefore', 'increaseRatio')
scatterPlotIndicators('relativeBefore', 'relativeAfter')
scatterPlotIndicators('maxPotentialDelta', 'realizedPotential')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data vectors of users
Step2: getAllUserVectorData
Step3: Correlation Matrix
Step4: List of users and their sessions
Step5: List of sessions with their checkpoint achievements
Step6: Assembly of both
Step7: Time analysis
Step8: TODO
Step9: user progress classification
|
15,294 | <ASSISTANT_TASK:>
Python Code:
import github3
import json
from os.path import join
import pprint
import requests
from urllib.parse import urljoin
TOKEN=''
gh = github3.login(token=TOKEN)
type(gh)
url = 'https://api.github.com/orgs/jupyterhub/repos'
response = requests.get(url)
if response.status_code != 200:
# This means something went wrong.
raise RuntimeError('GET /orgs/jupyterhub/repos returned {}'.format(response.status_code))
repos = response.json()
pprint.pprint(repos)
print('The total number of repos in the organization is {}'.format(len(repos)))
# print repos
print('{0:30s} {1:20s}\n'.format('Repository name', 'open_issues_count'))
for num in range(0, len(repos)):
print('{0:30s} {1:4d}\n'.format(repos[num]['name'], repos[num]['open_issues_count']))
for num in range(0, len(repos)):
print('{0:30s} {1:50s}\n'.format(repos[num]['name'], repos[num]['description']))
print('{0:30s} {1:20s}\n'.format('Repository name', 'open_issues_count'))
for num in range(0, len(repos)):
print('{0:30s} {1:4d} {2:20s}\n'.format(repos[num]['name'], repos[num]['open_issues_count'], repos[num]['description']))
def get_issues(my_org, my_repo):
for issue in gh.iter_repo_issues(owner=my_org, repository=my_repo):
print(issue.number, issue.title)
my_org = 'jupyterhub'
my_repo = 'configurable-http-proxy'
get_issues(my_org, my_repo)
my_org = 'jupyterhub'
my_repo = 'jupyterhub'
get_issues(my_org, my_repo)
subgroup={'authenticators':['oauthenticator', 'ldapauthenticator'],
'spawners':['dockerspawner', 'sudospawner', 'kubespawner', 'batchspawner', 'wrapspawner', 'systemdspawner'],
'deployments':['jupyterhub-deploy-docker', 'jupyterhub-deploy-teaching', 'jupyterhub-deploy-hpc', 'jupyterhub-example-kerberos'],
'fundamentals':['jupyterhub', 'configurable-http-proxy', 'hubshare', 'jupyterhub-labextension'],
'community':['jupyterhub-tutorial', 'jupyterhub-2016-workshop'],
}
print(subgroup['authenticators'])
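# A possible follow-up (sketch): reuse get_issues() to list the open issues of every repo
# in one of the subgroups defined above; 'spawners' is just an example key.
for repo in subgroup['spawners']:
    print('--- {} ---'.format(repo))
    get_issues('jupyterhub', repo)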
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: GitHub Authorization
Step2: Basic API request
Step3: Issues in an organization's repos
|
15,295 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt #We might need this
#First, let us load the data
#Catalog from HSC
cat_hsc = np.loadtxt('./Catalog_HSC.csv')
x_hsc = cat_hsc[:,0]
y_hsc = cat_hsc[:,1]
#Catalog from HST
cat_hst = np.loadtxt('./Catalog_HST.csv')
x_hst = cat_hst[:,0]
y_hst = cat_hst[:,1]
#First, check the number of stars in each survey:
ns_hst = #fill in
ns_hsc = #...
#Print the result
print()
#This is a graphic representation of our data content:
%matplotlib qt
plt.title('star catalogs in COSMOS')
plt.plot(x_hsc, y_hsc, 'or', label = 'hsc catalog')
plt.plot(x_hst, y_hst, 'ob', label = 'hst catalog')
plt.legend()
plt.xlabel('ra')
plt.ylabel('dec')
plt.show()
def distance(point1, point2):
''' Returns the distance between two points with coordinates (x,y).
Parameters
----------
point1: list
2D coordinates of a point
point2: list
2D coordinates of a point
Returns
-------
d: float
the distance between point1 and point2
'''
return
point1 = [x_hst[0], y_hst[0]]
point2 = [x_hsc[0], y_hsc[0]]
print(distance(point1, point2))
# Answer should be 0.6648994838877168
def point_to_points_distance(point, coordinates):
''' Returns the distance between one point and all the points in coordinates.
Parameters
----------
point: list
2D coordinates of a point
coordinates: list
set of N 2D coordinates stored in a list with shape Nx2
Returns
-------
d: list
the distance between point and each point in coordinates in an array with size N
'''
#Declaring an empty list
d = []
for c in coordinates:
# for each point in coordinates, take the distance to point and concatenate it to d
d.append(distance(point, c))
#make d a numpy array and return it
return np.array(d)
coords = np.concatenate((x_hsc[:10,None], y_hsc[:10,None]), axis = 1)
print(point_to_points_distance(point1, coords))
# The answer should look like [0.66489948 0.4628197 0.39672485 0.43854084 0.32165335 0.30223269
# 0.65765909 0.65411548 0.6474303 0.79301678]
def your_function(coord1, coord2): # Choose an adequate name for your function
''' Returns the distance between points in two sets of coordinates.
Parameters
coord1: array
array of size Nx2 that contains the [x, y] positions of a catalog
coord2: array
array of size Mx2 that contains the [x, y] positions of a catalog
Returns
dist: array
array of size NxM that contains the euclidean distances between points in the two datasets
'''
return
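# One possible completion of the function above (a sketch using numpy broadcasting; the
# name broadcast_distances is illustrative). The distances module imported below provides
# several reference implementations to compare against.
def broadcast_distances(coord1, coord2):
    # coord1: (N, 2), coord2: (M, 2) -> pairwise differences of shape (N, M, 2)
    diff = coord1[:, None, :] - coord2[None, :, :]
    # Euclidean norm over the last axis gives the (N, M) distance matrix
    return np.sqrt((diff ** 2).sum(axis=-1))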
# In order not to spend the whole evening here, let us reduce the dataset size:
#Select stars in hsc in the frame: 150.0<x<150.1 and 2.0<y<2.1
loc_hsc = (x_hsc > 150.0) & (x_hsc < 150.1) & (y_hsc > 2.0) & (y_hsc < 2.1)
x_hsc_exp = x_hsc[loc_hsc]
y_hsc_exp = y_hsc[loc_hsc]
loc_hst = (x_hst > 150.0) & (x_hst < 150.1) & (y_hst > 2.0) & (y_hst < 2.1)
x_hst_exp = x_hst[loc_hst]
y_hst_exp = y_hst[loc_hst]
#Once you are done with the exercise, feel free to try with larger selections to see how it impacts computation time
import distances as dt
# Insert the names of your functions in the following array:
methods = [your_function, dt.double_loop, dt.with_indices, dt.one_loop, dt.one_loop_reverse, dt.scipy_version, dt.newaxis_magic]
#An empty variable to store computation time
timers = []
# Making sets of coordinates of size Nx2 to feed your functions with the right format
c2 = np.concatenate((x_hst_exp[:,None], y_hst_exp[:,None]), axis = 1)
c1 = np.concatenate((x_hsc_exp[:,None], y_hsc_exp[:,None]), axis = 1)
for f in methods:
print(f.__name__)
r = %timeit -o f(c1, c2)
timers.append(r)
#View the results:
plt.figure(figsize=(10,6))
plt.bar(np.arange(len(methods)), [r.best*1000 for r in timers], log=True) # Set log to True for logarithmic scale
plt.xticks(np.arange(len(methods))+0.2, [f.__name__ for f in methods], rotation=30)
plt.xlabel('Method')
plt.ylabel('Time (ms)')
plt.yscale('log')
plt.show()
#Let us compute the distances as we did before, but this time, with the whole dataset.
#Of course, a fast method is to be preferred
c1 = np.concatenate((x_hsc[:,None], y_hsc[:,None]), axis = 1)
c2 = np.concatenate((x_hst[:,None], y_hst[:,None]), axis = 1)
def get_match(coord_ref, coord2, rad):
'''
matches coordinates of stars between two datasets and computes the distance between the position of the stars in the 2 datasets
Parameters
coord_ref: numpy array (Nx2)
coordinates (ra, dec) of stars in a FoV from a given dataset
coord2: numpy array (Mx2)
coordinates (ra dec) of stars in the same FoV in an other dataset
rad: float
radius (deg) around stars in coord_ref where to find a corresponding star in coord2
Returns
modulus:numpy array (N')
containing the distance between matching stars
v_coord: numpy array(N',2)
coordinates in the coord_ref set of matching stars
'''
#Declare two empty arrays to store the coordinates and distances.
#...
s = np.size(coord_ref[:,0])#This is just for representation
print('number of points in reference catalog: {0}'.format(s))
#for each star in coord_ref
for i,c in enumerate(coord_ref):
#This is just here to keep track of the algorithm's progression
if i % 3000 == 0:
print('point number {0} out of {1}'.format(i, s))
#compute the distance from c to all stars in coord2
r = #...
#Find the closest star from coord 2 to c
loc = #...
#Make sure that there is only one star matching (it can happen that 2 match)
#Here I just arbitrarily pick one, but you can find a way to discard these stars
if np.size(loc) > 1:
loc = loc[0]
#record the distance between matching stars
rmin = #...
#Check whether the closest distance is smaller than rad
if #...:
#if yes, place the coordinates and the distance in an array
#... tip: use append()
return #...
# Use your function
coord , r = get_match(c1, c2, 0.3/3600.)
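# For reference, one way the blanks in get_match could be filled (a sketch; names are
# illustrative): start with two empty lists, inside the loop compute
# r = point_to_points_distance(c, coord2), take loc = np.where(r == np.min(r))[0] and
# rmin = np.min(r), keep the match with `if rmin < rad:` by appending c and rmin to the
# lists, and finally return both lists as numpy arrays in the order given in the docstring.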
#Spatial distribution of distances
plt.title('distribution of distances across the FoV')
#...
#Global representation
#...
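# A possible way to fill in the two plots above (a sketch, assuming get_match returned the
# matched coordinates in `coord` and the offsets in `r`, in degrees):
# plt.scatter(coord[:, 0], coord[:, 1], c=r * 3600, s=10); plt.colorbar(label='offset (arcsec)')
# plt.hist(r * 3600, bins=50); plt.xlabel('offset (arcsec)'); plt.ylabel('number of matches')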
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check that the loaded data are consistent with what we expect
Step2: To begin with, let's write a function that returns the algebraic distance between two points
Step3: Now let's test it by comparing the distance between the first point of each dataset.
Step4: Let's take it one step further and compare the distance between one point and a set of points
Step5: Let's test it on the first 10 points in the HSC catalog and the first point of the HST catalog
Step6: Now let's get to work. We would like to associate stars in one survey with their counterpart (if it exists) in the other survey. We will start by comparing the positions between each point of one survey to the position of each point in the other survey.
Step7: Now, let us take a look at the computation times
Step8: Identifying matching stars (optional)
Step9: Now I would like to have a representation for the work we have done that informs me about what is in my datasets. Namely, what is the error on star positions between the two datasets? We would like to have a global view of this error but also an impression of the error as a function of the position on the field. For the latter, I suggest you use the 'scatter' function from matplotlib.
|
15,296 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import kwant
%run matplotlib_setup.ipy
from matplotlib import pyplot
lat = kwant.lattice.square()
def make_lead_x(W=10, t=1):
syst = kwant.Builder(kwant.TranslationalSymmetry([-1, 0]))
syst[(lat(0, y) for y in range(W))] = 4 * t
syst[lat.neighbors()] = -t
return syst
def make_wire_with_flat_potential(W=10, L=2, t=1):
def onsite(s, V):
return (4 - V) * t
# Construct the scattering region.
sr = kwant.Builder()
sr[(lat(x, y) for x in range(L) for y in range(W))] = onsite
sr[lat.neighbors()] = -t
# Build and attach lead from both sides.
lead = make_lead_x(W, t)
sr.attach_lead(lead)
sr.attach_lead(lead.reversed())
return sr
def plot_transmission(syst, energy, params):
# Compute conductance
trans = []
for param in params:
smatrix = kwant.smatrix(syst, energy, args=[param])
trans.append(smatrix.transmission(1, 0))
pyplot.plot(params, trans)
_syst = make_wire_with_flat_potential()
kwant.plot(_syst)
_syst = _syst.finalized()
kwant.plotter.bands(_syst.leads[0])
plot_transmission(_syst, 1, np.linspace(-2, 0, 51))
from math import atan2, pi, sqrt
def rectangular_gate_pot(distance, left, right, bottom, top):
"""Compute the potential of a rectangular gate.
The gate hovers at the given distance over the plane where the
potential is evaluated.
Based on J. Appl. Phys. 77, 4504 (1995)
http://dx.doi.org/10.1063/1.359446"""
d, l, r, b, t = distance, left, right, bottom, top
def g(u, v):
return atan2(u * v, d * sqrt(u**2 + v**2 + d**2)) / (2 * pi)
def func(x, y, voltage):
return voltage * (g(x-l, y-b) + g(x-l, t-y) +
g(r-x, y-b) + g(r-x, t-y))
return func
_gate1 = rectangular_gate_pot(10, 20, 50, -50, 15)
_gate2 = rectangular_gate_pot(10, 20, 50, 25, 90)
def qpc_potential(site, V):
x, y = site.pos
return _gate1(x, y, V) + _gate2(x, y, V)
def make_barrier(pot, W=40, L=70, t=1):
def onsite(*args):
return 4 * t - pot(*args)
# Construct the scattering region.
sr = kwant.Builder()
sr[(lat(x, y) for x in range(L) for y in range(W))] = onsite
sr[lat.neighbors()] = -t
# Build and attach lead from both sides.
lead = make_lead_x(W, t)
sr.attach_lead(lead)
sr.attach_lead(lead.reversed())
return sr
qpc = make_barrier(qpc_potential)
kwant.plot(qpc);
kwant.plotter.map(qpc, lambda s: qpc_potential(s, 1));
fqpc = qpc.finalized()
kwant.plotter.bands(fqpc.leads[0]);
plot_transmission(fqpc, 0.3, np.linspace(-1, 0, 101))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: "MOSFET" toy model
Step2: The next one is slightly modified
Step3: The following function also needs to be modified slightly
Step4: Let's put the above functions to some use
Step6: Your turn!
Step7: The function below is almost like make_wire_with_flat_potential. The difference is that it can be tuned to use a general potential using its first parameter.
Step8: Let's construct the system and plot the potential. The lambda expression is used to fix the gate voltage to a particular value.
Step9: To get an idea of the involved energy scales it's useful to plot the band structure
Step10: Finally, let's plot transmission as a function of the gate voltage. We see the hallmark of the QPC
|
15,297 | <ASSISTANT_TASK:>
Python Code:
from configparser import ConfigParser
from os.path import join
from os import pardir
config = ConfigParser()
config.read(join(pardir,'src','credentials.ini'))
APP_KEY = config['twitter']['app_key']
APP_SECRET = config['twitter']['app_secret']
OAUTH_TOKEN = config['twitter']['oauth_token']
OAUTH_TOKEN_SECRET = config['twitter']['oauth_token_secret']
from twitter import oauth, Twitter, TwitterHTTPError
auth = oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
APP_KEY, APP_SECRET)
twitter_api = Twitter(auth=auth)
twitter_api.retry = True
brogrammers = ['jakevdp', 'rasbt', 'GaelVaroquaux', 'amuellerml', 'fperez_org',
'fpedregosa', 'ogrisel', 'dontusethiscode', 'randal_olson', 'tdhopper' ]
sisgrammers = ['pkafei', 'LorenaABarba', 'jessicamckellar', 'heddle317', 'diana_clarke',
'wholemilk', 'spang', 'cecilycarver', 'juliaelman', 'b0rk']
brotweets = []
for bro in brogrammers:
brotweets.extend(twitter_api.statuses.user_timeline(screen_name=bro, count=100))
sistweets = []
for sis in sisgrammers:
sistweets.extend(twitter_api.statuses.user_timeline(screen_name=sis, count=100))
import re
def clean_tweet(tweet):
"""Simplest preprocess.
Convert a tweet to lowercase and replace URLs and @usernames with generic tokens.
Args:
tweet (str): Tweet to clean.
Returns:
str: Preprocessed tweet"""
tweet = tweet.lower()
# Remove URL and replace them with a token
URL_REGEX = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
tweet = re.sub(URL_REGEX, '<url>', tweet, flags=re.MULTILINE)
# Remove usernames and replace them with a token
tweet = re.sub("@([A-Za-z0-9_]+)", "<user>", tweet)
# Remove repeated spaces
tweet = re.sub(r"\s{2,}", " ", tweet)
# If a character is repeated more than 4 time, keep only 3 repetitions.
tweet = re.sub(r'(.)\1{4,}', r'\1\1\1', tweet)
return tweet
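# Quick illustration of the cleaning steps on a made-up tweet: the URL, the username and
# the long runs of repeated characters should all be replaced or shortened.
print(clean_tweet('Loved the talk by @someone!!!!! more at https://example.com'))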
import pandas as pd
dataset = []
# Gather the text
for tweet in brotweets:
cleaned_tweet = clean_tweet(tweet['text'])
dataset.append({'id': tweet['id'], 'text': cleaned_tweet, 'class': 0})
for tweet in sistweets:
cleaned_tweet = clean_tweet(tweet['text'])
dataset.append({'id': tweet['id'], 'text': cleaned_tweet, 'class': 1})
pd_dataset = pd.DataFrame(dataset)
pd_dataset.head()
pd_dataset.to_csv('../corpora/full_dataset.csv')
pd_dataset[['class', 'id']].to_csv('../corpora/ep16.csv')
import pandas as pd
DATASET_PATH = "../corpora/full_dataset.csv"
pd_dataset = pd.DataFrame.from_csv(DATASET_PATH)
pd_dataset.head()
import nltk.data
#nltk.download()
from nltk.tokenize import TweetTokenizer, word_tokenize
from nltk.corpus import stopwords
from collections import Counter
import re
import scipy.stats as stats
print(', '.join(stopwords.words('english')[:20]))
def get_vocabulary(corpus, tokenizer):
"""Get the vocabulary of a dataset.
Get a vocabulary of a set of tweets after removing stopwords, non letters,
and replacing each number by the token <number>
Args:
corpus (list of tweets): A list of tweets.
tokenizer (function): tokenizer function. To get the tokens of each tweet.
Returns:
Counter: Vocabulary with the frequency of each word in it."""
stop_words = stopwords.words('english')
# Remove puntuation marks
no_punks = [re.sub(r'\W', ' ', tweet) for tweet in corpus]
# Tokenize and remove stop words
clean_tokens = []
for tweet in no_punks:
# Replace different numbers with a token
tweet = re.sub(r"\.\d+\s*", ".<number> ", tweet)
tweet = re.sub(r"\d+\s*", " <number> ", tweet)
tokens = tokenizer(tweet)
tokens = [token for token in tokens if token not in stop_words]
clean_tokens.extend(tokens)
# Build the vocabulary
return Counter(clean_tokens)
tknzr = TweetTokenizer()
brotweets = pd_dataset[pd_dataset['class'] == 0]['text'].tolist()
sistweets = pd_dataset[pd_dataset['class'] == 1]['text'].tolist()
brocabulary = get_vocabulary(brotweets, tknzr.tokenize)
siscabulary = get_vocabulary(sistweets, tknzr.tokenize)
brocabulary.most_common(10)
siscabulary.most_common(10)
from bokeh.plotting import figure, show, vplot, ColumnDataSource
from bokeh.io import output_notebook
from bokeh.models import HoverTool
output_notebook()
MOST_COMMON = 50
mc_brocavulary = brocabulary.most_common(int(MOST_COMMON/2))
mc_siscavulary = siscabulary.most_common(int(MOST_COMMON/2))
fr_brocavulary, fr_siscavulary = [], []
most_common_words = mc_brocavulary + mc_siscavulary
words = list(set(word for word, _ in most_common_words))
for word in words:
if word in brocabulary:
fr_brocavulary.append(brocabulary[word])
else:
fr_brocavulary.append(0)
if word in siscabulary:
fr_siscavulary.append(siscabulary[word])
else:
fr_siscavulary.append(0)
import numpy as np
range_words=list(range(1,len(words)+1))
source = ColumnDataSource(data=dict(range_words=range_words,
words=words,
freq_true=fr_brocavulary,
freq_false=fr_siscavulary))
hover = HoverTool()
hover.point_policy = "follow_mouse"
hover = HoverTool(
tooltips=[
("words", "@words"),
]
)
TOOLS="pan,wheel_zoom,box_zoom,reset,save"
p = figure(title = "Vocabulary gender", x_range=words, tools=[TOOLS, hover])
p.xaxis.axis_label = 'Words'
p.yaxis.axis_label = 'Frequency'
p.circle('range_words', 'freq_true', source=source, fill_alpha=0.2, size=10, color="navy")
p.circle('range_words', 'freq_false', source=source, fill_alpha=0.2, size=10, color='red')
p.xaxis.major_label_orientation = np.pi/4
show(p)
tweet_lens_bro = [len(tweet) for tweet in brotweets]
hist_bro, edges_bro = np.histogram(tweet_lens_bro, density=True, bins=20)
tweet_lens_bro.sort()
tweet_lens_sis = [len(tweet) for tweet in sistweets]
hist_sis, edges_sis = np.histogram(tweet_lens_sis, density=True, bins=20)
tweet_lens_sis.sort()
p = figure(title="")
p.quad(top=hist_bro, bottom=0, left=edges_bro[:-1], right=edges_bro[1:],
fill_color="navy", line_color="#033649", fill_alpha=0.3)
p.quad(top=hist_sis, bottom=0, left=edges_sis[:-1], right=edges_sis[1:],
fill_color="red", line_color="#033649", fill_alpha=0.3)
sigma = np.std(tweet_lens_bro)
mu = np.mean(tweet_lens_bro)
pdf = stats.norm.pdf(tweet_lens_bro, mu, sigma)
p.line(tweet_lens_bro, pdf, line_color="navy", line_width=6, alpha=0.7, legend="PDF")
sigma = np.std(tweet_lens_sis)
mu = np.mean(tweet_lens_sis)
pdf = stats.norm.pdf(tweet_lens_sis, mu, sigma)
p.line(tweet_lens_sis, pdf, line_color="red", line_width=6, alpha=0.7, legend="PDF")
p.xaxis.axis_label = 'len(tweets)'
p.yaxis.axis_label = '# tweets'
show(p)
from sklearn import cross_validation
X = pd_dataset['text'].tolist()
y = pd_dataset['class'].tolist()
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
print("Examples in train: {}".format(len(X_train)))
print("Examples in test: {}".format(len(X_test)))
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = "word",
tokenizer = None,
preprocessor = None,
stop_words = None,
ngram_range=(1, 1),
max_features = 5000)
# Fit the train
BOW_train = vectorizer.fit_transform(X_train)
BOW_train = BOW_train.toarray()
# Transform the test
BOW_test = vectorizer.transform(X_test)
BOW_test = BOW_test.toarray()
print('Train: {{0|1}}^({}x{})'.format(BOW_train.shape[0], BOW_train.shape[1]))
print('Test: {{0|1}}^({}x{})'.format(BOW_test.shape[0], BOW_test.shape[1]))
vocab = vectorizer.get_feature_names()
print('\nVOCABULARY EXTRACT: {}'.format(', '.join(vocab[500:600])))
np.set_printoptions(threshold=np.nan)
print('\nTWEET REPRESENTATION: {}'.format(BOW_train[0]))
from os.path import join
import numpy as np
def load_glove_dict(glove_filepath):
"""Build a dictionary with GloVe values.
Read the GloVe resource and build a dictionary where the key is the word
and the value its GloVe representation
Args:
glove_filepath (str): Path where the GloVe resource is.
Returns:
dict: Dictionary with GloVe data."""
glove_embeddings = {}
# TODO: check if the filepath exists
with open(glove_filepath) as glove_file:
for line in glove_file:
split_line = line.split()
word, vector = split_line[0], np.asarray(split_line[1:])
glove_embeddings[word] = vector
return glove_embeddings
GLOVE_PATH = '../../../resources/GloVe/twitter_dataset'
embedding_size = '25'
glove_file = join(GLOVE_PATH, 'glove.twitter.27B.' + embedding_size + 'd.txt')
glove_25 = load_glove_dict(glove_file)
embedding_size = '100'
glove_file = join(GLOVE_PATH, 'glove.twitter.27B.' + embedding_size + 'd.txt')
glove_100 = load_glove_dict(glove_file)
def get_most_common_vocab(most_common, vocabulary):
"""Get the most common words in a vocabulary
Args:
most_common (int): Number of most common word that want to be retrieved.
vocabulary (Counter): Vocabulary with words and frequencies of each word.
Returns:
set: set of most common words in this vocabulary."""
most_common_words = vocabulary.most_common(int(most_common))
return set(word for word, _ in most_common_words)
def get_words_to_plot(most_common, vocabulary, dictionary):
words_to_plot = {}
unseen_words = []
for word in get_most_common_vocab(most_common, vocabulary):
if word in dictionary:
words_to_plot[word] = dictionary[word]
else:
unseen_words.append(word)
return words_to_plot, unseen_words
from sklearn.manifold import TSNE
def plot_tsne(dictionary, most_common):
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
words_to_plot_bros, unseen_words_bros = get_words_to_plot(most_common, brocabulary, dictionary)
words_to_plot_sis, unseen_words_sis = get_words_to_plot(most_common, siscabulary, dictionary)
low_dim_embs_bros = tsne.fit_transform(list(words_to_plot_bros.values()))
low_dim_embs_sis = tsne.fit_transform(list(words_to_plot_sis.values()))
words_bros = list(words_to_plot_bros.keys())
range_words_bros=list(range(1,len(words_bros)+1))
source_bros = ColumnDataSource(data=dict(range_words=range_words_bros,
words_bros=words_bros,
x=low_dim_embs_bros[:,0],
y=low_dim_embs_bros[:,1]))
words_sis = list(words_to_plot_sis.keys())
range_words_sis = list(range(1,len(words_sis)+1))
source_sis = ColumnDataSource(data=dict(range_words=range_words_sis,
words_sis=words_sis,
x=low_dim_embs_sis[:,0],
y=low_dim_embs_sis[:,1]))
hover = HoverTool()
hover.point_policy = "follow_mouse"
hover = HoverTool(
tooltips=[
("words_bros", "@words_bros"),
("words_sis", "@words_sis"),
]
)
TOOLS="pan,wheel_zoom,box_zoom,reset,save"
p = figure(title = "Word visualization", tools=[TOOLS, hover])
p.circle('x', 'y', source=source_bros, fill_alpha=0.2, size=10, color='navy')
p.circle('x', 'y', source=source_sis, fill_alpha=0.2, size=10, color='red')
show(p)
return set(unseen_words_bros + unseen_words_sis)
unseen_words = plot_tsne(glove_100, 1000)
print(', '.join(unseen_words))
def tokenize_dataset(tokenizer, dataset):
tokenize_dataset = []
for tweet in dataset:
# Replace different numbers with a token
tweet = re.sub(r"\.\d+\s*", ".<number> ", tweet)
tweet = re.sub(r"\d+\s*", " <number> ", tweet)
tokens = tokenizer(tweet)
tokenize_dataset.append(tokens)
return tokenize_dataset
X_train_tokenized = tokenize_dataset(TweetTokenizer().tokenize, X_train)
X_test_tokenized = tokenize_dataset(TweetTokenizer().tokenize, X_test)
def get_embeddings(dataset, dictionary, embedding_size):
X_embeddings = []
for tweet in dataset:
tweet_embeddings = []
for word in tweet:
if word in dictionary:
tweet_embeddings.append(dictionary[word])
if not tweet_embeddings:
tweet_embeddings.append(np.zeros(embedding_size))
# Each tweet has a different number of words but ML techniques require fixed-size inputs, so average the word vectors.
X_embeddings.append(np.mean(np.asarray(tweet_embeddings, dtype=np.float32), axis=0))
return X_embeddings
X_train_GloVe = get_embeddings(X_train_tokenized, glove_100, 100)
X_test_GloVe = get_embeddings(X_test_tokenized, glove_100, 100)
from gensim.models import word2vec
# Initialize and train the model (this will take some time)
model = word2vec.Word2Vec(X_train_tokenized,
workers = 4,
size = 100,
min_count = 1, # How many times a word should appear to be taken into account
window = 5,
sample = 1e-3 , # Downsample setting for frequent words
batch_words = 100) # Batches of examples passed to worker threads
# This model won't be updated
model.init_sims(replace=True)
model_name = "word2vec"
model.save(model_name)
model.syn0.shape
_ = plot_tsne(model, 1000)
model.most_similar("python")
X_train_word2vec = get_embeddings(X_train_tokenized, model, 100)
X_test_word2vec = get_embeddings(X_test_tokenized, model, 100)
from sklearn import svm
from sklearn.metrics import classification_report, roc_curve, auc
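# Note: BOW_train and BOW_test are assumed to come from the bag-of-words section earlier
# in the notebook. A minimal sketch of what that step might look like (hypothetical
# variable names, shown only for context, not executed here):
# from sklearn.feature_extraction.text import CountVectorizer
# bow_vectorizer = CountVectorizer()
# BOW_train = bow_vectorizer.fit_transform(X_train)
# BOW_test = bow_vectorizer.transform(X_test)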
clf = svm.SVC()
clf.fit(BOW_train, y_train)
prediction_BOW = clf.predict(BOW_test)
target_names = ['Bros', 'Sis']
print(classification_report(y_test, prediction_BOW, target_names=target_names))
clf = svm.SVC()
clf.fit(X_train_GloVe, y_train)
prediction_GloVe = clf.predict(X_test_GloVe)
print(classification_report(y_test, prediction_GloVe, target_names=target_names))
clf = svm.SVC()
clf.fit(X_train_word2vec, y_train)
prediction_word2vec = clf.predict(X_test_word2vec)
print(classification_report(y_test, prediction_word2vec, target_names=target_names))
false_positive_rate_bow, true_positive_rate_bow, _ = roc_curve(y_test, prediction_BOW)
roc_auc_bow = auc(false_positive_rate_bow, true_positive_rate_bow)
false_positive_rate_glove, true_positive_rate_glove, _ = roc_curve(y_test, prediction_GloVe)
roc_auc_glove = auc(false_positive_rate_glove, true_positive_rate_glove)
false_positive_rate_w2v, true_positive_rate_w2v, _ = roc_curve(y_test, prediction_word2vec)
roc_auc_w2v = auc(false_positive_rate_w2v, true_positive_rate_w2v)
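# Optional numeric comparison of the three feature sets (same values as the plot legend):
print('AUC  BoW: {:.3f}, GloVe: {:.3f}, word2vec: {:.3f}'.format(roc_auc_bow, roc_auc_glove, roc_auc_w2v))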
from bokeh.palettes import Spectral6
p = figure(title="Receiver Operating Characteristic", tools=TOOLS)
p.line(false_positive_rate_bow, true_positive_rate_bow, legend='BoW ROC curve (area = {:.2f})'.format(roc_auc_bow),
line_color="green", line_width=2)
p.line(false_positive_rate_glove, true_positive_rate_glove,
legend='GloVE ROC curve (area = {:.2f})'.format(roc_auc_glove),
line_color="blue", line_width=2)
p.line(false_positive_rate_w2v, true_positive_rate_w2v,
legend='W2V ROC curve (area = {:.2f})'.format(roc_auc_w2v),
line_color="yellow", line_width=2)
p.line([0.0, 1.0], [0.0, 1.0], legend='Guessing',
line_color="gray", line_width=2, line_dash=(4, 4))
p.xaxis.axis_label = 'False Positive Rate'
p.yaxis.axis_label = 'True Positive Rate'
p.legend.location = 'bottom_right'
show(p)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Full disclaimer
Step3: 2.2. Clean your data
Step4: 2.3. Share your data
Step6: 2.4. Study your data
Step7: 2.5. Split your data in train and test
Step8: 3. In the beginning we had a bag of words (or maybe the set)
Step11: 4. Word embeddings
Step12: 4.2. But I want to train word embeddings with my own data!
Step13: 5. Now fight!
|
15,298 | <ASSISTANT_TASK:>
Python Code:
import json
import numpy as np
import pandas as pd
import xgboost as xgb
import tensorflow as tf
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from google.cloud import bigquery
from google.colab import auth
auth.authenticate_user()
# Note: this query may take a few minutes to run
%%bigquery df --project your-cloud-project
SELECT
arr_delay,
carrier,
origin,
dest,
dep_delay,
taxi_out,
distance
FROM
`cloud-training-demos.flights.tzcorr`
WHERE
extract(year from fl_date) = 2015
ORDER BY fl_date ASC
LIMIT 300000
df = df.dropna()
df = shuffle(df, random_state=2)
df.head()
# Only include origins and destinations that occur frequently in the dataset
df = df[df['origin'].map(df['origin'].value_counts()) > 500]
df = df[df['dest'].map(df['dest'].value_counts()) > 500]
df = pd.get_dummies(df, columns=['carrier', 'origin', 'dest'])
# Create a boolean column to indicate whether flight was > 30 mins delayed
df.loc[df['arr_delay'] >= 30, 'arr_delay_bool'] = 1
df.loc[df['arr_delay'] < 30, 'arr_delay_bool'] = 0
df['arr_delay_bool'].value_counts()
classify_model_labels = df['arr_delay_bool']
classify_model_data = df.drop(columns=['arr_delay', 'arr_delay_bool'])
x,y = classify_model_data,classify_model_labels
x_train,x_test,y_train,y_test = train_test_split(x,y)
model = xgb.XGBRegressor(
objective='reg:logistic'
)
# Given the dataset size, this may take 1-2 minutes to run
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
acc = accuracy_score(y_test, np.round(y_pred))
print(acc)
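# Accuracy alone can be misleading because delayed flights are the minority class.
# An optional, illustrative per-class check:
from sklearn.metrics import classification_report
print(classification_report(y_test, np.round(y_pred)))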
# Save the model
model.save_model('model.bst')
# Set your cloud project
PROJECT = 'your-cloud-project'
!gcloud config set project $PROJECT
BUCKET = PROJECT + '_flight_model_bucket'
# Create a bucket if you don't have one
# You only need to run this once
!gsutil mb gs://$BUCKET
!gsutil cp 'model.bst' gs://$BUCKET
# Create the model resource
!gcloud ai-platform models create flight_delay_prediction
# Create the version
!gcloud ai-platform versions create 'v1' \
--model 'flight_delay_prediction' \
--origin gs://$BUCKET \
--runtime-version=1.15 \
--framework 'XGBOOST' \
--python-version=3.7
# Get a prediction on the first example from our test set
!rm input.json
num_examples = 10
with open('input.json', 'a') as f:
for i in range(num_examples):
f.write(str(x_test.iloc[i].values.tolist()))
f.write('\n')
!cat input.json
# Make a prediction to the deployed model
!gcloud ai-platform predict --model 'flight_delay_prediction' --version \
'v1' --json-instances 'input.json'
# Compare the predictions with the actual values for the same examples
print(y_test.iloc[:num_examples])
tf_model = tf.keras.Sequential([
tf.keras.layers.Dense(32, activation='relu', input_shape=[len(x_train.iloc[0])]),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
tf_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
tf_model.fit(x_train, y_train, epochs=10, validation_split=0.1)
metrics = tf_model.evaluate(x_test, y_test)
print(metrics)
tf_model_path = 'gs://' + BUCKET + '/tf'
tf_model.save(tf_model_path, save_format='tf')
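# Optional: confirm the SavedModel files landed in the bucket before creating the version.
!gsutil ls $tf_model_path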
!gcloud ai-platform versions create 'v2' \
--model 'flight_delay_prediction' \
--origin $tf_model_path \
--runtime-version=2.1 \
--framework 'TENSORFLOW' \
--python-version=3.7
# Make a prediction to the new version
!gcloud ai-platform predict --model 'flight_delay_prediction' --version \
'v2' --json-instances 'input.json'
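# The gcloud CLI is convenient in a notebook; from application code the same version can be
# called through the AI Platform REST API. A minimal sketch (assumes the
# google-api-python-client package and working application default credentials):
from googleapiclient import discovery
service = discovery.build('ml', 'v1')
endpoint = 'projects/{}/models/{}/versions/{}'.format(PROJECT, 'flight_delay_prediction', 'v2')
instances = [x_test.iloc[0].values.tolist()]
response = service.projects().predict(name=endpoint, body={'instances': instances}).execute()
print(response)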
regression_model_labels = df['arr_delay']
regression_model_data = df.drop(columns=['arr_delay', 'arr_delay_bool'])
x,y = regression_model_data,regression_model_labels
x_train,x_test,y_train,y_test = train_test_split(x,y)
model = xgb.XGBRegressor(
objective='reg:linear'
)
# This will take 1-2 minutes to run
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
for i,val in enumerate(y_pred[:10]):
print(val)
print(y_test.iloc[i])
print()
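# The spot checks above are hard to summarize; aggregate error metrics are more informative.
# Illustrative only:
from sklearn.metrics import mean_absolute_error, mean_squared_error
print('MAE: {:.2f} minutes'.format(mean_absolute_error(y_test, y_pred)))
print('RMSE: {:.2f} minutes'.format(np.sqrt(mean_squared_error(y_test, y_pred))))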
model.save_model('model.bst')
!gsutil cp model.bst gs://$BUCKET/regression/
!gcloud ai-platform models create 'flights_regression'
# Create the version
!gcloud ai-platform versions create 'v1' \
--model 'flights_regression' \
--origin gs://$BUCKET/regression \
--runtime-version=1.15 \
--framework 'XGBOOST' \
--python-version=3.7
!gcloud ai-platform predict --model 'flights_regression' --version \
'v1' --json-instances 'input.json'
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download and preprocess data
Step2: In the following cell, replace your-cloud-project with the name of your GCP project.
Step3: Model version #1
Step4: Deploying classification model to AI Platform
Step5: Model version #2
Step6: Note that accuracy will be similar to the XGBoost model. We're just using this to demonstrate how a model trained with a different framework can be deployed as a new version.
Step7: Next we'll deploy the updated TF model to AI Platform as a v2.
Step8: Alternative: train an XGBoost regression model that predicts the arrival delay in minutes and deploy it as a separate AI Platform model.
|
15,299 | <ASSISTANT_TASK:>
Python Code:
# Load required packages
from sklearn import datasets
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn.svm import SVC
# Load the data
dataset = datasets.load_breast_cancer()
# Create X from the features
X = dataset.data
# Create y from the target
y = dataset.target
# Create a scaler object
sc = StandardScaler()
# Fit the scaler to the feature data and transform
X_std = sc.fit_transform(X)
# Create a list of 10 candidate values for the C parameter
C_candidates = dict(C=np.logspace(-4, 4, 10))
# Create a gridsearch object with the support vector classifier and the C value candidates
clf = GridSearchCV(estimator=SVC(), param_grid=C_candidates)
# Fit the cross validated grid search on the data
clf.fit(X_std, y)
# Show the best value for C
clf.best_estimator_.C
cross_val_score(clf, X_std, y)
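# cross_val_score runs the full grid search inside each outer fold and returns one score per
# fold; repeating it here to capture and summarize those scores (illustrative) gives the
# nested cross-validation estimate of generalization performance.
outer_scores = cross_val_score(clf, X_std, y)
print('Nested CV accuracy: {:.3f} +/- {:.3f}'.format(outer_scores.mean(), outer_scores.std()))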
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get Data
Step2: Standardize Data
Step3: Create Inner Cross Validation (For Parameter Tuning)
Step4: The code below isn't strictly necessary for nested cross-validation; we run it once here only to demonstrate that the inner cross-validated grid search can find the best value for the parameter C
Step5: Create Outer Cross Validation (For Model Evaluation)
|