Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, 67–124k chars, may be ⌀) | cleaned_prompt (string, 168–30.3k chars, may be ⌀)
---|---|---|
15,600 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
p = np.poly1d([6., 1., -2.])
p(0), p(1), p(10)
p.order
p.roots
# Checking the roots
p(p.roots)
np.poly1d([2.,-3.,7.]).roots
import matplotlib.pyplot as plt
xvalues = np.linspace(-10.,10.,100)
plt.plot(xvalues, p(xvalues), '-')
plt.show()
y = p(xvalues) + np.random.randn(xvalues.size)*p(xvalues).std()/2
# Try to fit a degree-3 polynomial
np.polyfit(xvalues,y,deg=3)
# Try to fit a degree-2 polynomial
p3 = np.poly1d( np.polyfit(xvalues,y,deg=2) )
p3
plt.plot(xvalues, y, 'xr', xvalues, p3(xvalues), '-b')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The poly1d class represents one-dimensional polynomials in terms of their coefficients. Consider the polynomial
Step2: The value of $p(x)$ can be evaluated
Step3: The order of the polynomial can be obtained with
Step4: and its roots
Step5: And what about the polynomial with complex roots, $p_2(x) = 2x^2 - 3x + 7$
Step6: The polynomial can be plotted
Step7: Curve fitting with polynomials
Step8: Given a set of x, y values, the best-fitting polynomial can be determined
Step9: Plot the observations together with the best-fit polynomial
|
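A small follow-up check consistent with Step8 and Step9 above (illustrative only, not part of the original notebook): since y was generated from the degree-2 polynomial p plus noise, the coefficients of the degree-2 fit p3 should roughly recover those of p.
print(p.coeffs)   # true coefficients: [ 6.  1. -2.]
print(p3.coeffs)  # fitted coefficients, approximately equal to the true ones up to noise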
15,601 | <ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
Call a function f with args and return the time (in seconds) that it took to execute.
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds, axis=0)
y_train_folds = np.array_split(y_train, num_folds, axis=0)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
k_to_accuracies[k] = []
for i in range(num_folds):
Xtr = np.delete(X_train_folds, i, axis=0).reshape(-1, X_train.shape[1])
ytr = np.delete(y_train_folds, i, axis=0).reshape(-1, 1)
Xval = X_train_folds[i]
yval = y_train_folds[i]
classifier.train(Xtr, ytr)
yval_predict = classifier.predict(Xval, k)
k_to_accuracies[k].append(np.mean(yval_predict == yval))
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
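The row above relies on compute_distances_two_loops / one_loop / no_loops being implemented in the external file cs231n/classifiers/k_nearest_neighbor.py, so their bodies never appear here. A minimal sketch of one possible fully vectorized version, using the identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x·y (the function name and argument shapes are assumptions, not the course solution):
import numpy as np
def compute_distances_no_loops_sketch(X_test, X_train):
    # Squared L2 distances without explicit loops, via (x - y)^2 = x^2 + y^2 - 2xy.
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)   # shape (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)                 # shape (num_train,)
    cross = X_test @ X_train.T                              # shape (num_test, num_train)
    dists_sq = test_sq + train_sq - 2.0 * cross
    # Clip tiny negative values caused by floating-point error before the square root.
    return np.sqrt(np.maximum(dists_sq, 0.0))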
15,602 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
return None, None, None, None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
display_step = None
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
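Several helpers in the row above (text_to_ids, model_inputs, process_decoder_input, the encoding/decoding layers) are deliberately left as TODO stubs returning None, since filling them in is the user task. Purely as an illustration, a minimal sketch of text_to_ids under the common assumptions that sentences are newline-separated, words are space-separated, and every target sentence gets the '<EOS>' id appended:
def text_to_ids_sketch(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # Map each word to its id; target sentences additionally receive an end-of-sentence marker.
    source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]
                      for sentence in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[word] for word in sentence.split()]
                      + [target_vocab_to_int['<EOS>']]
                      for sentence in target_text.split('\n')]
    return source_id_text, target_id_text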
15,603 | <ASSISTANT_TASK:>
Python Code:
debug_flag = False
import datetime
import glob
import logging
import lxml
import os
import six
import xml
import xmltodict
import zipfile
# paper identifier
paper_identifier = "Newsday"
archive_identifier = "Newsday_20171006230211_00001"
# source
source_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data"
source_paper_path = "{}/{}".format( source_paper_folder, paper_identifier )
# uncompressed
uncompressed_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/uncompressed"
uncompressed_paper_path = "{}/{}".format( uncompressed_paper_folder, paper_identifier )
# make sure an identifier is set before you make a path here.
if ( ( archive_identifier is not None ) and ( archive_identifier != "" ) ):
# identifier is set.
source_archive_file = "{}.zip".format( archive_identifier )
source_archive_path = "{}/{}".format( source_paper_path, source_archive_file )
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, archive_identifier )
#-- END check to see if archive_identifier present. --#
%pwd
# current working folder
current_working_folder = "/home/jonathanmorgan/work/django/research/work/phd_work/data/article_loading/proquest_hnp/{}".format( paper_identifier )
current_datetime = datetime.datetime.now()
current_date_string = current_datetime.strftime( "%Y-%m-%d-%H-%M-%S" )
logging_file_name = "{}/research-data_load-{}-{}.log.txt".format( current_working_folder, paper_identifier, current_date_string )
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
filename = logging_file_name,
filemode = 'w' # set to 'a' if you want to append, rather than overwrite each time.
)
# init django
django_init_folder = "/home/jonathanmorgan/work/django/research/work/phd_work"
django_init_path = "django_init.py"
if( ( django_init_folder is not None ) and ( django_init_folder != "" ) ):
# add folder to front of path.
django_init_path = "{}/{}".format( django_init_folder, django_init_path )
#-- END check to see if django_init folder. --#
%run $django_init_path
# context_text imports
from context_text.article_coding.article_coding import ArticleCoder
from context_text.article_coding.article_coding import ArticleCoding
from context_text.article_coding.open_calais_v2.open_calais_v2_article_coder import OpenCalaisV2ArticleCoder
from context_text.collectors.newsbank.newspapers.GRPB import GRPB
from context_text.collectors.newsbank.newspapers.DTNB import DTNB
from context_text.models import Article
from context_text.models import Article_Subject
from context_text.models import Newspaper
from context_text.shared.context_text_base import ContextTextBase
# context_text_proquest_hnp
from context_text_proquest_hnp.proquest_hnp_newspaper_helper import ProquestHNPNewspaperHelper
# python_utilities
from python_utilities.logging.logging_helper import LoggingHelper
# init
my_logging_helper = LoggingHelper()
my_logging_helper.set_logger_name( "proquest_hnp-article-loading-{}".format( paper_identifier ) )
log_message = None
my_paper = ProquestHNPNewspaperHelper()
paper_instance = my_paper.initialize_from_database( paper_identifier )
my_paper.source_all_papers_folder = source_paper_folder
my_paper.destination_all_papers_folder = uncompressed_paper_folder
print( my_paper )
print( paper_instance )
my_paper = ProquestHNPNewspaperHelper()
my_paper.paper_identifier = paper_identifier
my_paper.source_all_papers_folder = source_paper_folder
my_paper.source_paper_path = source_paper_path
my_paper.destination_all_papers_folder = uncompressed_paper_folder
my_paper.destination_paper_path = uncompressed_paper_path
my_paper.paper_start_year = 1940
my_paper.paper_end_year = 1989
my_newspaper = Newspaper.objects.get( id = 7 )
my_paper.newspaper = my_newspaper
phnp_newspaper_instance = my_paper.create_PHNP_newspaper()
print( phnp_newspaper_instance )
# create folder to hold the results of decompressing paper's zip files.
did_uncomp_paper_folder_exist = my_paper.make_dest_paper_folder()
# decompress the files
my_paper.uncompress_paper_zip_files()
%cd $uncompressed_paper_path
%ls
# loop over files in the current archive folder path.
object_type_to_count_map = my_paper.process_archive_object_types( uncompressed_archive_path )
xml_folder_list = glob.glob( "{}/*".format( uncompressed_paper_path ) )
print( "folder_list: {}".format( xml_folder_list ) )
# build map of all object types for a paper to the overall counts of each
paper_object_type_to_count_map = my_paper.process_paper_object_types()
news_object_type_list = []
news_object_type_list.append( 'Article|Feature' )
news_object_type_list.append( 'Feature|Article' )
news_object_type_list.append( 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' )
# get list of all object types
master_object_type_list = my_paper.get_all_object_types()
print( "Object Types: {}".format( master_object_type_list ) )
# directory to work in.
uncompressed_archive_folder = "Newsday_20171006231925_00050"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
#types_to_output = news_object_type_list
types_to_output = [ "Advertisement|Classified Advertisement" ]
types_to_output = [ "A|d|v|e|r|t|i|s|e|m|e|n|t" ]
types_to_output = [ 'Advertisement|Classified Advertisement' ]
types_to_output = [ 'Article|Feature' ]
types_to_output = [ 'B|i|r|t|h| |N|o|t|i|c|e' ]
types_to_output = [ 'Classified Advertisement|Advertisement' ]
types_to_output = [ 'Commentary|Editorial' ]
types_to_output = [ 'Correspondence|Letter to the Editor' ]
types_to_output = [ 'C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t' ]
types_to_output = [ 'E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c' ]
types_to_output = [ 'Editorial|Commentary' ]
types_to_output = [ 'Feature|Article' ]
types_to_output = [ 'Front Matter|Table of Contents' ]
types_to_output = [ 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' ]
types_to_output = [ 'G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n' ]
types_to_output = [ 'I|l|l|u|s|t|r|a|t|i|o|n' ]
types_to_output = [ 'I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h' ]
types_to_output = [ 'Legal Notice|News' ]
types_to_output = [ 'Letter to the Editor|Correspondence' ]
types_to_output = [ 'Marriage Announcement|News' ]
types_to_output = [ 'N|e|w|s' ]
types_to_output = [ 'News|Legal Notice' ]
types_to_output = [ 'News|Marriage Announcement' ]
types_to_output = [ 'News|Military/War News' ]
types_to_output = [ 'O|b|i|t|u|a|r|y' ]
types_to_output = [ 'R|e|v|i|e|w' ]
types_to_output = [ 'S|t|o|c|k| |Q|u|o|t|e' ]
types_to_output = [ 'Table of Contents|Front Matter' ]
types_to_output = [ 'Table Of Contents|Front Matter' ]
types_to_output = [ 'U|n|d|e|f|i|n|e|d' ]
# declare variables
xml_file_path_list = None
xml_file_path_count = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_count = len( xml_file_path_list )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
# directory to work in.
uncompressed_archive_folder = "Newsday_20171006231925_00050"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
types_to_output = news_object_type_list
# declare variables
xml_file_path_list = None
xml_file_path_count = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_count = len( xml_file_path_list )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup - Imports
Step2: Setup - working folder paths
Step3: Setup - logging
Step4: Setup - virtualenv jupyter kernel
Step5: Setup - Initialize LoggingHelper
Step6: Setup - initialize ProquestHNPNewspaper
Step7: set up manually
Step8: If desired, add to database.
Step9: Find articles to be loaded
Step10: For each *.zip file in the paper's source folder
Step11: Work with uncompressed files
Step12: parse and load XML files
Step13: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001
Step14: Example output
Step15: explore all known object types
Step16: files in archive Newsday_20171006231925_00050 - 1989
|
15,604 | <ASSISTANT_TASK:>
Python Code:
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.cross_validation import cross_val_score
n_samples = 1000
np.random.seed(0)
X = np.sort(np.random.rand(n_samples))
y = np.cos(1.5 * np.pi * X) + np.random.randn(n_samples) * 0.1
X = X[:, np.newaxis]
K = 100
def cv_mse(degree):
polynomial_features = PolynomialFeatures(degree=degree)
linear_regression = LinearRegression()
model = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
scores = -cross_val_score(model, X, y, "mean_squared_error", cv=K)
return scores
scores1 = cv_mse(3)
sns.distplot(scores1)
print(scores1.mean(), scores1.std())
D = 9
degrees = 2**np.arange(D)
all_scores = np.zeros((K, D))
for i, d in enumerate(degrees):
scores = cv_mse(d)
all_scores[:, i] = scores
df = pd.DataFrame(-np.log(all_scores), columns=degrees)
df.describe()
df.mean().plot(kind="bar", rot=0, yerr=df.std())
plt.show()
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.cross_validation import cross_val_score
data = load_diabetes()
X = data.data[:200]
y = data.target[:200]
model = Lasso()
alphas = np.logspace(-4, -.5, 50)
scores = list()
for alpha in alphas:
model.alpha = alpha
this_scores = cross_val_score(model, X, y, "mean_squared_error", cv=5)
scores.append(np.mean(this_scores))
plt.semilogx(alphas, scores)
plt.ylabel('CV score')
plt.xlabel('alpha')
plt.axhline(np.max(scores), linestyle=':')
plt.show()
from sklearn.linear_model import LassoCV
alphas = np.logspace(-4, -.5, 50)
lasso_cv = LassoCV(alphas=alphas, cv=5)
lasso_cv.fit(X, y)
print(lasso_cv.alpha_ )
scores = -lasso_cv.mse_path_.mean(axis=1)
plt.semilogx(lasso_cv.alphas_, scores)
plt.axhline(np.max(scores), linestyle=':')
plt.axvline(lasso_cv.alpha_ , linestyle=':')
plt.ylabel('CV score')
plt.xlabel('alpha')
plt.show()
from sklearn.linear_model import LassoCV
alphas = np.logspace(-4, -.5, 50)
lasso_cv = LassoCV(alphas=alphas, cv=5)
lasso_cv.fit(X, y)
scores = -lasso_cv.mse_path_.mean(axis=1)
scores_std = lasso_cv.mse_path_.std(axis=1)
scores_std1 = scores + scores_std / np.sqrt(len(lasso_cv.mse_path_))
scores_std2 = scores - scores_std / np.sqrt(len(lasso_cv.mse_path_))
alpha_1se = lasso_cv.alphas_[np.argmax(scores_std1 > np.max(scores))]
print(alpha_1se)
plt.semilogx(lasso_cv.alphas_, scores)
plt.semilogx(lasso_cv.alphas_, scores_std1, 'o-')
plt.semilogx(lasso_cv.alphas_, scores_std2, 'o-')
plt.axhline(np.max(scores), linestyle=':')
plt.axvline(lasso_cv.alpha_ , linestyle=':')
plt.axvline(alpha_1se, linestyle=':')
plt.ylabel('CV score')
plt.xlabel('alpha')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Regularization hyperparameter optimization
Step2: One Standard Error Rule
|
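The last cell in the row above implements the one-standard-error rule. In its usual textbook form, with $K$ the number of cross-validation folds, $\mathrm{CV}(\alpha)$ the mean score over folds (higher is better) and $\alpha^{*}$ the best-scoring value, the rule selects the most regularized model that is still within one standard error of the best: $\mathrm{SE}(\alpha) = \mathrm{sd}\big(\mathrm{CV}_1(\alpha),\dots,\mathrm{CV}_K(\alpha)\big)/\sqrt{K}$ and $\alpha_{1\mathrm{SE}} = \max\{\alpha : \mathrm{CV}(\alpha) \ge \mathrm{CV}(\alpha^{*}) - \mathrm{SE}(\alpha^{*})\}$. Note that the cell above divides by the square root of len(lasso_cv.mse_path_), i.e. the number of alphas rather than the number of folds, so its band is narrower than the textbook standard error.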
15,605 | <ASSISTANT_TASK:>
Python Code:
from rmtk.vulnerability.derivation_fragility.equivalent_linearization.miranda_2000_firm_soils import miranda_2000_firm_soils
from rmtk.vulnerability.common import utils
%matplotlib inline
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
gmrs_folder = '../../../../../../rmtk_data/accelerograms'
minT, maxT = 0.01, 2.00
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
damage_model_file = "../../../../../../rmtk_data/damage_model.csv"
damage_model = utils.read_damage_model(damage_model_file)
damping_ratio = 0.05
PDM, Sds = miranda_2000_firm_soils.calculate_fragility(capacity_curves, gmrs, damage_model, damping_ratio)
IMT = "Sa"
period = 2.0
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
taxonomy = "RC"
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load capacity curves
Step2: Load ground motion records
Step3: Load damage state thresholds
Step4: Obtain the damage probability matrix
Step5: Fit lognormal CDF fragility curves
Step6: Plot fragility functions
Step7: Save fragility functions
Step8: Obtain vulnerability function
Step9: Plot vulnerability function
Step10: Save vulnerability function
|
15,606 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import pastas as ps
ps.set_log_level("ERROR")
%matplotlib inline
# This notebook has been developed using Pastas version 0.9.9 and Python 3.7
print("Pastas version: {}".format(ps.__version__))
print("Pandas version: {}".format(pd.__version__))
print("Numpy version: {}".format(np.__version__))
print("Python version: {}".format(os.sys.version))
obs = pd.read_csv('obs.csv', index_col='Date', parse_dates=True) * 0.3048
rain = pd.read_csv('rain.csv', index_col='Date', parse_dates=True) * 0.3048
rain = rain.asfreq("D", fill_value=0.0) # There are some nan-values present
evap = pd.read_csv('evap.csv', index_col='Date', parse_dates=True) * 0.3048
ml = ps.Model(obs.loc[::14], name='Kingstown')
rm = ps.RechargeModel(rain, evap, name='recharge', rfunc=ps.Gamma)
ml.add_stressmodel(rm)
ml.solve(tmax="2014");
# Print some information on the model fit for the validation period
print("\nThe R2 and the RMSE in the validation period are ", ml.stats.rsq(tmin="2015", tmax="2019").round(2),
"and", ml.stats.rmse(tmin="2015", tmax="2019").round(2), ", respectively.")
ml.plots.results(tmax="2018");
ml.plots.diagnostics()
# Set matplotlib params to create publication figures
params = {
'axes.labelsize': 18,
'axes.grid': True,
'font.size': 16,
'font.family': 'serif',
'legend.fontsize': 16,
'xtick.labelsize': 16,
'ytick.labelsize': 16,
'text.usetex': False,
'figure.figsize': [8.2, 5],
'lines.linewidth' : 2
}
plt.rcParams.update(params)
# Save figures or not
savefig = True
figpath = "figures"
if not os.path.exists(figpath):
os.mkdir(figpath)
rfunc = ps.Gamma(cutoff=0.999)
p = [100, 1.5, 15]
b = np.append(0, rfunc.block(p))
s = rfunc.step(p)
rfunc2 = ps.Hantush(cutoff=0.999)
p2 = [-100, 4, 15]
b2 = np.append(0, rfunc2.block(p2))
s2 = rfunc2.step(p2)
# Make a figure of the step and block response
fig, [ax1, ax2] = plt.subplots(1, 2, sharex=True, figsize=(8, 4))
ax1.plot(b)
ax1.plot(b2)
ax1.set_ylabel("block response")
ax1.set_xlabel("days")
ax1.legend(["Gamma", "Hantush"], handlelength=1.3)
ax1.axhline(0.0, linestyle="--", c="k")
ax2.plot(s)
ax2.plot(s2)
ax2.set_xlim(0,100)
ax2.set_ylim(-105, 105)
ax2.set_ylabel("step response")
ax2.set_xlabel("days")
ax2.axhline(0.0, linestyle="--", c="k")
ax2.annotate('', xy=(95, 100), xytext=(95, 0),
arrowprops={'arrowstyle': '<->'})
ax2.annotate('A', xy=(95, 100), xytext=(85, 50))
ax2.annotate('', xy=(95, -100), xytext=(95, 0),
arrowprops={'arrowstyle': '<->'})
ax2.annotate('A', xy=(95, 100), xytext=(85, -50))
plt.tight_layout()
if savefig:
path = os.path.join(figpath, "impuls_step_response.eps")
plt.savefig(path, dpi=300, bbox_inches="tight")
fig, [ax1, ax2, ax3] = plt.subplots(3,1, sharex=True, figsize=(8, 7))
ax1.plot(obs, 'k.',label='obs', markersize=2)
ax1.set_ylabel('head (m)', labelpad=0)
ax1.set_yticks([-4, -3, -2])
plot_rain = ax2.plot(rain * 1000, color='k', label='prec', linewidth=1)
ax2.set_ylabel('rain (mm/d)', labelpad=-5)
ax2.set_xlabel('Date');
ax2.set_ylim([0,150])
ax2.set_yticks(np.arange(0, 151, 50))
plot_evap = ax3.plot(evap * 1000,'k', label='evap', linewidth=1)
ax3.set_ylabel('evap (mm/d)')
ax3.tick_params('y')
ax3.set_ylim([0,8])
plt.xlim(['2003','2019'])
plt.xticks([str(x) for x in np.arange(2004, 2019, 2)], rotation=0, horizontalalignment='center')
ax2.set_xlabel("")
ax3.set_xlabel("year")
if savefig:
path = os.path.join(figpath, "data_example_1.eps")
plt.savefig(path, bbox_inches='tight', dpi=300)
# Create the main plot
fig, ax = plt.subplots(figsize=(16,5))
ax.plot(obs, marker=".", c="grey", linestyle=" ")
ax.plot(obs.loc[:"2013":14], marker="x", markersize=7, c="C3", linestyle=" ", mew=2)
ax.plot(ml.simulate(tmax="2019"), c="k")
plt.ylabel('head (m)')
plt.xlabel('year')
plt.title("")
plt.xticks([str(x) for x in np.arange(2004, 2019, 2)], rotation=0, horizontalalignment='center')
plt.xlim('2003', '2019')
plt.ylim(-4.7, -1.6)
plt.yticks(np.arange(-4, -1, 1))
# Create the arrows indicating the calibration and validation period
ax.annotate("calibration period", xy=("2003-01-01", -4.6), xycoords='data',
xytext=(300, 0), textcoords='offset points',
arrowprops=dict(arrowstyle="->"), va="center", ha="center")
ax.annotate("", xy=("2014-01-01", -4.6), xycoords='data',
xytext=(-230, 0), textcoords='offset points',
arrowprops=dict(arrowstyle="->"), va="center", ha="center")
ax.annotate("validation", xy=("2014-01-01", -4.6), xycoords='data',
xytext=(150, 0), textcoords='offset points',
arrowprops=dict(arrowstyle="->"), va="center", ha="center")
ax.annotate("", xy=("2019-01-01", -4.6), xycoords='data',
xytext=(-85, 0), textcoords='offset points',
arrowprops=dict(arrowstyle="->"), va="center", ha="center")
plt.legend(["observed head", "used for calibration","simulated head"], loc=2, numpoints=3)
# Create the inset plot with the step response
ax2 = plt.axes([0.66, 0.65, 0.22, 0.2])
s = ml.get_step_response("recharge")
ax2.plot(s, c="k")
ax2.set_ylabel("response")
ax2.set_xlabel("days", labelpad=-15)
ax2.set_xlim(0, s.index.size)
ax2.set_xticks([0, 300])
if savefig:
path = os.path.join(figpath, "results.eps")
plt.savefig(path, bbox_inches='tight', dpi=300)
from matplotlib.font_manager import FontProperties
font = FontProperties()
#font.set_size(10)
font.set_weight('normal')
font.set_family('monospace')
font.set_name("courier new")
plt.text(-1, -1, str(ml.fit_report()), fontproperties=font)
plt.axis('off')
plt.tight_layout()
if savefig:
path = os.path.join(figpath, "fit_report.eps")
plt.savefig(path, bbox_inches='tight', dpi=600)
fig, ax1 = plt.subplots(1,1, figsize=(8, 3))
ml.residuals(tmax="2019").plot(ax=ax1, c="k")
ml.noise(tmax="2019").plot(ax=ax1, c="C0")
plt.xticks([str(x) for x in np.arange(2004, 2019, 2)], rotation=0, horizontalalignment='center')
ax1.set_ylabel('(m)')
ax1.set_xlabel('year')
ax1.legend(["residuals", "noise"], ncol=2)
if savefig:
path = os.path.join(figpath, "residuals.eps")
plt.savefig(path, bbox_inches='tight', dpi=300)
fig, ax2 = plt.subplots(1,1, figsize=(9, 2))
n =ml.noise()
conf = 1.96 / np.sqrt(n.index.size)
acf = ps.stats.acf(n)
ax2.axhline(conf, linestyle='--', color="dimgray")
ax2.axhline(-conf, linestyle='--', color="dimgray")
ax2.stem(acf.index, acf.values)
ax2.set_ylabel('ACF (-)')
ax2.set_xlabel('lag (days)')
plt.xlim(0, 370)
plt.ylim(-0.25, 0.25)
plt.legend(["95% confidence interval"])
if savefig:
path = os.path.join(figpath, "acf.eps")
plt.savefig(path, bbox_inches='tight', dpi=300)
h, test = ps.stats.ljung_box(ml.noise())
print("The hypothesis that there is significant autocorrelation is:", h)
test
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2. Reading the time series
Step2: Step 3. Creating the model
Step3: Step 4. Adding stress models
Step4: Step 5. Solving the model
Step5: Step 6. Visualizing the results
Step6: 7. Diagnosing the noise series
Step7: Make plots for publication
Step8: Make a plot of the impulse and step response for the Gamma and Hantush functions
Step9: Make a plot of the stresses used in the model
Step10: Make a custom figure of the model fit and the estimated step response
Step11: Make a figure of the fit report
Step12: Make a Figure of the noise, residuals and autocorrelation
|
15,607 | <ASSISTANT_TASK:>
Python Code:
odds = [1, 3, 5, 7]
print('odds are:', odds)
print('first element:', odds[0])
print('last element:', odds[3])
print('"-1" element:', odds[-1])
odds[0] = 10
print('first element:', odds[0])
salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
my_salsa = salsa
salsa[0] = 'hot peppers'
salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
my_salsa = list(salsa)
salsa[0] = 'hot peppers'
x = [['pepper', 'zucchini', 'onion'],
['cabbage', 'lettuce', 'garlic'],
['apple', 'pear', 'banana']]
my_list = ["banana", 10, 35, "orange"]
my_list.pop()
my_list.append("orange")
my_list
odds = [1, 3, 5, 7]
primes = odds
primes.append(2)
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
subset = primes[0:12:3]
print('subset', subset)
beatles = "In an octopus's garden in the shade"
counts = [2, 4, 6, 8, 10]
repeats = counts * 2
print(repeats)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can access elements of a list using indices – numbered positions of elements in the list. These positions are numbered starting at 0, so the first element has an index of 0.
Step2: Note that the elements of lists, like those of arrays, are "mutable", i.e. changeable after they are defined
Step3: Exercise 1
Step4: If you print out the above, what do you get for the salsa recipe? How does this compare to the below, similar operation?
Step5: Can you explain the difference in outputs?
Step6: Here is an example of how nested indexing works visually
Step7: While modifying in place, it is useful to remember that Python treats lists in a slightly counter-intuitive way.
Step8: Exercise 4
Step9: Notice that the slice taken begins with the first entry in the range, followed by entries taken at equally-spaced intervals (the steps) thereafter.
Step10: Exercise 5
|
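A few tiny examples (with illustrative variable names) tying together the indexing, nesting, and aliasing points made in the steps above:
x = [['pepper', 'zucchini', 'onion'],
     ['cabbage', 'lettuce', 'garlic'],
     ['apple', 'pear', 'banana']]
print(x[0])        # the first inner list: ['pepper', 'zucchini', 'onion']
print(x[0][1])     # an element inside it: 'zucchini'
a = [1, 2, 3]
b = a              # b is another name for the same list object
c = list(a)        # c is an independent copy
a[0] = 99
print(b[0], c[0])  # prints: 99 1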
15,608 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from itertools import product
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection
import matplotlib.pyplot as plt
%matplotlib notebook
# Illustrating the use of itertools product
for ix,value in enumerate(product(range(2), repeat=3)):
print(ix, value)
type(value[0])
def naming(y, yhat, s):
if y == 0 and yhat == 0 and s == 0:
return (y, yhat, s, 'TN0')
if y == 0 and yhat == 0 and s == 1:
return (y, yhat, s, 'TN1')
if y == 0 and yhat == 1 and s == 0:
return (y, yhat, s, 'A')
if y == 0 and yhat == 1 and s == 1:
return (y, yhat, s, 'C')
if y == 1 and yhat == 0 and s == 0:
return (y, yhat, s, 'D')
if y == 1 and yhat == 0 and s == 1:
return (y, yhat, s, 'B')
if y == 1 and yhat == 1 and s == 0:
return (y, yhat, s, 'TP0')
if y == 1 and yhat == 1 and s == 1:
return (y, yhat, s, 'TP1')
def name2position(variables):
ix_y = np.where(np.array(variables) == 'Y')[0][0]
ix_yhat = np.where(np.array(variables) == 'Yhat')[0][0]
ix_s = np.where(np.array(variables) == 'S')[0][0]
return (ix_y, ix_yhat, ix_s)
#variables = ['S', 'Yhat', 'Y', 'condition']
variables = ['Y', 'Yhat', 'S', 'condition']
ix_y, ix_yhat, ix_s = name2position(variables)
all_possibilities = pd.DataFrame(index=range(8), columns=variables, dtype='int')
for ix, value in enumerate(product([0,1], repeat=len(variables)-1)):
all_possibilities.iloc[ix] = naming(value[ix_y], value[ix_yhat], value[ix_s])
# Bug in pandas, creates a dataframe of floats. Workaround.
for col in all_possibilities.columns[:-1]:
all_possibilities[col] = pd.to_numeric(all_possibilities[col], downcast='integer')
all_possibilities
def plot_cube(ax, cube_definition):
From https://stackoverflow.com/questions/44881885/python-draw-3d-cube
cube_definition_array = [
np.array(list(item))
for item in cube_definition
]
points = []
points += cube_definition_array
vectors = [
cube_definition_array[1] - cube_definition_array[0],
cube_definition_array[2] - cube_definition_array[0],
cube_definition_array[3] - cube_definition_array[0]
]
points += [cube_definition_array[0] + vectors[0] + vectors[1]]
points += [cube_definition_array[0] + vectors[0] + vectors[2]]
points += [cube_definition_array[0] + vectors[1] + vectors[2]]
points += [cube_definition_array[0] + vectors[0] + vectors[1] + vectors[2]]
points = np.array(points)
edges = [
[points[0], points[3], points[5], points[1]],
[points[1], points[5], points[7], points[4]],
[points[4], points[2], points[6], points[7]],
[points[2], points[6], points[3], points[0]],
[points[0], points[2], points[4], points[1]],
[points[3], points[6], points[7], points[5]]
]
faces = Poly3DCollection(edges, linewidths=1, edgecolors='k')
faces.set_facecolor((0,0,1,0.1))
ax.add_collection3d(faces)
# Plot the points themselves to force the scaling of the axes
ax.scatter(points[:,0], points[:,1], points[:,2], s=50)
ax.set_aspect('equal')
ax.set_xlabel(variables[ix_s])
ax.set_ylabel(variables[ix_yhat])
ax.set_zlabel(variables[ix_y])
ax.grid(False)
return
cube_definition = [
(0,0,0), (0,1,0), (1,0,0), (0,0,1)
]
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
plot_cube(ax, cube_definition)
for ix, row in all_possibilities.iterrows():
ax.text(row[ix_s], row[ix_yhat], row[ix_y], row[3], size=30)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
ax.plot([0,0,1,1], [0,1,0,1], 'bo')
ax.set_xlabel('FN -- FP')
ax.set_ylabel('favouritism')
ax.text(0, 0, naming(1, 0, 0)[3], size=30)
ax.text(0, 1, naming(1, 0, 1)[3], size=30)
ax.text(1, 0, naming(0, 1, 0)[3], size=30)
ax.text(1, 1, naming(0, 1, 1)[3], size=30)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Explanation of the conditions
Step3: Studying the trade off
|
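A quick, purely illustrative check of the naming convention defined in the row above: a false positive in the s = 1 group is labelled 'C', while a false negative in the s = 0 group is labelled 'D'.
print(naming(0, 1, 1))  # (0, 1, 1, 'C')
print(naming(1, 0, 0))  # (1, 0, 0, 'D')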
15,609 | <ASSISTANT_TASK:>
Python Code:
mu = pymc.Uniform('mu', 0, 1e5)
deaths = pymc.Poisson('deaths', mu = 2.0*mu, observed=True, value=[3])
model = pymc.MCMC((mu, deaths))
model.sample(10000, burn=100, burn_till_tuned=True)
print(model.summary())
pymc.Matplot.plot(model)
mu = pymc.Gamma('mu', 3.0, 5.0)
deaths = pymc.Poisson('deaths', mu = 2.0*mu, observed=True, value=[3])
model = pymc.MCMC((mu, deaths))
model.sample(10000, burn=100, burn_till_tuned=True)
print(model.summary())
pymc.Matplot.plot(model)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This does not match the example very well. The example is centered around 0.9
|
15,610 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
return 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation here
self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = X @ self.weights_input_to_hidden # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error terms - Replace these values with your calculations.
            # slope of f(x) = x is 1, so the delta of the output layer is the error itself
output_error_term = error * 1
# TODO: Calculate the hidden layer's contribution to the error
            # back-propagate the error to the hidden layer
hidden_error = output_error_term * self.weights_hidden_to_output
#hidden_outputs = hidden_outputs[:,None]
# align the shape of hidden_error
hidden_error = hidden_error.T
# error_delta_hidden_node = Weight_hidden_node * error_delata_output * partial_derivative_of_hidden_output
hidden_error_term = hidden_error * (hidden_outputs * (1 - hidden_outputs))
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
#self.weights_input_to_hidden.plot()
#print('self.weights_hidden_to_output')
#print(self.weights_hidden_to_output)
#print('self.weights_input_to_hidden')
#print(self.weights_input_to_hidden)
#print('-------------------------------')
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 2000
learning_rate = 0.5
hidden_nodes = 9
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set); the chronological slicing is sketched again right after this list.
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
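The chronological split from Step6, isolated as a minimal sketch (variable names mirror the cells above; the point is that no shuffling is applied, so validation rows are strictly later in time than training rows):
# hold out roughly the last 60 days of hourly records for validation
val_hours = 60 * 24
train_features, val_features = features[:-val_hours], features[-val_hours:]
train_targets, val_targets = targets[:-val_hours], targets[-val_hours:]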
|
15,611 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# sanity check for python setup
import sys
print(sys.executable)
print(sys.path)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn import datasets
np.random.seed(5)
centers=[[1,1],[-1,-1],[1,-1]]
iris=datasets.load_iris()
X=iris.data
y=iris.target
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
'k_means_iris_8':KMeans(n_clusters=8),
'k_means_iris_bad_init': KMeans(n_clusters=3, n_init=1,init='random')}
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(4,3))
plt.clf()
ax=Axes3D(fig,rect=[0,0,.95,1],elev=48,azim=134)
plt.cla()
est.fit(X)
labels=est.labels_
ax.scatter(X[:,3],X[:,0],X[:,2],c=labels.astype(np.float))
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
fignum=fignum+1
# Plot the ground truth
fig=plt.figure(fignum,figsize=(4,3))
plt.clf()
ax=Axes3D(fig,rect=[0,0,.95,1],elev=48,azim=134)
plt.cla()
for name,label in [('Setosa',0),('Versicolour',1),('Virginica',2)]:
ax.text3D(X[y==label,3].mean(),
X[y==label,0].mean()+1.5,
X[y==label,2].mean(),name,
horizontalalignment='center',
bbox=dict(alpha=.5,edgecolor='w',facecolor='w'))
# Reorder the labels to have colors matching the cluster results
# y = np.choose(y,[1,2,0]).astype(np.float)
yint=np.choose(y, [1,2,0])#.astype(np.float)
ax.scatter(X[:,3],X[:,0],X[:,2],c=yint)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
plt.show()
import pandas as pd
print(type(X))
print(type(y))
print(X.shape)
print(y.shape)
print(pd.DataFrame(y).iloc[:,0].unique())
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X,y)
y_pred=knn.predict(X)
# compute classification accuracy for the logistic regression model
from sklearn import metrics
print(metrics.accuracy_score(y,y_pred))
### 1c. KNN (K=1)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X,y)
y_pred=knn.predict(X)
print(metrics.accuracy_score(y,y_pred))
# STEP 1: split X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.4,random_state=4)
knn=KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train,y_train)
y_pred=knn.predict(X_test)
print(metrics.accuracy_score(y_test,y_pred))
knn=KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train,y_train)
y_pred=knn.predict(X_test)
print(metrics.accuracy_score(y_test,y_pred))
# try K=1 through K=25 and record testing accuracy
k_range=range(1,26)
# We can create Python dictionary using [] or dict()
scores=[]
# We use a loop through the range 1 to 26
# We append the scores in the dictionary
for k in k_range:
knn=KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train,y_train)
y_pred=knn.predict(X_test)
scores.append(metrics.accuracy_score(y_test,y_pred))
print(scores)
# import Matplotlib (scientific plotting library)
#import matplotlib.pyplot as plt
# allow plots to appear within the notebook
#%matplotlib inline
# plot the relationship between $K$ and testing accuracy
# plt.plot(x_axis,y_axis)
plt.plot(k_range,scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Testing Accuracy')
print(pd.DataFrame(X).describe())
print(pd.DataFrame(X).head())
pd.DataFrame(X).head(10)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1b. KNN (K=5)
Step2: 2. Evaluation procedure 2 - Train/test split
|
15,612 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist
from metpy.gridding.gridding_functions import calc_kappa
from metpy.gridding.interpolation import barnes_point, cressman_point
from metpy.gridding.triangles import dist_2
plt.rcParams['figure.figsize'] = (15, 10)
def draw_circle(x, y, r, m, label):
nx = x + r * np.cos(np.deg2rad(list(range(360))))
ny = y + r * np.sin(np.deg2rad(list(range(360))))
plt.plot(nx, ny, m, label=label)
np.random.seed(100)
pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = xp * xp / 1000
sim_gridx = [30, 60]
sim_gridy = [30, 60]
grid_points = np.array(list(zip(sim_gridx, sim_gridy)))
radius = 40
obs_tree = cKDTree(list(zip(xp, yp)))
indices = obs_tree.query_ball_point(grid_points, r=radius)
x1, y1 = obs_tree.data[indices[0]].T
cress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1)
cress_obs = zp[indices[0]]
cress_val = cressman_point(cress_dist, cress_obs, radius)
x2, y2 = obs_tree.data[indices[1]].T
barnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)
barnes_obs = zp[indices[1]]
ave_spacing = np.mean((cdist(list(zip(xp, yp)), list(zip(xp, yp)))))
kappa = calc_kappa(ave_spacing)
barnes_val = barnes_point(barnes_dist, barnes_obs, kappa)
for i, zval in enumerate(zp):
plt.plot(pts[i, 0], pts[i, 1], '.')
plt.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))
plt.plot(sim_gridx, sim_gridy, '+', markersize=10)
plt.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')
plt.plot(x2, y2, 'ks', fillstyle='none', markersize=10, label='grid 1 matches')
draw_circle(sim_gridx[0], sim_gridy[0], m='k-', r=radius, label='grid 0 radius')
draw_circle(sim_gridx[1], sim_gridy[1], m='b-', r=radius, label='grid 1 radius')
plt.annotate('grid 0: cressman {:.3f}'.format(cress_val), xy=(sim_gridx[0] + 2, sim_gridy[0]))
plt.annotate('grid 1: barnes {:.3f}'.format(barnes_val), xy=(sim_gridx[1] + 2, sim_gridy[1]))
plt.axes().set_aspect('equal', 'datalim')
plt.legend()
plt.annotate('grid 0: ({}, {})'.format(sim_gridx[0], sim_gridy[0]),
xy=(sim_gridx[0] + 2, sim_gridy[0]))
plt.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)
mx, my = obs_tree.data[indices[0]].T
mz = zp[indices[0]]
for x, y, z in zip(mx, my, mz):
d = np.sqrt((sim_gridx[0] - x)**2 + (y - sim_gridy[0])**2)
plt.plot([sim_gridx[0], x], [sim_gridy[0], y], '--')
xave = np.mean([sim_gridx[0], x])
yave = np.mean([sim_gridy[0], y])
plt.annotate('distance: {}'.format(d), xy=(xave, yave))
plt.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))
plt.xlim(0, 80)
plt.ylim(0, 80)
plt.axes().set_aspect('equal', 'datalim')
dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])
values = np.array([0.064, 1.156, 3.364, 0.225])
cres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists)
total_weights = np.sum(cres_weights)
proportion = cres_weights / total_weights
value = values * proportion
val = cressman_point(cress_dist, cress_obs, radius)
print('Manual cressman value for grid 1:\t', np.sum(value))
print('Metpy cressman value for grid 1:\t', val)
plt.annotate('grid 1: ({}, {})'.format(sim_gridx[1], sim_gridy[1]),
xy=(sim_gridx[1] + 2, sim_gridy[1]))
plt.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)
mx, my = obs_tree.data[indices[1]].T
mz = zp[indices[1]]
for x, y, z in zip(mx, my, mz):
d = np.sqrt((sim_gridx[1] - x)**2 + (y - sim_gridy[1])**2)
plt.plot([sim_gridx[1], x], [sim_gridy[1], y], '--')
xave = np.mean([sim_gridx[1], x])
yave = np.mean([sim_gridy[1], y])
plt.annotate('distance: {}'.format(d), xy=(xave, yave))
plt.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))
plt.xlim(40, 80)
plt.ylim(40, 100)
plt.axes().set_aspect('equal', 'datalim')
dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779])
values = np.array([2.809, 6.241, 4.489, 2.704])
weights = np.exp(-dists**2 / kappa)
total_weights = np.sum(weights)
value = np.sum(values * (weights / total_weights))
print('Manual barnes value:\t', value)
print('Metpy barnes value:\t', barnes_point(barnes_dist, barnes_obs, kappa))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate random x and y coordinates, and observation values proportional to x * y.
Step2: Set up a cKDTree object and query all of the observations within "radius" of each grid point.
Step3: For grid 0, we will use Cressman to interpolate its value.
Step4: For grid 1, we will use barnes to interpolate its value.
Step5: Plot all of the affiliated information and interpolation values.
Step6: For each point, we will do a manual check of the interpolation values by doing a step-by-step calculation (the weight formulas used in that check are sketched after this list).
Step7: Step through the cressman calculations.
Step8: Now repeat for grid 1, except use barnes interpolation.
Step9: Step through barnes calculations.
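For reference, the weights behind the manual checks above (a restatement of the arithmetic already in the code, where R is the search radius, d_i the distance from the grid point to observation i, and kappa comes from calc_kappa):
$$ w_i^{\mathrm{Cressman}} = \frac{R^2 - d_i^2}{R^2 + d_i^2}, \qquad w_i^{\mathrm{Barnes}} = e^{-d_i^2/\kappa}, \qquad \hat{z} = \frac{\sum_i w_i z_i}{\sum_i w_i}. $$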
|
15,613 | <ASSISTANT_TASK:>
Python Code:
%pylab notebook
VB = 120.0 # Battery voltage (V)
r = 0.3 # Resistance (ohms)
l = 1.0 # Bar length (m)
B = 0.6 # Flux density (T)
F = arange(0,51,10) # Force (N)
F # Let's print the variable to check.
# Can you explain why "arange(0,50,10)" does not give the array below?
i = F / (l * B) # Current (A)
eind = VB - i * r # Induced voltage (V)
v_bar = eind / (l * B); # Velocity (m/s)
plot(F, v_bar);
rc('text', usetex=True) # enable LaTeX commands for plot
title(r'\textbf{Plot of velocity versus applied force}')
xlabel(r'\textbf{Force (N)}')
ylabel(r'\textbf{Velocity (m/s)}')
axis([0, 50, 0, 200])
grid()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define all the parameters
Step2: Select the forces to apply to the bar
Step3: Calculate the currents flowing in the motor
Step4: Calculate the induced voltages on the bar
Step5: Calculate the velocities of the bar (the governing equations are sketched after this list)
Step6: Plot the velocity of the bar versus force
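The three calculations above, written out with the same symbols as the code (V_B battery voltage, r resistance, l bar length, B flux density, F applied force):
$$ i = \frac{F}{lB}, \qquad e_{ind} = V_B - i\,r, \qquad v_{bar} = \frac{e_{ind}}{lB}. $$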
|
15,614 | <ASSISTANT_TASK:>
Python Code:
# local
from fludashboard.libs.flu_data import prepare_keys_name
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df_hist = pd.read_csv('../data/historical_estimated_values.csv', encoding='utf-8')
df_inci = pd.read_csv('../data/current_estimated_values.csv', encoding='utf-8')
df_typi = pd.read_csv('../data/mem-typical.csv', encoding='utf-8')
df_thre = pd.read_csv('../data/mem-report.csv', encoding='utf-8')
prepare_keys_name(df_hist)
prepare_keys_name(df_inci)
prepare_keys_name(df_typi)
prepare_keys_name(df_thre)
level_dict = {
'L0': 'Baixa',
'L1': 'Epidêmica',
'L2': 'Alta',
'L3': 'Muito alta'
}
df_inci.columns
df_inci.head(5)
df_typi.head(5)
df_thre.tail(5)
k = ['epiyear', 'epiweek', 'base_epiyear', 'base_epiweek']
df_inci2017 = df_inci[
(df_inci.epiyear == 2017) &
# (df_inci.epiweek >= 15) &
(df_inci.dado == 'srag') &
(df_inci.escala == 'incidência') &
(df_inci.uf == 'BR')
].copy()
df_inci2017.sort_values(['epiyear', 'epiweek'], inplace=True)
df_inci_chart = df_inci2017.copy()
df_inci_chart.index = df_inci_chart.epiweek
k = ['epiyear', 'epiweek', 'base_epiyear', 'base_epiweek']
df_hist2017 = df_hist[
(df_hist.base_epiyear == 2017) &
(df_hist.base_epiweek == 23) &
(df_hist.dado == 'srag') &
(df_hist.escala == 'incidência') &
(df_hist.uf == 'BR')
].copy()
df_hist2017.sort_values(['epiyear', 'epiweek'], inplace=True)
df_hist_chart = df_hist2017.copy()
df_hist_chart.index = df_hist_chart.epiweek
# 50% estimated cases
df_inci_chart[['srag', '50%', '2.5%', '97.5%']].plot()
plt.title('Incidence')
plt.grid(True)
plt.show()
df_hist_chart[['srag', '50%', '2.5%', '97.5%']].plot()
plt.title('Historial')
plt.grid(True)
plt.show()
df_hist2017['estimated_cases'] = df_hist2017['50%']
df = pd.merge(
df_inci2017[['epiweek', 'srag', '2.5%', '97.5%']],
df_hist2017[['epiweek', 'estimated_cases']],
on='epiweek', how='outer'
)
df.set_index('epiweek', inplace=True)
df.plot()
plt.grid(True)
plt.title('Incidence X Historial')
plt.show()
df_hist[
(df_hist.base_epiyear == 2017) &
(df_hist.dado == 'srag') &
(df_hist.escala == 'incidência') &
(df_hist.uf == 'BR')
].base_epiweek.unique()
# First, let's keep only stable weeks for the notification curve:
df_inci2017.loc[(df_inci2017.situation != 'stable'), 'srag'] = np.nan
# Adapt historical dataset:
df_hist.sort_values(['epiyear', 'epiweek'], inplace=True)
df_hist['estimated_cases'] = df_hist['50%']
# User selected week:
y = 2017
w = 23
def week_data(y, w):
df_week_inci = df_inci2017[(df_inci2017.epiweek <= w)]
df_week_hist = df_hist[
(df_hist.base_epiyear == y) &
(df_hist.base_epiweek == w) &
(df_hist.dado == 'srag') &
(df_hist.escala == 'incidência') &
(df_hist.uf == 'BR')
].copy()
df = pd.merge(
df_week_inci[['epiweek', 'srag']],
df_week_hist[['epiweek', 'estimated_cases', '2.5%', '97.5%']],
on='epiweek', how='outer'
)
df.set_index('epiweek', inplace=True)
return df
df = week_data(y, w)
df.plot()
plt.grid(True)
plt.show()
w = 28
df = week_data(y, w)
df.plot()
plt.grid(True)
plt.show()
w = 33
df = week_data(y, w)
df.plot()
plt.grid(True)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example, we show the current year incidence up to a given week.
Step2: UF
Step3: Entries with dfthresholds['se típica do inicio do surto'] = NaN have activity too low for proper epidemic threshold definition
Step4: Displaying data for user selected week w
|
15,615 | <ASSISTANT_TASK:>
Python Code:
from deriva.core import ErmrestCatalog, get_credential
scheme = 'https'
hostname = 'dev.facebase.org'
catalog_number = 1
credential = get_credential(hostname)
assert scheme == 'http' or scheme == 'https', "Invalid http scheme used."
assert isinstance(hostname, str), "Hostname not set."
assert isinstance(catalog_number, int), "Invalid catalog number"
catalog = ErmrestCatalog(scheme, hostname, catalog_number, credential)
pb = catalog.getPathBuilder()
dataset = pb.isa.dataset
dataset
new_entity = {
'title': 'A test dataset by derivapy',
'description': 'This was created by the deriva-py API.',
'project': 311
}
entities = dataset.insert([new_entity], defaults={'id', 'accession'})
list(entities)
entities[0]['description'] = 'A test dataset that was updated by derivapy'
updated_entities = dataset.update(entities)
list(updated_entities)
entities[0]['description'] = 'Yet another update using derivapy'
entities[0]['title'] = 'And a title change'
updated_entities = dataset.update(entities, [dataset.id], [dataset.description, 'title'])
list(updated_entities)
path = dataset.filter(dataset.RID == entities[0]['RID'])
path.delete()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example uses a development server with a throw-away catalog. You will not have sufficient permissions to be able to run this example. This notebook is for documentation purposes only.
Step2: Use DERIVA-Auth to get a credential or use None if your catalog allows anonymous access.
Step3: Now, connect to your catalog and the pathbuilder interface for the catalog.
Step4: For this example, we will create or modify entities of the "Dataset" table of a catalog that uses the FaceBase data model.
Step5: Insert example
Step6: The insert operation returns the inserted entities, which now have any system generated attributes filled in.
Step7: Update example
Step8: Similar to the insert operation, the update operation also returns the updated entities. Notice that the system-managed 'RMT' (Row Modified Timestamp) attribute has been updated too.
Step9: Update with custom correlation and targets specified
Step10: Delete example
Step11: On successful delete, no content will be returned.
|
15,616 | <ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import mne
from mne.stats import spatio_temporal_cluster_test
from mne.datasets import sample
from mne.channels import find_ch_connectivity
from mne.viz import plot_compare_evokeds
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id)
X = [epochs[k].get_data() for k in event_id] # as 3D matrix
X = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering
connectivity, ch_names = find_ch_connectivity(epochs.info, ch_type='mag')
print(type(connectivity)) # it's a sparse matrix!
plt.imshow(connectivity.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
plt.xlabel('{} Magnetometers'.format(len(ch_names)))
plt.ylabel('{} Magnetometers'.format(len(ch_names)))
plt.title('Between-sensor adjacency')
# set cluster threshold
threshold = 50.0 # very high, but the test is quite sensitive on this data
# set family-wise p-value
p_accept = 0.01
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=threshold, tail=1,
n_jobs=1, buffer_size=None,
connectivity=connectivity)
T_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = T_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='auto')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Find the FieldTrip neighbor definition to setup sensor connectivity
Step4: Compute permutation statistic
Step5: Note. The same functions work with source estimate. The only differences
|
15,617 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
class Plan: pass
# Plan 1 = Cigna HDHP/HSA
p1 = Plan()
p1.family_deductible = 4000.00 # Same deductible for both family and individual
p1.individual_deductible = 4000.00
p1.family_oopmax = 6000.00 # Same out-of-pocket max for family and individual
p1.individual_oopmax = 6000.00
p1.premium_monthly = 48*2 # price/pay period * 2 pay periods/month
p1.hsa_contribution = 2400.00
p1.coinsurance_rate = 0.1
# Plan 2 = Cigna PPO $1000
p2 = Plan()
p2.family_deductible = 2000.00
p2.individual_deductible = 1000.00
p2.family_oopmax = 8000.00
p2.individual_oopmax = 4000.00
p2.premium_monthly = 48*2 # price/pay period * 2 pay periods/month
p2.hsa_contribution = 0.0
p2.coinsurance_rate = 0.2
# Plan 3 = Cigna PPO $500
p3 = Plan()
p3.family_deductible = 1000.00
p3.individual_deductible = 500.00
p3.family_oopmax = 7000.00
p3.individual_oopmax = 3500.00
p3.premium_monthly = 113*2 # price/pay period * 2 pay periods/month
p3.hsa_contribution = 0.0
p3.coinsurance_rate = 0.1
# For the purposes of this estimation, we are assuming the deductible
# is always larger than the HSA contribution amount
def apply_deductible_and_hsa(cost, deductible, hsa):
cost_to_you = 0
cost_remaining = 0
# Apply HSA
deductible_minus_hsa = deductible - hsa
if cost <= hsa:
cost_to_you = 0
cost_remaining = 0
elif cost <= deductible:
cost_to_you = cost - hsa
cost_remaining = 0
elif cost > deductible:
cost_to_you = deductible_minus_hsa
cost_remaining = cost - deductible
return (cost_to_you, cost_remaining)
def apply_coinsurance(cost, coinsurance_rate):
return cost * coinsurance_rate
def apply_oopmax(cost, oopmax):
if cost >= oopmax:
return oopmax
else:
return cost
def setup_graph(title='', x_label='', y_label='', fig_size=None):
fig = plt.figure()
if fig_size != None:
fig.set_size_inches(fig_size[0], fig_size[1])
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
def individual_cost(plan, gross_cost):
(cost_to_you, cost_remaining) = apply_deductible_and_hsa(gross_cost,
plan.individual_deductible,
plan.hsa_contribution)
cost_to_you += apply_coinsurance(cost_remaining, plan.coinsurance_rate)
cost_to_you = apply_oopmax(cost_to_you, plan.individual_oopmax)
# Apply yearly premiums - note that the out-of-pocket max doesn't include
# the premiums; thus, we apply them after applying out-of-pocket max.
cost_to_you += (plan.premium_monthly * 12)
return cost_to_you
def family_cost(plan, gross_cost):
(cost_to_you, cost_remaining) = apply_deductible_and_hsa(gross_cost,
plan.family_deductible,
plan.hsa_contribution)
cost_to_you += apply_coinsurance(cost_remaining, plan.coinsurance_rate)
cost_to_you = apply_oopmax(cost_to_you, plan.family_oopmax)
# Apply yearly premiums - note that the out-of-pocket max doesn't include
# the premiums; thus, we apply them after applying out-of-pocket max.
cost_to_you += (plan.premium_monthly * 12)
return cost_to_you
# Should be the monthly premium times 12 (to make up the yearly premium).
family_cost(p1, 0)
p1.premium_monthly * 12.0
family_cost(p2, 0)
p2.premium_monthly * 12.0
family_cost(p3, 0)
p3.premium_monthly * 12.0
# Should be monthly premium times 12
family_cost(p1, 2400) == p1.premium_monthly * 12
(p1.premium_monthly * 12) + \
(p1.family_deductible - p1.hsa_contribution) + \
(6000 - p1.family_deductible) * p1.coinsurance_rate
family_cost(p1, 6000)
# Calculate costs
gross_costs = range(0, 70000)
p1_costs = [family_cost(p1, cost) for cost in gross_costs]
p2_costs = [family_cost(p2, cost) for cost in gross_costs]
p3_costs = [family_cost(p3, cost) for cost in gross_costs]
# Do graph
setup_graph(title='Family costs', x_label='Gross cost', y_label='Cost to you', fig_size=(12,7))
ax = plt.subplot(1,1,1)
p1_graph, = ax.plot(gross_costs, p1_costs, label="Cigna HDHP/HSA")
p2_graph, = ax.plot(gross_costs, p2_costs, label="Cigna PPO $1000")
p3_graph, = ax.plot(gross_costs, p3_costs, label="Cigna PPO $500")
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, loc='upper left')
plt.show()
# Calculate costs
gross_costs = range(0, 50000)
p1_costs = [individual_cost(p1, cost) for cost in gross_costs]
p2_costs = [individual_cost(p2, cost) for cost in gross_costs]
p3_costs = [individual_cost(p3, cost) for cost in gross_costs]
# Do graph
setup_graph(title='Individual costs', x_label='Gross cost', y_label='Cost to you', fig_size=(12,7))
ax = plt.subplot(1,1,1)
p1_graph, = ax.plot(gross_costs, p1_costs, label="Cigna HDHP/HSA")
p2_graph, = ax.plot(gross_costs, p2_costs, label="Cigna PPO $1000")
p3_graph, = ax.plot(gross_costs, p3_costs, label="Cigna PPO $500")
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, loc='upper left')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper functions
Step2: Plan cost functions
Step3: Sanity Tests
Step4: Cost less than HSA
Step5: Cost greater than HSA and deductible (a worked example follows this list)
Step6: Family Cost graph
Step7: Individual Cost
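A worked instance of Step5, repeating the arithmetic of the sanity test for plan 1 at a gross cost of 6000 (yearly premiums, plus deductible net of the HSA, plus 10% coinsurance on the rest):
$$ 96 \times 12 \;+\; (4000 - 2400) \;+\; 0.1\,(6000 - 4000) \;=\; 1152 + 1600 + 200 \;=\; 2952. $$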
|
15,618 | <ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', dataset='lc01')
b.add_dataset('mesh', times=[0], columns=['intensities*'])
print(b['gravb_bol'])
print(b['gravb_bol@primary'])
print(b.run_checks())
b['teff@primary'] = 8500
b['gravb_bol@primary'] = 0.8
print(b.run_checks())
b['teff@primary'] = 7000
b['gravb_bol@primary'] = 0.2
print(b.run_checks())
b['teff@primary'] = 6000
b['gravb_bol@primary'] = 1.0
print(b.run_checks())
b['teff@primary'] = 6000
b['gravb_bol@primary'] = 0.32
b.run_compute(model='gravb_bol_32')
afig, mplfig = b['primary@mesh01@gravb_bol_32'].plot(fc='intensities', ec='None', show=True)
b['gravb_bol@primary'] = 1.0
b.run_compute(model='gravb_bol_10')
afig, mplfig = b['primary@mesh01@gravb_bol_10'].plot(fc='intensities', ec='None', show=True)
np.nanmax((b.get_value('intensities', component='primary', model='gravb_bol_32') - b.get_value('intensities', component='primary', model='gravb_bol_10'))/b.get_value('intensities', component='primary', model='gravb_bol_10'))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
Step3: If you have a logger enabled, PHOEBE will print a warning if the value of gravb_bol is outside the "suggested" ranges. Note that this is strictly a warning, and will never turn into an error at b.run_compute().
Step4: Influence on Intensities
Step5: Comparing these two plots, it is essentially impossible to notice any difference between the two models. But if we compare the intensities directly, we can see that there is a subtle difference, with a maximum difference of about 3%.
|
15,619 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
A = np.array([[1,1,1], [3,1,2], [2,3,4]])
b = np.array([6, 11, 20])
A
b
x = np.linalg.solve(A, b)
x
A = np.matrix([[1,1,1], [3,1,2], [2,3,4]])
A
np.linalg.inv(A)
A = np.matrix([[1,2,2],[2,4,1],[3,6,4]])
A
np.linalg.matrix_rank(A)
A = np.matrix([[1,2,3], [4,5,6], [7,8,9]])
A
A.transpose()
A.T
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gaussian Elimination
Step2: Gauss-Jordan Elimination
Step3: Column space
Step4: Projection Matrix
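Step4 names the projection matrix but the snippet above stops at the transpose, so here is a small illustrative sketch (the matrix A and vector b below are made up for the example): the projection onto the column space of A is P = A (A^T A)^{-1} A^T.
import numpy as np
A = np.array([[1., 0.], [1., 1.], [1., 2.]])  # example 3x2 matrix with independent columns
P = A @ np.linalg.inv(A.T @ A) @ A.T          # projection matrix onto the column space of A
b = np.array([6., 0., 0.])
p = P @ b                                     # projection of b onto that column space
print(P)
print(p)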
|
15,620 | <ASSISTANT_TASK:>
Python Code:
some_digits = X[36001]
some_digits_img = some_digits.reshape(28, 28)
plt.imshow(some_digits_img, cmap=matplotlib.cm.binary, interpolation='nearest')
plt.axis("off")
plt.show()
### checking out its label
y[36001]
# MNIST dataset is already split into train(first 60000) and test(last 10000)
X_train, y_train, X_test, y_test = X[:60000], y[:60000], X[60000:], y[60000:]
# let's shuffle the train set to avoid any ordering bias for the algorithm
suffled_indices = np.random.permutation(60000)
X_train, y_train = X_train[suffled_indices], y_train[suffled_indices]
# converting the target into binary labels: 5 vs not-5
y_train_5 = (y_train==5)
y_test_5 = (y_test==5)
sgd_clf = SGDClassifier(random_state=142)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digits])
from sklearn.model_selection import StratifiedShuffleSplit
skfolds = StratifiedShuffleSplit(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
# creating classified object
clone_clf = SGDClassifier(random_state=42)
# splitting the data into train n test
X_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
X_test_folds = X_train[test_index]
y_test_folds = y_train_5[test_index]
# fitting the data
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_folds)
print(np.mean(y_pred==y_test_folds), end=' ')
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
# being a image is 5 accuracy
print(np.mean(y_train_5)) # which means not 5 is
print(1-np.mean(y_train_5))
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
y_pred_5 = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
confusion_matrix(y_train_5, y_pred_5)
from sklearn.metrics import precision_score, recall_score
print("precision score: ", precision_score(y_train_5, y_pred_5))
print("recall score: ", recall_score(y_train_5, y_pred_5))
from sklearn.metrics import f1_score
print("f1-score: ", f1_score(y_train_5, y_pred_5))
y_score = sgd_clf.decision_function([some_digits])
y_score
threshold = 0
y_pred = (y_score>threshold)
y_pred
# let's increase the threshold
threshold = 200000
y_pred = (y_score>threshold)
y_pred
# as we increase the threshold, recall decreases; let's check the same for all the values
print(X_train.shape, y_train_5.shape)
print(y_train_5[:3])
y_pred_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv = 3, method='decision_function')
y_pred_scores=y_pred_scores[:,1]
print(y_pred_scores[:3])
y_pred = (y_pred_scores>threshold)
np.mean(y_pred) # fraction of positive (True) predictions at this threshold
from sklearn.metrics import precision_recall_curve
print(y_train_5.shape, y_pred_scores.shape)
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_pred_scores)
y_pred_scores[:3]
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.figure(figsize=(10,8))
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold")
plt.legend(loc="upper left")
plt.ylim([-0.1, 1.1])
plt.grid(True)
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.show()
def plot_precision_vs_recall(precisions, recalls):
plt.figure(figsize=(10,8))
plt.plot(recalls[:-1], precisions[:-1], "b-", linewidth=2)
plt.xlabel("recall")
plt.ylabel("precision")
plt.legend(loc="upper left")
plt.ylim([-0.1, 1.1])
plt.grid(True, which='major')
plot_precision_vs_recall(precisions, recalls)
plt.show()
y_train_pred_90 = (y_pred_scores > 70000)
print("Precision: ", precision_score(y_train_5, y_train_pred_90))
print("Recall: ", recall_score(y_train_5, y_train_pred_90))
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_pred_scores)
def plot_roc_curve(fpr, tpr, label=None, figno=1):
plt.figure(figno, figsize=(10, 8))
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.grid(True)
plt.legend(loc='lower right')
plot_roc_curve(fpr, tpr)
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_pred_scores)
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(random_state=142)
y_pred_prob = cross_val_predict(rf_clf, X_train, y_train_5, method='predict_proba', cv=3)
y_pred_prob[:3]
# But to plot a ROC curve, you need scores, not probabilities.
# A simple solution is to use the positive class’s probability as the score:
y_forest_scores = y_pred_prob[:,-1]
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_forest_scores)
plot_roc_curve(fpr_forest, tpr_forest, label='RandomForest')
plot_roc_curve(fpr, tpr, label='SGD')
roc_auc_score(y_train_5, y_forest_scores)
sgdm_clf = SGDClassifier(random_state=142)
sgdm_clf.fit(X_train, y_train)
sgdm_clf.predict([some_digits])
sgdm_clf.decision_function([some_digits])
argmax = np.argmax(sgdm_clf.decision_function([some_digits]))
argmax
sgdm_clf.classes_[argmax]
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=142))
ovo_clf.fit(X_train, y_train)
ovo_clf.classes_
len(ovo_clf.estimators_) # there are 45 estimator
y_pred_ovo = cross_val_predict(ovo_clf, X_train, y_train, cv=5)
cm=confusion_matrix(y_train, y_pred_ovo)
plt.matshow(cm, cmap=plt.cm.gray)
# normalizing the confusion matrix
row_sums = cm.sum(axis=1, keepdims=True)
norm_conf_mx = cm / row_sums
np.fill_diagonal(norm_conf_mx,0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()
def plot_digits(instances, images_per_row=10, **options):
    # Minimal tiling helper, assumed here because the original notebook defines it elsewhere:
    # pad to a full grid of 28x28 images and display them as one picture.
    size = 28
    images_per_row = min(len(instances), images_per_row)
    n_rows = (len(instances) - 1) // images_per_row + 1
    n_pad = n_rows * images_per_row - len(instances)
    padded = np.concatenate([instances, np.zeros((n_pad, size * size))])
    grid = padded.reshape(n_rows, images_per_row, size, size)
    big_image = grid.transpose(0, 2, 1, 3).reshape(n_rows * size, images_per_row * size)
    plt.imshow(big_image, cmap="binary", **options)
    plt.axis("off")
# compare against the multiclass cross-validated predictions (y_pred_ovo) computed above
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_pred_ovo == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_pred_ovo == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_pred_ovo == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_pred_ovo == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's try to train a BINARY classifier
Step2: A good place to start is with a Stochastic Gradient Descent (SGD) classifier, using Scikit-Learn’s SGDClassifier class. This classifier has the advantage of being capable of handling very large datasets efficiently. This is in part because SGD deals with training instances independently, one at a time (which also makes SGD well suited for online learning)
Step3: Measuring the Accuracy
Step4: Now using CV
Step5: As we can see, accuracy is above 90% in all cases, which seems weird; let's check the baseline estimate
Step6: The base classifier gives 90% accuracy because our data is skewed towards the "Not 5" category
Step7: Confusion Matrix
Step8: Precision and Recall
Step9: F1 Score
Step10: There is always a trade-off between precision and recall, and it depends on the threshold value, which helps decide whether an instance is classified as positive or negative. Scikit-Learn does not let you set the threshold directly, but it does give you access to the decision scores (a short threshold-tuning sketch follows this list)
Step11: Precision Recall Curve
Step12: You can see that precision really starts to fall sharply around 80% recall. You will probably want to select a precision/recall trade-off just before that drop.
Step13: The ROC Curve
Step14: Once again there is a tradeoff
Step15: Since the ROC curve is so similar to the precision/recall (or PR) curve, you may wonder how to decide which one to use. As a rule of thumb, prefer the PR curve whenever the positive class is rare or when false positives matter more than false negatives, and the ROC curve otherwise.
Step16: MultiClass Classification
Step17: If you want to force Scikit-Learn to use one-versus-one or one-versus-all, you can use the OneVsOneClassifier or OneVsRestClassifier classes.
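A minimal threshold-tuning sketch for Step10 (it reuses sgd_clf, X_train and y_train_5 from the cells above; the 70000 cutoff is just the illustrative value already used there, not a recommendation):
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score
scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method='decision_function')
if scores.ndim == 2:  # some sklearn versions return one column per class here
    scores = scores[:, 1]
y_pred_custom = (scores > 70000)
print(precision_score(y_train_5, y_pred_custom), recall_score(y_train_5, y_pred_custom))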
|
15,621 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import PIL.Image
im = PIL.Image.open("/Users/valeriaalvarez/Documents/rh.jpeg")
col,row = im.size
radiohead = np.zeros((row*col, 5))  # named to match its later uses (radiohead.shape, etc.)
pixels = im.load()
print(pixels[187,250])
for i in range(col):
for j in range(row):
#print("i=%d, j=%d" % (i,j))
r,g,b = pixels[i,j]
        radiohead[i*row + j, :] = r, g, b, i, j  # i*row + j gives each (i, j) pixel its own row
im
im.size
print(radiohead)
radiohead.shape
import numpy as np
RH = np.linalg
U, s, Vh = RH.svd(radiohead, full_matrices=False)
assert np.allclose(radiohead, np.dot(U, np.dot(np.diag(s), Vh)))
R_H = np.dot(np.dot(U, np.diag(s)), Vh)
print(np.std(radiohead), np.std(R_H), np.std(radiohead - R_H))
U
s
s.shape
Vh
import scipy.linalg as sc
S = sc.diagsvd(s, 5, 5)
S
U @ S @ Vh
# another way to do the verification
rd_head = np.dot(U, np.dot(np.diag(s), Vh))
rd_head
from scipy import *
from pylab import *
img = imread("/Users/valeriaalvarez/Documents/rh.jpeg")[:,:,0]
gray()
figure(1)
imshow(img)
m,n = img.shape
m,n
U,s,Vt = svd(img)
Vt.shape
s.shape
S = resize(s,[m,1])*eye(m,n)
S
S.shape
m,n = img.shape
U,s,Vt = svd(img)
S = resize(s,[m,1])*eye(m,n)
k = 5
imshow(dot(U[:,1:k],dot(S[1:k,1:k],Vt[1:k,:])))
show()
k = 10
imshow(dot(U[:,1:k],dot(S[1:k,1:k],Vt[1:k,:])))
show()
k = 25
imshow(dot(U[:,1:k],dot(S[1:k,1:k],Vt[1:k,:])))
show()
from numpy import *
import numpy as np
def gen_mat(i,j):
    A = floor(random.rand(i,j)*20-0) # build a random i x j matrix
    b = floor(random.rand(j,1)*20-0) # this is the right-hand-side vector b
return A,b
A,b= gen_mat(4,4)
A
b
# This only works for square matrices.
def Inversa(A):
if((A.shape[0] == A.shape[1])):
U,s,V=np.linalg.svd(A)
Inversa = np.dot(np.dot(V.T,linalg.inv(diag(s))),U.T)
return Inversa
else:
return "La Matriz no es cuadrada, calcula la pseudoinversa"
w=Inversa(A)
w
m,n=A.shape
m,n
U, s, V = np.linalg.svd(A)
s
U.shape
V.shape
S=eye(m,n)
S
S = resize(s,[m,1])*eye(m,n)
S
Sigma = np.zeros([U.shape[1],V.shape[0]])
Sigma
def P_Inversa(A):
import numpy as np
if isinstance(A, np.ndarray):
U, s, V = np.linalg.svd(A)
m,n=A.shape
S = resize(s,[m,1])*eye(m,n)
for i in range(S.shape[0]):
for j in range(S.shape[1]):
if (i == j):
if (s[i] == 0):
                        S[i,j] = 0  # assignment, not comparison
else:
S[i,j] = 1/s[i]
P_Inversa = np.dot(np.dot(V.T,S.T),U.T)
return(P_Inversa)
else:
return "Errores en la especificación"
A_I=P_Inversa(A)
A_I
B = np.linalg.pinv(A)
np.allclose(A, np.dot(A, np.dot(B, A)))
B
def Solucion(A,b):
import numpy as np
if isinstance(A, np.ndarray):
if isinstance(b, np.ndarray):
            if((A.shape[1] == b.shape[0])): # the matrix and the vector are compatible
if((A.shape[0] == A.shape[1])):
A_inv=Inversa(A)
x_sol = np.dot(A_inv,b)
else:
A_inv=P_Inversa(A)
x_sol = np.dot(A_inv.T,b)
return(x_sol)
else:
return "A y b son incompatibles"
else:
return "Problemas con b"
else:
return "Problemas con A"
x = Solucion(A,b)
x
import numpy as np
A1 =np.array([[1,1],[0,0]])
A1
b1=np.array([[1],[1]])
b1
x1 = Solucion(A1,b1)
x1
d=np.linalg.det(A1)
d
def Solucion(A,b):
import numpy as np
import warnings
if isinstance(A, np.ndarray):
if isinstance(b, np.ndarray):
            if((A.shape[1] == b.shape[0])): # the matrix and the vector are compatible
if((A.shape[0] == A.shape[1]) and np.linalg.det(A) != 0):
A_inv=Inversa(A)
x_sol = np.dot(A_inv,b)
return(x_sol)
elif (np.linalg.det(A)==0):
A_inv=P_Inversa(A)
x_sol = np.dot(A_inv.T,b)
print(x_sol)
print("La Matriz A es singular")
else:
A_inv=P_Inversa(A)
x_sol = np.dot(A_inv.T,b)
return(x_sol)
else:
return "A y b son incompatibles"
else:
return "Problemas con b"
else:
return "Problemas con A"
x1 = Solucion(A1,b1)
x1
b2=np.array([[1],[2]])
x2 = Solucion(A1,b2)
x2
b3=np.array([[1],[0]])
x3 = Solucion(A1,b3)
x3
A2 =np.array([[1,1],[0,1e-32]])
A2
x4 = Solucion(A2,b1)
x4
x5 = Solucion(A2,b2)
x5
x6 = Solucion(A2,b3)
x6
b4=np.array([[5],[7]])
x7 = Solucion(A1,b4)
x7
x8 = Solucion(A2,b4)
x8
b5=np.array([[5],[1e-13]])
x9 = Solucion(A1,b5)
x9
x10 = Solucion(A2,b5)
x10
import numpy as np
from pandas import DataFrame
import pandas as pd
import os
tabla = pd.read_csv("/Users/valeriaalvarez/Documents/tarea2.csv")
tabla
df=pd.DataFrame(tabla)
df
import statsmodels.formula.api as sm
result = sm.ols(formula="sat_score ~ study_hours", data=df).fit()
print (result.params)
study_hours=tabla["study_hours"]
sat_score=tabla["sat_score"]
Suma_study_hours = sum(study_hours)
Suma_sat_score = sum(sat_score)
Suma_hours_score = sum(study_hours*sat_score)
Suma_hours_2 = sum(study_hours**2)
Suma_score_2 = sum(sat_score**2)
obs = len(study_hours)
# compute Beta first, since Alpha depends on it
Beta = (obs*Suma_hours_score - Suma_study_hours*Suma_sat_score)/(obs*Suma_hours_2 - Suma_study_hours**2)
Beta
Alpha = Suma_sat_score/obs - (Beta*Suma_study_hours)/obs
Alpha
def mco(b0, b1, X):
rows = len(X)
sat_score_e1 = np.array([b0 + b1*X[i] for i in range(rows)])
return(sat_score_e1)
sat_score_e1=mco(Alpha, Beta, study_hours)
sat_score_e1
len(df["study_hours"])
col_b0 = [1 for i in range(len(study_hours))]  # one intercept entry per observation
X_m = np.array([col_b0,df["study_hours"]])
X_m
parametros = np.array([Alpha,Beta])
sat_score_e2=np.dot(X_m.T,parametros)
sat_score_e2
np.linalg.pinv((X_m).dot(X_m.T)).dot(X_m).dot(sat_score)
import matplotlib.pyplot as plt
plt.subplot(223)
plt.scatter(study_hours, sat_score_e1, label="pronosticado")
plt.scatter(study_hours, sat_score, label="observado")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('sat score')
plt.xlabel('study hours')
plt.title('Ajuste Modelo')
print(plt.show())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: - Perform the SVD decomposition
Step2: - Verify the SVD decomposition
Step3: with the previous method, I could not print the image with imshow
Step4: What does this project have to do with image compression?
Step5: Check that the previous process comes out correctly.
Step6: To obtain the solution for any matrix, the two previous functions have to be combined
Step7: Play with the system Ax=b where A = [[1,1],[0,0]] and b can take different values.
Step8: When running the exercise with A1 = [[1,1],[0,0]] and b=([[1],[1]]), we get an error saying the system cannot be solved because A is a singular matrix; the columns of A are equal, so its determinant is 0. To avoid this error and have it solved via the pseudoinverse, the determinant condition is added to the function.
Step9: The exercise is run again with the modifications to the code
Step10: It is tested with different b vectors
Step11: The solutions appear to be of the form [x,0]
Step12: When changing entry [2,2] of the original matrix A from 0 to 1e-32 and running for different values of b, the result is determined by the choice of b[2,1] regardless of the value of b[1,1]. In this case the solutions appear to be of the form [-x,x] where x=b[2,1].
Step13: Write a script that reads the file study_vs_sat.csv and stores it as a pandas dataframe.
Step14: Pose an optimization problem that attempts an approximation of the form
Step15: Pose as an optimization problem an approximation of the form sat_score ~ alpha + beta*study_hours, minimizing the sum of squared prediction errors. You can consult this link: https
Step16: Write a function that receives values of alpha, beta and the sat_score vector and returns a numpy array of predictions alpha + beta*study_hours_i, with one value per individual
Step17: Define a numpy array X with two columns, the first with ones in all its entries and the second with the study_hours variable. Note that X[alpha,beta] returns alpha + beta*study_hours_i in each entry, so the problem becomes sat_score ~ X*[alpha,beta].
Step18: Compute the pseudoinverse B=(X^T X)^(-1) X^T * sat_score to obtain the alpha and beta solutions (the normal-equations formula is written out after this list)
Step19: Compare the solution for obtaining the parameters posed above with the one obtained by doing the direct matrix multiplication.
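The least-squares solution referred to in Step18, written out (y stands for sat_score and X is the n x 2 matrix whose first column is all ones and whose second column is study_hours):
$$ \begin{pmatrix} \hat{\alpha} \\ \hat{\beta} \end{pmatrix} = (X^{T} X)^{-1} X^{T} y. $$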
|
15,622 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math, sys, os, numpy as np
import torch
from matplotlib import pyplot as plt, rcParams, animation, rc
from ipywidgets import interact, interactive, fixed
from ipywidgets.widgets import *
rc('animation', html='html5')
rcParams['figure.figsize'] = 3, 3
%precision 4
np.set_printoptions(precision=4, linewidth=100)
# Generating fake data
def lin(a,b,x): return a*x+b
def gen_fake_data(n, a, b):
x = s = np.random.uniform(0,1,n)
y = lin(a,b,x) + 0.1 * np.random.normal(0,3,n)
return x, y
x, y = gen_fake_data(50, 3., 8.)
plt.scatter(x,y, s=8); plt.xlabel("x"); plt.ylabel("y");
def mean_square_error(y_hat, y):
return np.power(y_hat - y, 2).mean()
# say our prediction is a=10, b=5; computing our error:
y_hat = lin(10,5,x)
mean_square_error(y_hat, y)
# generate some more data
x, y = gen_fake_data(10000, 3., 8.)
print(x.shape, y.shape)
dtype = torch.cuda.FloatTensor
# from NumPy to PyTorch
x = torch.from_numpy(x)
y = torch.from_numpy(y)
from torch.autograd import Variable
# Create random Tensors to hold x and y, and wrap them in Variables.
# Setting requires_grad=False indicates that we don't need to compute gradients
# wrt to these Variables.
x = Variable(x.type(dtype), requires_grad=False)
y = Variable(y.type(dtype), requires_grad=False)
# Create random Tensors for weights a and b, and wrap them in Variables.
# Setting requires_grad=True indicates that we want to compute gradients wrt
# these Variables.
a = Variable(torch.randn(1).type(dtype), requires_grad=True)
b = Variable(torch.randn(1).type(dtype), requires_grad=True)
a.grad = Variable(dtype(a.size()).zero_())
b.grad = Variable(dtype(b.size()).zero_())
a, b
learning_rate = 1e-3
for t in range(10000):
    # Forward pass: compute predicted y using operations on Variables
y_pred = x * a + b
loss = (y_pred - y).pow(2).mean()
if t % 1000 == 0: print(loss.data)
# Manually zero the gradients
a.grad.data.zero_()
b.grad.data.zero_()
# Computes the gradient of loss wrt all Variables with requires_grad=True
# After this call a.grad and b.grad will be Variables holding the gradient
# of the loss wrt a and b respectively.
loss.backward()
# Update a and b using gradient descent; a.data and b.data are Tensors,
# a.grad and b.grad are Variables and a.grad.data and b.grad.data are
# Tensors.
a.data -= learning_rate * a.grad.data
b.data -= learning_rate * b.grad.data
print(a, b)
x, y = gen_fake_data(50, 3., 8.)
def sse(y,y_pred): return ((y-y_pred)**2).sum()
def loss(y,a,b,x): return sse(y, lin(a,b,x))
def avg_loss(y,a,b,x): return np.sqrt(loss(y,a,b,x)/y.shape[0])
a_guess = -1.
b_guess = 1.
avg_loss(y, a_guess, b_guess, x)
lr = 0.01
def upd():
global a_guess, b_guess
y_pred = lin(a_guess, b_guess, x)
dydb = 2 * (y_pred - y)
dyda = x * dydb
a_guess -= lr * dyda.mean()
b_guess -= lr * dydb.mean()
fig = plt.figure(dpi=100, figsize=(5,4))
plt.scatter(x,y)
line, = plt.plot(x, lin(a_guess, b_guess, x))
plt.close()
def animate(i):
line.set_ydata(lin(a_guess, b_guess, x))
for i in range(30): upd()
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(0, 40), interval=100)
ani
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Components of Learning
Step2: You want to find parameters (weights) a and b such that you minimize the error between the points and the line a * x + b. Note that a and b are unknown. For a regression problem, the most common error function or loss function is the mean squared error (written out after these steps).
Step3: Gradient Descent
Step4: Stochastic Gradient Descent (SGD)
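For reference, the loss minimized in the code above and the plain gradient-descent update in upd() can be written as follows (a sketch of the math only; eta is the learning rate, lr in the code):
\mathrm{MSE}(a,b) = \frac{1}{n} \sum_{i=1}^{n} (a x_i + b - y_i)^2
\frac{\partial \mathrm{MSE}}{\partial b} = \frac{2}{n} \sum_i (a x_i + b - y_i), \qquad \frac{\partial \mathrm{MSE}}{\partial a} = \frac{2}{n} \sum_i x_i (a x_i + b - y_i)
a \leftarrow a - \eta \, \frac{\partial \mathrm{MSE}}{\partial a}, \qquad b \leftarrow b - \eta \, \frac{\partial \mathrm{MSE}}{\partial b}
These are exactly the dyda.mean() and dydb.mean() terms computed in upd().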
|
15,623 | <ASSISTANT_TASK:>
Python Code:
% matplotlib inline
import os
import numpy as np
import nibabel as nib
from nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset
import nipy.algorithms.statistics.rft as rft
from __future__ import print_function, division
import math
import matplotlib.pyplot as plt
import palettable.colorbrewer as cb
from nipype.interfaces import fsl
import pandas as pd
import nipy.algorithms.statistics.intvol as intvol
from matplotlib import colors
import scipy.stats as stats
smooth_FWHM = 3
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1)
plt.figure(figsize=(6,4))
plt.imshow(data[1:20,1:20,1])
plt.colorbar()
plt.show()
minimum = data.min()
newdata = data - minimum #little trick because fsl.model.Cluster ignores negative values
img=nib.Nifti1Image(newdata,np.eye(4))
img.to_filename("files/RF.nii.gz")
cl=fsl.model.Cluster()
cl.inputs.threshold = 0
cl.inputs.in_file="files/RF.nii.gz"
cl.inputs.out_localmax_txt_file="files/locmax.txt"
cl.inputs.num_maxima=1000000
cl.inputs.connectivity=26
cl.inputs.terminal_output='none'
cl.run()
peaks = pd.read_csv("files/locmax.txt",sep="\t").drop('Unnamed: 5',1)
peaks.Value = peaks.Value + minimum
peaks[:5]
len(peaks)
col=cb.qualitative.Set1_8.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(peaks.Value,40,normed=1,facecolor=col[0],alpha=0.75,lw=0)
ax.set_xlim([-1,5])
plt.show()
def nulprobdens(exc,peaks):
v = exc
u = peaks - v
f0 = (2+(u+v)**2)*(u+v)*np.exp(-(u+v)**2/2)/(v**2*np.exp(-v**2/2))
return f0
fig,axs=plt.subplots(1,5,figsize=(13,3))
fig.subplots_adjust(hspace = .5, wspace=0.3)
axs=axs.ravel()
thresholds=[2,2.5,3,3.5,4]
bins=np.arange(2,5,0.5)
x=np.arange(2,10,0.0001)
twocol=cb.qualitative.Paired_10.mpl_colors
for i in range(5):
thr=thresholds[i]
axs[i].hist(peaks.Value[peaks.Value>thr],lw=0,facecolor=twocol[i*2-2],normed=True,bins=np.arange(thr,5,0.1))
axs[i].set_xlim([thr,5])
axs[i].set_ylim([0,3])
xn = x[x>thr]
yn = nulprobdens(thr,xn)
axs[i].plot(xn,yn,color=twocol[i*2-1],lw=3)
axs[i].set_title("threshold:"+str(thr))
axs[i].set_xticks(np.arange(thr,5,0.5))
axs[i].set_yticks([1,2])
plt.show()
fig,axs=plt.subplots(1,4,figsize=(13,7))
fig.subplots_adjust(hspace = .1, wspace=0.1)
axs=axs.ravel()
thresholds=np.arange(0,4,1)
cmap = colors.ListedColormap(['white', 'black'])
bounds=[0,0.5,1]
norm = colors.BoundaryNorm(bounds, cmap.N)
for t in range(len(thresholds)):
mask = np.zeros(shape=data.shape,dtype=np.intp)
mask[data>thresholds[t]]=1
axs[t].imshow(mask[1:200,1:200,20],cmap=cmap,norm=norm)
axs[t].set_title("threshold:"+str(thresholds[t]))
axs[t].patch.set_visible(False)
axs[t].axis('off')
EulerDens = []
EulerDensInv = []
urange = np.arange(-4,4,0.3)
for t in urange:
mask = np.zeros(shape=data.shape,dtype=np.intp)
mask[data>t]=1
EulerDens.append(intvol.EC3d(mask))
mask2 = 1-mask
EulerDensInv.append(intvol.EC3d(mask2))
sumpeak = []
for t in urange:
sumpeak.append(sum(peaks.Value>t))
plt.figure(figsize=(7,5))
plt.plot(urange,EulerDens,color=col[1],lw=3,label="observed Euler Characteristic")
plt.plot(urange,EulerDensInv,color=col[2],lw=3,label="observed inverse Euler Characteristic")
plt.plot(urange,sumpeak,color=col[3],lw=3,label="Number of peaks")
plt.legend(loc="upper right",frameon=False)
plt.ylim([-600000,1200000])
plt.show()
smoothnesses = [0,3,6,9]
minima = []
for sm in range(len(smoothnesses)):
smooth_FWHM = smoothnesses[sm]
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1)
minimum = data.min()
newdata = data - minimum #little trick because fsl.model.Cluster ignores negative values
minima.append(minimum)
img=nib.Nifti1Image(newdata,np.eye(4))
img.to_filename(os.path.join("files/RF_"+str(sm)+".nii.gz"))
cl=fsl.model.Cluster()
cl.inputs.threshold = 0
cl.inputs.in_file=os.path.join("files/RF_"+str(sm)+".nii.gz")
cl.inputs.out_localmax_txt_file=os.path.join("files/locmax_"+str(sm)+".txt")
cl.inputs.num_maxima=10000000
cl.inputs.connectivity=26
cl.inputs.terminal_output='none'
cl.run()
col=cb.qualitative.Set1_8.mpl_colors+cb.qualitative.Set2_8.mpl_colors
plt.figure(figsize=(10,5))
ax=plt.subplot(111)
for sm in range(len(smoothnesses)):
file = os.path.join("files/RF_"+str(sm)+".nii.gz")
tvals = nib.load(file).get_data().astype('float64')+minima[sm]
values, base = np.histogram(tvals,100,normed=1)
ax.plot(base[:-1],values,label="smoothness: "+str(smoothnesses[sm]),color=col[sm],lw=2)
ax.set_xlim([-4,4])
ax.set_ylim([0,0.5])
ax.legend(loc="lower right",frameon=False)
ax.set_title("distribution of peak heights for different smoothing kernels (FWHM)")
plt.show()
all = []
for sm in range(len(smoothnesses)):
peaks = pd.read_csv(os.path.join("files/locmax_"+str(sm)+".txt"),sep="\t").drop('Unnamed: 5',1).Value
peaks = peaks + minima[sm]
all.append(peaks)
col=cb.qualitative.Set1_8.mpl_colors+cb.qualitative.Set2_8.mpl_colors
plt.figure(figsize=(10,5))
ax=plt.subplot(111)
for sm in range(len(smoothnesses)):
values, base = np.histogram(all[sm],30,normed=1)
ax.plot(base[:-1],values,label="smoothness: "+str(smoothnesses[sm]),color=col[sm],lw=2)
ax.set_xlim([-1,5])
ax.set_ylim([0,1.2])
ax.legend(loc="lower right",frameon=False)
ax.set_title("distribution of peak heights for different smoothing kernels (FWHM)")
plt.show()
# random sample
n = 10  # note: assumed sample size for this first illustration; n is re-derived from the data further below
smplm = []
for i in range(100000):
    smpl = np.random.standard_normal((n,))
smplm.append(max(smpl))
# distribution
xm = np.arange(-1,5,0.001)
ym = n*stats.norm.cdf(xm)**(n-1)*stats.norm.pdf(xm)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(smplm,100,normed=1,facecolor=twocol[0],alpha=0.75,lw=0)
ax.plot(xm,ym,color=twocol[1],lw=3)
ax.set_xlim([-1,5])
plt.show()
n = (500**3)/len(all[1])
n
# distribution of a maximum
xm = np.arange(-1,5,0.001)
ym = n*stats.norm.cdf(xm)**(n-1)*stats.norm.pdf(xm)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(all[1],100,normed=1,facecolor=twocol[0],alpha=0.75,lw=0)
ax.plot(xm,ym,color=twocol[1],lw=3)
ax.set_xlim([-1,5])
plt.show()
# random sample
smplmc = []
n = 2
mean = [0,0]
r = 0.2
cov = [[1,r],[r,1]]
for i in range(100000):
smpl = np.random.multivariate_normal(mean,cov,int(n/n))
smplmc.append(np.max(smpl))
# distribution
xmc = np.arange(-2,3,0.001)
corf = (1-r)/np.sqrt(1-r**2)
ymc = n*stats.norm.cdf(corf*xmc)**(n-1)*stats.norm.pdf(xmc)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(smplmc,100,normed=1,facecolor=twocol[2],alpha=0.75,lw=0)
ax.plot(xmc,ymc,color=twocol[3],lw=3)
ax.set_xlim([-1,5])
plt.show()
# random sample
smplmc = []
n = 10
mean = np.array([0,0,0,0,0,0,0,0,0,0])
r = 0.5
cov = np.array([[1,r,r,r,r,r,r,r,r,r],
[r,1,r,r,r,r,r,r,r,r],
[r,r,1,r,r,r,r,r,r,r],
[r,r,r,1,r,r,r,r,r,r],
[r,r,r,r,1,r,r,r,r,r],
[r,r,r,r,r,1,r,r,r,r],
[r,r,r,r,r,r,1,r,r,r],
[r,r,r,r,r,r,r,1,r,r],
[r,r,r,r,r,r,r,r,1,r],
[r,r,r,r,r,r,r,r,r,1]
])
for i in range(100):
smpl = np.random.multivariate_normal(mean,cov,int(n/n))
smplmc.append(np.max(smpl))
# distribution (just max of gaussian normal)
xm = np.arange(-1,5,0.001)
corf = (1-r)/np.sqrt(1-r**2)
ym = n*stats.norm.cdf(xm)**(n-1)*stats.norm.pdf(xm)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(smplm,100,normed=1,facecolor=twocol[0],alpha=0.75,lw=0)
ax.plot(xm,ym,color=twocol[1],lw=3)
ax.set_xlim([-1,5])
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulate very large RF
Step2: Show part of the RF (20x20x1)
Step3: Save RF
Step4: Run fsl cluster to extract local maxima
Step5: Read and print the top of the file with peaks, and print the total number of peaks.
Step6: Plot histogram local maxima
Step7: 2. First approach
Step8: Compute density function over a range with different excursion thresholds
Step9: In this figure, we see the observed tail distribution of the local maxima (light colored) in our simulated data. The thick line represents the theoretical distribution of the local maxima above a certain threshold. It is only a good approximation for really high thresholds.
Step10: Now we'll look at the number of peaks and the Euler Characteristic against the threshold.
Step11: In this plot we can indeed see that the Euler Characteristic gives the number of peaks, but only above a threshold that is high enough. Below these higher thresholds, it gives # peaks - # holes. Is there a way to estimate the number of holes in the presence of peaks using the EC? I don't think so; it's the exact same problem as the number of peaks in the presence of holes. Therefore the Euler Characteristic cannot give us information for lower thresholds.
Step12: Look at the distribution of the voxels?
Step13: So how about the distribution of the maximum of a sample of these distributions?
Step14: As expected, from a certain smoothness (3 x voxel size), the distribution remains the same.
Step15: Now can we just state that the peaks in our unsmoothed random field are the maxima of a sample?
Step16: Show the histogram of peaks of the uncorrected field together with the distribution of the maximum of a sample of size n (the density used for this comparison is written out after these steps).
Step17: Ok, I'm stuck.
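For reference, the curve n*stats.norm.cdf(x)**(n-1)*stats.norm.pdf(x) used in the comparisons above is the density of the maximum of n independent standard normal variables; a short sketch of where it comes from:
F_{\max}(x) = P(\max(Z_1, \dots, Z_n) \le x) = \Phi(x)^n
f_{\max}(x) = \frac{d}{dx}\, \Phi(x)^n = n \, \Phi(x)^{n-1} \, \phi(x)
where \Phi and \phi are the standard normal CDF and PDF. The derivation assumes the n values are independent, which is one reason the correlated (multivariate normal) examples above deviate from this curve.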
|
15,624 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig = plt.figure()
ax = fig.add_subplot(111)
N = 10
x = np.random.rand(N)
y = np.random.rand(N)
z = np.random.rand(N)
circles, triangles, dots = ax.plot(x, 'ro', y, 'g^', z, 'b.')
ax.set_ylim(0,1)
plt.axis('off')
def update(data):
circles.set_ydata(data[0])
triangles.set_ydata(data[1])
return circles, triangles
def generate():
while True:
yield np.random.rand(2, N)
anim = animation.FuncAnimation(fig, update, generate, interval=500)
plt.show()
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Use the meshgrid function to create a 2D coordinate grid
u = np.linspace(-1,1,100)
x, y = np.meshgrid(u, u)
z = x**2 + y**2
# Specify the row and column strides, and the color map used to draw the surface
ax.plot_surface(x, y, z, rstride=4, cstride=4, cmap=cm.rainbow_r)
plt.show()
x
fig = plt.figure(figsize=(3,5))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
u = np.linspace(-1, 1, 100)
x, y = np.meshgrid(u, u)
z = x**2 + y**2
ax1.contour(x, y, z)
ax2.contourf(x, y, z)
plt.show()
fig = plt.figure()
ax = fig.gca(projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3, cmap=cm.winter_r)
# Draw the contour projections
cset = ax.contourf(X, Y, Z, zdir='z', offset=-100, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='x', offset=-40, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='y', offset=40, cmap=cm.coolwarm)
ax.set_xlabel('X')
ax.set_xlim(-40, 40)
ax.set_ylabel('Y')
ax.set_ylim(-40, 40)
ax.set_zlabel('Z')
ax.set_zlim(-100, 100)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Animation
Step2: 2. 3D plotting
Step3: 3. Contour plots
Step4: 4. Combining 3D surface plots and contour plots
|
15,625 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
s
s = pd.Series([1,3,5,np.nan,6,8])
s
d = {'a' : 0., 'b' : 1., 'c' : 2.}
pd.Series(d)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data structures
Step2: From dict
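A small additional sketch (not part of the original cells) of the from-dict behaviour: if an explicit index is passed, values are looked up by key and keys missing from the dict come out as NaN.
import pandas as pd

d = {'a': 0., 'b': 1., 'c': 2.}
# 'e' is not a key of d, so its entry becomes NaN
pd.Series(d, index=['b', 'c', 'e', 'a'])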
|
15,626 | <ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer
# import modules from biothings_explorer
from biothings_explorer.hint import Hint
from biothings_explorer.user_query_dispatcher import FindConnection
from biothings_explorer.hint import Hint
ht = Hint()
parkDis = ht.query("Parkinson disease")['DiseaseOrPhenotypicFeature'][0]
parkDis
from biothings_explorer.user_query_dispatcher import FindConnection
fc = FindConnection(input_obj=parkDis, output_obj='ChemicalSubstance', intermediate_nodes=['Gene'])
fc.connect(verbose=True)
df = fc.display_table_view()
dfFilt = df.loc[df['output_name'].notnull()].query('pred1 == "causedBy" and pred2 == "targetedBy"')
dfFilt
dfFiltUnique = dfFilt[["input","node1_name","output_name"]].drop_duplicates()
dfFiltUnique
import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_colwidth', 2000)
print("Num genes: "+str(len(dfFiltUnique["node1_name"].value_counts()))+"\n")
print(dfFiltUnique["node1_name"].value_counts())
knownPDgenes = dfFilt[dfFilt["node1_name"].isin(["LRRK2","COMT","DRD2","SNCA"])][["input","node1_name","output_name"]].drop_duplicates().sort_values("node1_name")
compounds = knownPDgenes.groupby(['node1_name'])['output_name'].apply(','.join)
count = knownPDgenes.groupby(['node1_name'])['output_name'].count()
result = pd.DataFrame({ 'count': count, 'compounds': compounds } )
result
dfFilt.query('node1_name == "LCN2"')[["output_id","output_name"]].drop_duplicates()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then import the relevant modules
Step2: Step 1
Step3: Step 2
Step4: The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (Parkinson's disease) to an intermediate node (a gene or protein) to an ending node (a chemical compound). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). Let's remove all examples where the output_name (the compound label) is None, and specifically focus on paths with specific mechanistic predicates causedBy and targetedBy.
Step5: Let's examine how many unique Parkinsons - GENE - DRUG paths there are
Step6: Results
Step7: Within this list, we found a number of genes associated with therapies for Parkinson's Disease that are either approved or in development, including DRD2, COMT, LRRK2, and SNCA. We can also view the many chemical compounds that are known to modulate those genes using the query below
Step8: The other genes in that list all have some link to Parkinson's Disease and have known chemical modulators. In some cases, these genes reflect relatively recent findings that may represent novel therapeutic candidates. For example, BTE found a link between the gene LCN2 and Parkinson's Disease based both on upregulation in the brains of PD patients and on genetic evidence in mouse models (PMID
|
15,627 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from sympy.interactive import printing
printing.init_printing()
from frame import *
import sympy as sp
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as linalg
class Frame_Buckling(LinearFrame):
def N_local_stress(self,element):
        """Returns the normal forces of an element."""
Ke= self.K_local()
Ue= self.U_e_local_coord(element)
F=Ke*Ue
N_local = F[3]
return N_local
def N_local_stress_tot(self):
        """Returns the normal force of all elements."""
Ns=[self.N_local_stress(e)for e in range (self.nelements)]
return Ns
def G_local(self):
        """Returns the local (element) geometric stiffness matrix"""
L = sp.Symbol('L')
s = sp.Symbol('s')
S=self.S()
Ge=sp.Matrix([[sp.integrate(S[1,i_local].diff(s)*S[1,j_local].diff(s),(s,0,L) )for i_local in range(6)] for j_local in range(6)])
return Ge
def G_local_rotated(self):
        """Gives the analytical expression of the local geometric stiffness matrix in the global coordinate system
        as a function of the orientation angle alpha"""
alpha = sp.Symbol("alpha")
R = self.rotation_matrix(alpha)
Ge = R.transpose()*self.G_local()*R
return Ge
def assemble_G(self):
        """Returns the assembled global geometric stiffness matrix"""
Ge = self.G_local_rotated()
G = np.zeros([self.ndof,self.ndof])
N0=self.N_local_stress_tot()
for e in range(self.nelements):
Gen = -N0[e].subs({'EI': self.EI[e], 'ES': self.ES[e], 'L': self.Ls[e], 'alpha': self.angles[e]})*Ge.subs({'EI': self.EI[e], 'ES': self.ES[e], 'L': self.Ls[e], 'alpha': self.angles[e]})
for i_local in range(6):
for j_local in range(6):
G[self.dof_map(e, i_local),self.dof_map(e, j_local)] += Gen[i_local,j_local]
return G
def bc_apply_G(self,G,blocked_dof):
        # zero the rows/columns of the blocked degrees of freedom and put 1 on the diagonal
        Gbc = G
        for dof in blocked_dof:
            Gbc[dof, :] = 0
            Gbc[:, dof] = 0
            Gbc[dof, dof] = 1
return Gbc
def full_power_method(A, niterations_max=50, tol=1e-15):
xn = np.zeros((len(A), niterations_max+1))
xn[:, 0] = np.ones((len(A),)) + 1e-7*np.random.rand(len(A))
rn = np.ones((niterations_max+1,))
for k in range(niterations_max):
xn[:,k] = xn[:,k] / np.linalg.norm(xn[:,k])
xn[:,k+1] = np.dot(A, xn[:,k])
rn[k+1] = np.sum(xn[:,k+1])/np.sum(xn[:,k])
if (abs(rn[k+1]-rn[k]) < tol):
break
if k < niterations_max:
rn[k+2:] = rn[k+1] # This ensures the later values are set to something sensible.
return (rn[k+1], rn, xn[:,k+1]/ np.linalg.norm(xn[:,k+1]))
def inverse_power_method(A, niterations_max=50, tol=1e-15):
xn = np.zeros((len(A), niterations_max+1))
xn[:, 0] = np.ones((len(A),)) + 1e-7*np.random.rand(len(A))
rn = np.ones((niterations_max+1,))
for k in range(niterations_max):
xn[:,k] = xn[:,k] / np.linalg.norm(xn[:,k])
xn[:,k+1] = np.linalg.solve(A, xn[:,k])
rn[k+1] = np.sum(xn[:,k+1])/np.sum(xn[:,k])
if (abs(rn[k+1]-rn[k]) < tol):
break
if k < niterations_max:
rn[k+2:] = rn[k+1] # This ensures the later values are set to something sensible.
return (1.0/rn[k+1], 1.0/rn, xn[:,k+1]/ np.linalg.norm(xn[:,k+1]))
E=1.3 #en MPa
h=7.5 #en mm
b=20. #en mm
Lx=55. #en mm
Lyh=60. #en mm
Lyb=45. #en mm
I=b*(h**3)/12 #en mm^4
S=b*h #en mm^2
eps=10**(-3)
g=9.81 #en m.s^(-2)
m=1 #en kg
n_elements = 10
xnodes = np.linspace(0,1000,n_elements + 1)
ynodes = np.linspace(0,0,n_elements + 1)
nodes = np.array([xnodes,ynodes]).transpose()
n_nodes = xnodes.size
elements=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9],[9,10]])
frame= Frame_Buckling(nodes,elements)
frame.plot_with_label()
ne = frame.nelements
ndof = frame.ndof
EI = np.ones(ne)*E*I
ES = np.ones(ne)*E*S
f_x = 0*np.ones(ne)
f_y = 0*np.ones(ne)
frame.set_distributed_loads(f_x, f_y)
frame.set_stiffness(EI, ES)
blocked_dof = np.array([0, 1, ndof-2])
bc_values = np.array([0, 0, 0])
K = frame.assemble_K()
F=frame.assemble_F()
#F[12]=F[12]-.5*EI[0]*np.pi**2
F[ndof-3]=F[ndof-3]-1.
Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values)
Usol = np.linalg.solve(Kbc,Fbc)
Usol
frame.set_displacement(Usol)
frame.plot_with_label()
frame.plot_displaced()
Gbc=frame.assemble_G()
G=frame.bc_apply_G(Gbc,blocked_dof)
Ks = sparse.csr_matrix(K)
Gs = sparse.csr_matrix(G)
val, vect = linalg.eigsh(Ks, 5, Gs, which = 'LA', sigma =4.)
print(val)
frame.set_displacement(10*vect[:,0])
frame.plot_with_label()
frame.plot_displaced()
E*I*np.pi**2/1000**2
E=1.3 #en MPa
h=7.5 #en mm
b=20. #en mm
Lx=55. #en mm
Lyh=60. #en mm
Lyb=45. #en mm
I=b*(h**3)/12 #en mm^4
S=b*h #en mm^2
eps=10**(-3)
g=9.81 #en m.s^(-2)
m=0.05 #en kg
nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]])
elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[5,6]])
frame= Frame_Buckling(nodes,elements)
frame.plot_with_label()
ne = frame.nelements
ndof = frame.ndof
EI = np.ones(ne)*E*I
ES = np.ones(ne)*E*S
EI[1]=100*E*I;EI[3]=100*E*I;EI[4]=100*E*I
ES[1]=100*E*S;ES[3]=100*E*S;ES[4]=100*E*S
f_x = 0*np.ones(7)
f_y = 0*np.ones(7)
frame.set_distributed_loads(f_x, f_y)
frame.set_stiffness(EI, ES)
blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1])
bc_values = np.array([0, 0, 0, 0, 0, 0])
K = frame.assemble_K()
F=frame.assemble_F()
#F[10]=F[10]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2
F[10]=F[10]-1.
Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values)
Usol = np.linalg.solve(Kbc,Fbc)
Usol
frame.set_displacement(Usol)
Ge=frame.N_local_stress_tot()
Gbc=frame.assemble_G()
G=frame.bc_apply_G(Gbc,blocked_dof)
Ks = sparse.csr_matrix(K)
Gs = sparse.csr_matrix(G)
val, vect = linalg.eigsh(Ks, 3, Gs, which = 'LA', sigma = 3.)
print(val)
print(vect[:,0])
frame.set_displacement(1*vect[:,0])
frame.plot_with_label()
frame.plot_displaced()
nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx/2,Lyh/2+Lyb],[Lx,0.]])
elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[1,6],[2,6],[4,6],[5,6],[5,7]])
frame= Frame_Buckling(nodes,elements)
frame.plot_with_label()
ne = frame.nelements
ndof = frame.ndof
EI = np.ones(ne)*E*I
ES = np.ones(ne)*E*S
EI[1]=100*E*I;EI[3]=100*E*I;EI[4]=100*E*I
ES[1]=100*E*S;ES[3]=100*E*S;ES[4]=100*E*S
f_x = 0*np.ones(ne)
f_y = 0*np.ones(ne)
frame.set_distributed_loads(f_x, f_y)
frame.set_stiffness(EI, ES)
blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1])
bc_values = np.array([0, 0, 0, 0, 0, 0])
K = frame.assemble_K()
F=frame.assemble_F()
#F[10]=F[10]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2
F[10]=F[10]-1.
Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values)
Usol = np.linalg.solve(Kbc,Fbc)
frame.set_displacement(Usol)
Gbc=frame.assemble_G()
G=frame.bc_apply_G(Gbc,blocked_dof)
Ks = sparse.csr_matrix(K)
Gs = sparse.csr_matrix(G)
val, vect = linalg.eigsh(Ks, 3, Gs, which = 'LA', sigma =4.)
print(val)
print(vect[:,1])
frame.set_displacement(1*vect[:,0])
frame.plot_with_label()
frame.plot_displaced()
nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]])
elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[5,6]])
frame= Frame_Buckling(nodes,elements)
frame.plot_with_label()
ne = frame.nelements
ndof = frame.ndof
EI = np.ones(ne)*E*I
ES = np.ones(ne)*E*S
f_x = 0*np.ones(7)
f_y = 0*np.ones(7)
frame.set_distributed_loads(f_x, f_y)
frame.set_stiffness(EI, ES)
blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1])
bc_values = np.array([0, 0, 0, 0, 0, 0])
K = frame.assemble_K()
F=frame.assemble_F()
#F[7]=F[7]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2
#F[13]=F[13]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2
F[7]=F[7]-1.
F[13]=F[13]-1.
Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values)
Usol = np.linalg.solve(Kbc,Fbc)
Usol
frame.set_displacement(Usol)
Gbc=frame.assemble_G()
G=frame.bc_apply_G(Gbc,blocked_dof)
Ks = sparse.csr_matrix(K)
Gs = sparse.csr_matrix(G)
val, vect = linalg.eigsh(Ks, 6, Gs, which = 'LA', sigma =1.2)
print(val)
print(vect[:,0])
frame.set_displacement(1*vect[:,0])
frame.plot_with_label()
frame.plot_displaced()
nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]])
elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[2,5],[0,5],[5,6]])
frame= Frame_Buckling(nodes,elements)
frame.plot_with_label()
ne = frame.nelements
ndof = frame.ndof
EI = np.ones(ne)*E*I
ES = np.ones(ne)*E*S
f_x = 0*np.ones(ne)
f_y = 0*np.ones(ne)
frame.set_distributed_loads(f_x, f_y)
frame.set_stiffness(EI, ES)
blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1])
bc_values = np.array([0, 0, 0, 0, 0, 0])
K = frame.assemble_K()
F=frame.assemble_F()
#F[7]=F[7]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2
#F[13]=F[13]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2
F[10]=F[10]-1.
Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values)
Usol = np.linalg.solve(Kbc,Fbc)
Usol
frame.set_displacement(Usol)
Gbc=frame.assemble_G()
G=frame.bc_apply_G(Gbc,blocked_dof)
Ks = sparse.csr_matrix(K)
Gs = sparse.csr_matrix(G)
val, vect = linalg.eigsh(Ks, 3, Gs, which = 'LA', sigma =9.)
print(val)
frame.set_displacement(1*vect[:,0])
frame.plot_with_label()
frame.plot_displaced()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Project on frame buckling
Step6: Test on exercise 2
Step7: Tests on other structures
Step8: Structure with two forces and without a reinforced storey
Step9: Cross-braced structure
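A short note on the math behind the eigsh calls above (a sketch, not part of the original notebook): linearised buckling is treated as the generalised eigenvalue problem
(K - \lambda K_g)\, u = 0 \quad \Longleftrightarrow \quad K u = \lambda K_g u
where K is the assembled elastic stiffness matrix, K_g the geometric stiffness matrix built from the normal forces of the pre-loaded state (assemble_G above), u the buckling mode, and the smallest positive eigenvalue \lambda the critical load multiplier of the applied unit load. scipy.sparse.linalg.eigsh(Ks, k, Gs, sigma=...) solves exactly this matrix pencil in shift-invert mode around sigma.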
|
15,628 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint # This is the numerical solver
def rhs(Y,t,omega): # this is the function of the right hand side of the ODE
y,ydot = Y
return ydot,-omega*omega*y
t_arr=np.linspace(0,2*np.pi,101)
y_init =[1,0]
omega = 2.0
y_arr=odeint(rhs,y_init,t_arr, args=(omega,))
y,ydot = y_arr[:,0],y_arr[:,1]
plt.ion()
plt.plot(t_arr,y,t_arr,ydot)
# Let's draw a phase portrait, plotting y and ydot together
plt.plot(y,ydot)
plt.title("Solution curve when omega = %4g" % omega)
plt.xlabel("y values")
plt.ylabel("ydot values")
t_arr=np.linspace(0,2*np.pi,101)
y_init =[1,0]
omega = 2.0
y_exact = y_init[0]*np.cos(omega*t_arr) + y_init[1]*np.sin(omega*t_arr)/omega
ydot_exact = -omega*y_init[0]*np.sin(omega*t_arr) + y_init[1]*np.cos(omega*t_arr)
y_arr=odeint(rhs,y_init,t_arr, args=(omega,))
y,ydot = y_arr[:,0],y_arr[:,1]
plt.plot(t_arr,y,t_arr,y_exact)
# We plot the difference
plt.plot(t_arr,y-y_exact)
numsteps=1000001 # adjust this parameter
y_init =[1,0]
omega = 2.0
t_arr=np.linspace(0,2*np.pi,numsteps)
y_exact = y_init[0]*np.cos(omega*t_arr) + y_init[1]*np.sin(omega*t_arr)/omega
ydot_exact = -omega*y_init[0]*np.sin(omega*t_arr) + y_init[1]*np.cos(omega*t_arr)
y_arr=odeint(rhs,y_init,t_arr, args=(omega,))
y,ydot = y_arr[:,0],y_arr[:,1]
plt.plot(t_arr,y-y_exact)
numsteps=100001 # adjust this parameter
y_init =[1,0]
omega = 2.0
t_arr=np.linspace(0,2*1000*np.pi,numsteps)
y_exact = y_init[0]*np.cos(omega*t_arr) + y_init[1]*np.sin(omega*t_arr)/omega
ydot_exact = -omega*y_init[0]*np.sin(omega*t_arr) + y_init[1]*np.cos(omega*t_arr)
y_arr=odeint(rhs,y_init,t_arr, args=(omega,))
y,ydot = y_arr[:,0],y_arr[:,1]
plt.plot(t_arr,y-y_exact)
p1=np.size(t_arr)-1
p0=p1-100
plt.plot(t_arr[p0:p1],y[p0:p1],t_arr[p0:p1],y_exact[p0:p1])
plt.plot(t_arr[p0:p1],y[p0:p1]-y_exact[p0:p1])
plt.plot(t_arr[p0:p1],y_exact[p0:p1],t_arr[p0:p1],3000*(y[p0:p1]-y_exact[p0:p1]))
def rhsSIN(Y,t,omega): # this is the function of the right hand side of the ODE
y,ydot = Y
return ydot,-omega*omega*np.sin(y)
omega = .1 # basic frequency
epsilon = .5 # initial displacement, in radians
t_arr=np.linspace(0,2*100*np.pi,1001)
y_init =[epsilon,0]
# we first set up the exact solution for the linear oscillator
y_exact = y_init[0]*np.cos(omega*t_arr) + y_init[1]*np.sin(omega*t_arr)/omega
ydot_exact = -omega*y_init[0]*np.sin(omega*t_arr) + y_init[1]*np.cos(omega*t_arr)
y_arr=odeint(rhsSIN,y_init,t_arr, args=(omega,))
y,ydot = y_arr[:,0],y_arr[:,1]
plt.ion()
plt.plot(t_arr,y,t_arr,y_exact)
def rhsLZ(u,t,beta,rho,sigma):
x,y,z = u
return [sigma*(y-x), rho*x-y-x*z, x*y-beta*z]
sigma = 10.0
beta = 8.0/3.0
rho1 = 29.0
rho2 = 28.8 # two close values for rho give two very different curves
u01=[1.0,1.0,1.0]
u02=[1.0,1.0,1.0]
t=np.linspace(0.0,50.0,10001)
u1=odeint(rhsLZ,u01,t,args=(beta,rho1,sigma))
u2=odeint(rhsLZ,u02,t,args=(beta,rho2,sigma))
x1,y1,z1=u1[:,0],u1[:,1],u1[:,2]
x2,y2,z2=u2[:,0],u2[:,1],u2[:,2]
from mpl_toolkits.mplot3d import Axes3D
plt.ion()
fig=plt.figure()
ax=Axes3D(fig)
ax.plot(x1,y1,z1,'b-')
ax.plot(x2,y2,z2,'r:')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_title('Lorenz equations with rho = %g, %g' % (rho1,rho2))
fig=plt.figure()
ax=Axes3D(fig)
ax.plot(x1,y1,z1)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
fig=plt.figure()
ax=Axes3D(fig)
ax.plot(x2,y2,z2)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
def rhsVDP(y,t,mu):
return [ y[1], mu*(1-y[0]**2)*y[1] - y[0]]
def jac(y,t,mu):
return [ [0,1], [-2*mu*y[0]*y[1]-1, mu*(1-y[0]**2)]]
mu=100
t=np.linspace(0,300,10001)
y0=np.array([2.0,0.0])
y,info=odeint(rhsVDP,y0,t,args=(mu,),Dfun=jac,full_output=True)
print("mu = %g, number of Jacobian calls is %d", mu, info['nje'][-1])
plt.plot(t,y)
plt.plot(t,y[:,0])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We start with a second order linear equation that has the usual harmonic oscillator solutions.
Step2: Now, I would like to test how accurate this numerical code is, by comparing the exact solution with the numerical solution. The exact solution is given by the initial values of y_init, and omega, and involves cosines and sines.
Step3: So, in the test I did above, we see an error that oscillates and grows with time, getting to about size 2x10^(-7), which is single precision accuracy.
Step4: Okay, I went up to one million steps, and the error only reduced to about 1.0x10^(-7). Not much of an improvement.
Step5: Interesting. My little test shows the error growing linearly with the length of time. For the first time span, 2x10^(-7). For 10 times longer, 2x10^(-6). For 100 times longer, 2x10^(-5). And so on.
Step6: Ah ha! This looks like the negative derivative of the solution, which indicates we have a phase error. With a phase error, the difference we see is essentially a small multiple of the derivative of the solution (a short sketch of this is given after these steps).
Step7: Looking at the above, we see they don't quite line up. So a bit of phase error, a bit of amplitude error.
Step8: With epsilon = 0.1 (radians, which is about 5.7 degrees), it is hard to see a period shift. With epsilon = 0.5 (radians, which is just under 30 degrees), we clearly see a shift after ten cycles of oscillation.
Step9: You should play around with the time interval (in the definition of variable t) to observe the predictable, followed by chaotic, behaviour. And play with other parameters.
Step10: Try playing with the mu parameter. mu=0 gives the harmonic oscillator. mu=10 starts giving pointy spikes. For big mu, you might want to increase the range of t values, from [0,30] to a larger interval like [0,100]. Etc.
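A short sketch of the phase-error argument referenced in the notes above, assuming the numerical solution is (to leading order) the exact solution shifted by a small time delta:
y(t) = y_0 \cos(\omega t) + \frac{\dot y_0}{\omega} \sin(\omega t)
y_{num}(t) \approx y(t + \delta) \;\Rightarrow\; y_{num}(t) - y(t) \approx \delta \, \dot y(t)
So a pure phase error shows up as a small multiple of the derivative of the exact solution, while a pure amplitude error would look like a multiple of the solution itself; the overlaid plot of y_exact against the scaled difference above is checking exactly this distinction.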
|
15,629 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
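For reference, the losses built in the code above (sigmoid cross-entropy with one-sided label smoothing) can be written as follows; this is a sketch of the math only:
L_D = \mathbb{E}_x\big[\mathrm{CE}\big(D(x),\, 1 - s\big)\big] + \mathbb{E}_z\big[\mathrm{CE}\big(D(G(z)),\, 0\big)\big]
L_G = \mathbb{E}_z\big[\mathrm{CE}\big(D(G(z)),\, 1\big)\big]
where CE(logit, t) is the sigmoid cross-entropy against target t, s is the smoothing factor (smooth = 0.1 in the code), and the generator uses the non-saturating "label the fakes as real" form rather than minimising log(1 - D(G(z))).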
|
15,630 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
titanic_data = pd.read_csv('train.csv')
titanic_data.head(5)
titanic_data.info()
titanic_data.Age = titanic_data.Age.fillna(np.mean(titanic_data.Age))
titanic_data.info()
survivors = titanic_data[titanic_data.Survived == 1]
survivor_prob = (len(survivors) / len(titanic_data))
print("There is a " + str(survivor_prob) + " percent chance of survival.")
male_passenger = titanic_data[titanic_data.Sex == 'male']
prob_male = (len(male_passenger) / len(titanic_data))
print("There is a " + str(prob_male) + " percent probability that a passenger was male.")
c_port = survivors[survivors.Embarked == 'C']
prob_c = (len(c_port) / len(survivors))
print("There is a " + str(prob_c) + " percent probability that a survivor was from Cherbourg.")
import matplotlib.pyplot as plt
%matplotlib inline
all_ages = []
age_mean = np.mean(titanic_data.Age)
for i, k in enumerate(titanic_data.Age):
if round(k, 3) != round(age_mean, 3):
all_ages.append(k)
H, edges = np.histogram(all_ages, bins=25)
ax = plt.subplot(111)
ax.bar(edges[:-1], H / float(sum(H)), width=edges[1] - edges[0])
ax.set_xlabel("Passenger Age")
ax.set_ylabel("Frequency of Being on Board")
ax.minorticks_on()
plt.show()
less_then_ten = []
for i in all_ages:
if i < 10:
less_then_ten.append(i)
prob_less_then_ten = (len(less_then_ten) / len(all_ages))
print("There is a " + str(round(prob_less_then_ten, 3)) + " probabililty that a passenger was less then 10 year old.")
from scipy.stats import binom
binom.pmf(42, 100, survivor_prob)
1 - binom.cdf(42, 100, survivor_prob)
from scipy.stats import ttest_ind
survivors_male = survivors[(survivors.Sex == 'male') & (round(survivors.Age,3) != round(age_mean, 3)) ]
survivors_female = survivors[(survivors.Sex == 'female') & (round(survivors.Age, 3) != round(age_mean, 3))]
t_stat, p_value = ttest_ind(survivors_male.Age, survivors_female.Age)
print("Results:\n\tt-statistic: %.5f\n\tp-value: %.5f" % (t_stat, p_value))
plt.figure(figsize=(10, 4))
opacity = 0.5
plt.hist(survivors_male.Age, bins=np.arange(0, 90, 5), alpha=opacity, label="Males")
plt.hist(survivors_female.Age, bins=np.arange(0, 90, 5), alpha=opacity, label="Females")
plt.legend()
plt.title("Age Distribution of Female and Male Survivors")
plt.xlabel("Ages")
plt.ylabel("Survival")
plt.show()
from scipy.stats import ttest_ind
fare_from_q = titanic_data[titanic_data.Embarked == 'Q']
fare_from_c = titanic_data[titanic_data.Embarked == 'C']
t_stat, p_value = ttest_ind(fare_from_q.Fare, fare_from_c.Fare)
print("Results:\n\tt-statistic: %.5f\n\tp-value: %g" % (t_stat, p_value))
plt.figure(figsize=(10, 4))
opacity = 0.5
plt.hist(fare_from_q.Fare, bins=np.arange(0, 90, 5), alpha=opacity, label="Queenstown")
plt.hist(fare_from_c.Fare, bins=np.arange(0, 90, 5), alpha=opacity, label="Cherbourg")
plt.legend()
plt.title("Fare Distribution from Queenstown to Cherbourg")
plt.xlabel("Fare Price")
plt.ylabel("Number of Passengers")
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cleaning and filling data
Step2: Filling all NaN ages with the mean of all the ages and confirming with the .info() method. We later compensate for this with functions that remove the mean. This will be pointed out with a 'COMPENSATION
Step3: Question 1
Step4: Question 2
Step5: Our second choice of question is to find the probability that a survivor was from Cherbourg.
Step6: Question 3
Step7: Question 4
Step8: Question 5
Step9: There is a 0.0613 probability that exactly 42 passengers survive out of 100. See above 'Out' for a more precise probability (the binomial formulas used here are written out after these steps).
Step10: There is a 0.198 probability that at least 42 of those 100 passengers survive. See above 'Out' for a more precise probability.
Step11: There is no significant difference between the ages of female and male survivors, because the p-value is greater than 0.05.
Step12: Question 7
Step13: There is a statistically significant difference in the fares paid by passengers from Queenstown and Cherbourg. This is indicated by the p-value, which is less than 0.01.
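For reference, the quantities computed with scipy.stats.binom above come from the binomial distribution with n = 100 trials and success probability p equal to the empirical survival rate; a sketch of the formulas:
P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}
P(X > k) = 1 - F(k) = \sum_{j=k+1}^{n} \binom{n}{j} p^j (1-p)^{n-j}
So binom.pmf(42, 100, p) is P(X = 42) and 1 - binom.cdf(42, 100, p) is P(X > 42); for "at least 42" in the strict sense one would use 1 - binom.cdf(41, 100, p).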
|
15,631 | <ASSISTANT_TASK:>
Python Code:
import pkg_resources
import sys
import os
import time
from urllib.request import urlretrieve
import indra.util.get_version
import indra.java_vm # make sure INDRA is in charge of the JVM
import pybel
import pybel_tools
from pybel_tools.visualization import to_jupyter
%%bash
java -showversion 2>&1 | head -n 4
%%bash
locale
print(sys.version)
print(time.asctime())
os.environ['CLASSPATH']
pybel.utils.get_version()
pybel_tools.utils.get_version()
pkg_resources.get_distribution("jnius-indra").version
indra.util.get_version.get_version()
apoptosis_url = 'https://www.wikipathways.org//wpi/wpi.php?action=downloadFile&type=owl&pwTitle=Pathway:WP254&oldid=95107'
apoptosis_path = os.path.join(os.path.expanduser('~'), 'Downloads', 'WP254_95107.owl')
if not os.path.exists(apoptosis_path):
urlretrieve(apoptosis_url, apoptosis_path)
graph = pybel.from_biopax(apoptosis_path)
to_jupyter(graph)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Environment
Step2: Dependencies
Step3: Data
Step4: Conversion
|
15,632 | <ASSISTANT_TASK:>
Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# import injector lattice
from ocelot.test.workshop.injector_lattice import *
# load beam distribution
# this function convert Astra beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
# in order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.astra2ocelot import *
phi1=18.7268
V1=18.50662e-3/np.cos(phi1*pi/180)
C_A1_1_1_I1.v = V1; C_A1_1_1_I1.phi = phi1
C_A1_1_2_I1.v = V1; C_A1_1_2_I1.phi = phi1
C_A1_1_3_I1.v = V1; C_A1_1_3_I1.phi = phi1
C_A1_1_4_I1.v = V1; C_A1_1_4_I1.phi = phi1
C_A1_1_5_I1.v = V1; C_A1_1_5_I1.phi = phi1
C_A1_1_6_I1.v = V1; C_A1_1_6_I1.phi = phi1
C_A1_1_7_I1.v = V1; C_A1_1_7_I1.phi = phi1
C_A1_1_8_I1.v = V1; C_A1_1_8_I1.phi = phi1
phi13=180
V13=-20.2E-3/8/np.cos(phi13*pi/180)
C3_AH1_1_1_I1.v=V13; C3_AH1_1_1_I1.phi=phi13
C3_AH1_1_2_I1.v=V13; C3_AH1_1_2_I1.phi=phi13
C3_AH1_1_3_I1.v=V13; C3_AH1_1_3_I1.phi=phi13
C3_AH1_1_4_I1.v=V13; C3_AH1_1_4_I1.phi=phi13
C3_AH1_1_5_I1.v=V13; C3_AH1_1_5_I1.phi=phi13
C3_AH1_1_6_I1.v=V13; C3_AH1_1_6_I1.phi=phi13
C3_AH1_1_7_I1.v=V13; C3_AH1_1_7_I1.phi=phi13
C3_AH1_1_8_I1.v=V13; C3_AH1_1_8_I1.phi=phi13
p_array_init = astraBeam2particleArray(filename='beam_6MeV.ast')
bins_start, hist_start = get_current(p_array_init, charge=p_array_init.q_array[0], num_bins=200)
plt.title("current: end")
plt.plot(bins_start*1000, hist_start)
plt.xlabel("s, mm")
plt.ylabel("I, A")
plt.grid(True)
plt.show()
# initialization of tracking method
method = MethodTM()
# for second order tracking we have to choose SecondTM
method.global_method = SecondTM
# for first order tracking uncomment next line
# method.global_method = TransferMap
# we will start simulation from point 3.2 from the gun. For this purpose marker was created (start_sim=Marker())
# and placed in 3.2 m after gun
# Q_38_I1 is quadrupole between RF cavities 1.3 GHz and 3.9 GHz
# C3_AH1_1_8_I1 is the last section of the 3.9 GHz cavity
lat = MagneticLattice(cell, start=start_sim, stop=Q_38_I1, method=method)
sc1 = SpaceCharge()
sc1.nmesh_xyz = [63, 63, 63]
sc1.low_order_kick = False
sc1.step = 1
sc5 = SpaceCharge()
sc5.nmesh_xyz = [63, 63, 63]
sc5.step = 5
sc5.low_order_kick = False
navi = Navigator(lat)
# add physics processes from the first element to the last of the lattice
navi.add_physics_proc(sc1, lat.sequence[0], C_A1_1_2_I1)
navi.add_physics_proc(sc5, C_A1_1_2_I1, lat.sequence[-1])
# definiing of unit step in [m]
navi.unit_step = 0.02
# deep copy of the initial beam distribution
p_array = deepcopy(p_array_init)
start = time()
tws_track, p_array = track(lat, p_array, navi)
print("time exec: ", time() - start, "sec")
# you can change top_plot argument, for example top_plot=["alpha_x", "alpha_y"]
plot_opt_func(lat, tws_track, top_plot=["E"], fig_name=0, legend=False)
plt.show()
sa, bx_sc, by_sc, bx_wo_sc, by_wo_sc = np.loadtxt("astra_sim.txt", usecols=(0, 1, 2, 3, 4), unpack=True)
s = [tw.s for tw in tws_track]
bx = [tw.beta_x for tw in tws_track]
by = [tw.beta_y for tw in tws_track]
ax = plot_API(lat, legend=False)
ax.plot(s, bx, "r", label="Ocelot, bx")
ax.plot(sa-3.2, bx_sc, "b-",label="ASTRA, bx")
ax.plot(s, by, "r", label="Ocelot, by")
ax.plot(sa-3.2, by_sc, "b-",label="ASTRA, by")
ax.legend()
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Change RF parameters for the comparison with ASTRA
Step2: Initializing SpaceCharge
Step3: Comparison with ASTRA
|
15,633 | <ASSISTANT_TASK:>
Python Code:
def print_full(x):
pd.set_option('display.max_rows', len(x))
print(x)
pd.reset_option('display.max_rows')
def get_energy():
import pandas as pd
import numpy as np
energy = pd.read_excel('Energy Indicators.xls', skiprows=16, skip_footer=38, usecols=range(2,6), names=['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable'])
energy.drop(energy.index[0], inplace=True)
energy.replace('...', np.nan, inplace=True)
energy['Energy Supply'] = energy['Energy Supply'] * 1000000
energy['Country'] = energy['Country'].str.replace('\d+', '')
energy['Country'] = energy['Country'].str.replace(r"\s\(.*\)","")
di = {"Republic of Korea": "South Korea", "United States of America": "United States", "United Kingdom of Great Britain and Northern Ireland": "United Kingdom", "China, Hong Kong Special Administrative Region": "Hong Kong"}
energy["Country"].replace(di, inplace=True)
return energy
def get_gdp():
import pandas as pd
import numpy as np
GDP = pd.read_csv('world_bank.csv', header=4, usecols=['Country Name', 'Country Code', 'Indicator Code', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014','2015'])
di = {"Korea, Rep.": "South Korea", "Iran, Islamic Rep.": "Iran", "Hong Kong SAR, China": "Hong Kong"}
GDP["Country Name"].replace(di, inplace=True)
GDP.rename(columns={'Country Name':'Country'}, inplace=True)
return GDP
def get_ScimEn():
import pandas as pd
import numpy as np
ScimEn = pd.read_excel('scimagojr-3.xlsx')
return ScimEn
def answer_one():
import pandas as pd
import numpy as np
energy = get_energy()
GDP = get_gdp()
ScimEn = get_ScimEn()
base = pd.merge(ScimEn.head(16), energy, how='inner')
answer = pd.merge(base, GDP)
answer.set_index('Country', inplace=True)
del answer['Country Code']
del answer['Indicator Code']
return answer
answer_one()
%%HTML
<svg width="800" height="300">
<circle cx="150" cy="180" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="blue" />
<circle cx="200" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="red" />
<circle cx="100" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="green" />
<line x1="150" y1="125" x2="300" y2="150" stroke="black" stroke-width="2" fill="black" stroke-dasharray="5,3"/>
<text x="300" y="165" font-family="Verdana" font-size="35">Everything but this!</text>
</svg>
def answer_two():
#227
import pandas as pd
import numpy as np
energy = get_energy()
GDP = get_gdp()
ScimEn = get_ScimEn()
base = pd.merge(ScimEn, energy, how='inner')
answer = pd.merge(base, GDP, how='inner')
base1 = pd.merge(ScimEn, energy, how='outer')
    answer1 = pd.merge(base1, GDP, how='outer')
return (len(answer1) - len(answer))
def answer_three():
Top15 = answer_one()
Top15['avgGDP'] = Top15[['2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']].mean(axis=1)
return Top15['avgGDP'].sort_values(ascending=False)
def answer_four():
Top15 = answer_one()
answer = Top15['2015'].iloc[3] - Top15['2006'].iloc[3]
return answer
answer_four()
def answer_five():
Top15 = answer_one()
return Top15['Energy Supply per Capita'].mean()
def answer_six():
Top15 = answer_one()
answer = (Top15['% Renewable'].idxmax(), Top15['% Renewable'].max())
return answer
answer_six()
def answer_seven():
Top15 = answer_one()
Top15['Cita_ratio'] = Top15['Self-citations']/Top15['Citations']
answer = (Top15['Cita_ratio'].idxmax(), Top15['Cita_ratio'].max())
return answer
answer_seven()
def answer_eight():
Top15 = answer_one()
Top15['Pop_est'] = Top15['Energy Supply']/Top15['Energy Supply per Capita']
answer = Top15['Pop_est'].sort_values(ascending=False)
return answer.index[2]
answer_eight()
def answer_nine():
Top15 = answer_one()
Top15['Pop_est'] = Top15['Energy Supply']/Top15['Energy Supply per Capita']
Top15['Est_cite_doc'] = Top15['Citable documents']/Top15['Pop_est']
answer = Top15.corr().iloc[8,-1]
return answer
answer_nine()
#
#plot9() # Be sure to comment out plot9() before submitting the assignment!
def answer_ten():
import pandas as pd
import numpy as np
Top15 = answer_one()
Top15['HighRenew'] = np.where(Top15['% Renewable'] >= (Top15['% Renewable'].median()), 1, 0)
return Top15['HighRenew']
answer_ten()
#df = df.set_index('STNAME').groupby(level=0)['CENSUS2010POP'].agg({'avg': np.average})
#pd.cut(df['avg'],5, labels =['Tiny', 'Small', 'Medium', 'Large', 'Heouge'] )
def answer_eleven():
Top15 = answer_one()
return "ANSWER"
def answer_twelve():
Top15 = answer_one()
return "ANSWER"
def answer_thirteen():
Top15 = answer_one()
Top15['PopEst'] = Top15['Energy Supply']/Top15['Energy Supply per Capita']
answer = Top15['PopEst'].sort_values(ascending=False)
PopEst = answer.map('{:,}'.format)
return PopEst
#answer.astype('str')
answer_thirteen()
def plot_optional():
import matplotlib as plt
%matplotlib inline
Top15 = answer_one()
ax = Top15.plot(x='Rank', y='% Renewable', kind='scatter',
c=['#e41a1c','#377eb8','#e41a1c','#4daf4a','#4daf4a','#377eb8','#4daf4a','#e41a1c',
'#4daf4a','#e41a1c','#4daf4a','#4daf4a','#e41a1c','#dede00','#ff7f00'],
xticks=range(1,16), s=6*Top15['2014']/10**10, alpha=.75, figsize=[16,6]);
for i, txt in enumerate(Top15.index):
ax.annotate(txt, [Top15['Rank'][i], Top15['% Renewable'][i]], ha='center')
print("This is an example of a visualization that can be created to help understand the data. \
This is a bubble chart showing % Renewable vs. Rank. The size of the bubble corresponds to the countries' \
2014 GDP, and the color corresponds to the continent.")
#plot_optional() # Be sure to comment out plot_optional() before submitting the assignment!
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 2 (6.6%)
Step2: <br>
Step3: Question 4 (6.6%)
Step4: Question 5 (6.6%)
Step5: Question 6 (6.6%)
Step6: Question 7 (6.6%)
Step7: Question 8 (6.6%)
Step8: Question 9 (6.6%)
Step9: Question 10 (6.6%)
Step10: Question 11 (6.6%)
Step11: Question 12 (6.6%)
Step12: Question 13 (6.6%)
Step13: Optional
|
15,634 | <ASSISTANT_TASK:>
Python Code:
# Preview first 5 edges
list(g.edges(data=True))[0:5]
# Preview first 10 nodes
list(g.nodes(data=True))[0:10]
## Summary Stats
print('# of edges: {}'.format(g.number_of_edges()))
print('# of nodes: {}'.format(g.number_of_nodes()))
# Define node positions data structure (dict) for plotting
node_positions = {node[0]: (node[1]['X'], -node[1]['Y']) for node in g.nodes(data=True)}
# Preview of node_positions with a bit of hack (there is no head/slice method for dictionaries).
dict(list(node_positions.items())[0:5])
# Define data structure (list) of edge colors for plotting
edge_colors = [e[2]['attr_dict']['color'] for e in g.edges(data=True)]
# Preview first 10
edge_colors[0:10]
plt.figure(figsize=(8, 6))
nx.draw(g, pos=node_positions, edge_color=edge_colors, node_size=10, node_color='black')
plt.title('Graph Representation of Sleeping Giant Trail Map', size=15)
plt.show()
list(g.nodes(data=True))
# Calculate list of nodes with odd degree
nodes_odd_degree = [v for v, d in g.degree() if d % 2 ==1]
# Preview
(nodes_odd_degree[0:5])
print('Number of nodes of odd degree: {}'.format(len(nodes_odd_degree)))
print('Number of total nodes: {}'.format(len(g.nodes())))
# Compute all pairs of odd nodes. in a list of tuples
odd_node_pairs = list(itertools.combinations(nodes_odd_degree, 2))
# Preview pairs of odd degree nodes
odd_node_pairs[0:10]
print('Number of pairs: {}'.format(len(odd_node_pairs)))
def get_shortest_paths_distances(graph, pairs, edge_weight_name):
    """Compute shortest distance between each pair of nodes in a graph. Return a dictionary keyed on node pairs (tuples)."""
distances = {}
for pair in pairs:
distances[pair] = nx.dijkstra_path_length(graph, pair[0], pair[1], weight=edge_weight_name)
return distances
# Compute shortest paths. Return a dictionary with node pairs keys and a single value equal to shortest path distance.
odd_node_pairs_shortest_paths = get_shortest_paths_distances(g, odd_node_pairs, 'distance')
# Preview with a bit of hack (there is no head/slice method for dictionaries).
dict(list(odd_node_pairs_shortest_paths.items())[0:10])
def create_complete_graph(pair_weights, flip_weights=True):
    """Create a completely connected graph using a list of vertex pairs and the shortest path distances between them
    Parameters:
        pair_weights: list[tuple] from the output of get_shortest_paths_distances
        flip_weights: Boolean. Should we negate the edge attribute in pair_weights?
    """
g = nx.Graph()
for k, v in pair_weights.items():
wt_i = - v if flip_weights else v
g.add_edge(k[0], k[1], attr_dict={'distance': v, 'weight': wt_i})
return g
# Generate the complete graph
g_odd_complete = create_complete_graph(odd_node_pairs_shortest_paths, flip_weights=True)
# Counts
print('Number of nodes: {}'.format(len(g_odd_complete.nodes())))
print('Number of edges: {}'.format(len(g_odd_complete.edges())))
# Plot the complete graph of odd-degree nodes
plt.figure(figsize=(8, 6))
pos_random = nx.random_layout(g_odd_complete)
nx.draw_networkx_nodes(g_odd_complete, node_positions, node_size=20, node_color="red")
nx.draw_networkx_edges(g_odd_complete, node_positions, alpha=0.1)
plt.axis('off')
plt.title('Complete Graph of Odd-degree Nodes')
plt.show()
# Compute min weight matching.
# Note: max_weight_matching uses the 'weight' attribute by default as the attribute to maximize.
odd_matching_dupes = nx.algorithms.max_weight_matching(g_odd_complete, True)
print('Number of edges in matching: {}'.format(len(odd_matching_dupes)))
odd_matching_dupes
list(odd_matching_dupes)
# Convert matching to list of deduped tuples
odd_matching = list(odd_matching_dupes)
# Counts
print('Number of edges in matching (deduped): {}'.format(len(odd_matching)))
plt.figure(figsize=(8, 6))
# Plot the complete graph of odd-degree nodes
nx.draw(g_odd_complete, pos=node_positions, node_size=20, alpha=0.05)
# Create a new graph to overlay on g_odd_complete with just the edges from the min weight matching
g_odd_complete_min_edges = nx.Graph(odd_matching)
nx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, edge_color='blue', node_color='red')
plt.title('Min Weight Matching on Complete Graph')
plt.show()
plt.figure(figsize=(8, 6))
# Plot the original trail map graph
nx.draw(g, pos=node_positions, node_size=20, alpha=0.1, node_color='black')
# Plot graph to overlay with just the edges from the min weight matching
nx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, alpha=1, node_color='red', edge_color='blue')
plt.title('Min Weight Matching on Original Graph')
plt.show()
def add_augmenting_path_to_graph(graph, min_weight_pairs):
    """Add the min weight matching edges to the original graph
    Parameters:
        graph: NetworkX graph (original graph from trailmap)
        min_weight_pairs: list[tuples] of node pairs from min weight matching
    Returns:
        augmented NetworkX graph
    """
# We need to make the augmented graph a MultiGraph so we can add parallel edges
graph_aug = nx.MultiGraph(graph.copy())
for pair in min_weight_pairs:
graph_aug.add_edge(pair[0],
pair[1],
attr_dict={'distance': nx.dijkstra_path_length(graph, pair[0], pair[1]),
'trail': 'augmented'}
)
return graph_aug
# Create augmented graph: add the min weight matching edges to g
g_aug = add_augmenting_path_to_graph(g, odd_matching)
# Counts
print('Number of edges in original graph: {}'.format(len(g.edges())))
print('Number of edges in augmented graph: {}'.format(len(g_aug.edges())))
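# Optional sanity check (editor's addition, not in the original): after augmenting, every node
# should have even degree, so the multigraph now admits an Eulerian circuit.
print('Augmented graph is Eulerian: {}'.format(nx.is_eulerian(g_aug)))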
naive_euler_circuit = list(nx.eulerian_circuit(g_aug, source='b_end_east'))
print('Length of eulerian circuit: {}'.format(len(naive_euler_circuit)))
naive_euler_circuit[0:10]
def create_eulerian_circuit(graph_augmented, graph_original, starting_node=None):
    """Create the eulerian path using only edges from the original graph."""
euler_circuit = []
naive_circuit = list(nx.eulerian_circuit(graph_augmented, source=starting_node))
for edge in naive_circuit:
edge_data = graph_augmented.get_edge_data(edge[0], edge[1])
#print(edge_data[0])
if edge_data[0]['attr_dict']['trail'] != 'augmented':
# If `edge` exists in original graph, grab the edge attributes and add to eulerian circuit.
edge_att = graph_original[edge[0]][edge[1]]
euler_circuit.append((edge[0], edge[1], edge_att))
else:
aug_path = nx.shortest_path(graph_original, edge[0], edge[1], weight='distance')
aug_path_pairs = list(zip(aug_path[:-1], aug_path[1:]))
print('Filling in edges for augmented edge: {}'.format(edge))
print('Augmenting path: {}'.format(' => '.join(aug_path)))
print('Augmenting path pairs: {}\n'.format(aug_path_pairs))
# If `edge` does not exist in original graph, find the shortest path between its nodes and
# add the edge attributes for each link in the shortest path.
for edge_aug in aug_path_pairs:
edge_aug_att = graph_original[edge_aug[0]][edge_aug[1]]
euler_circuit.append((edge_aug[0], edge_aug[1], edge_aug_att))
return euler_circuit
# Create the Eulerian circuit
euler_circuit = create_eulerian_circuit(g_aug, g, 'b_end_east')
print('Length of Eulerian circuit: {}'.format(len(euler_circuit)))
## CPP Solution
# Preview first 20 directions of CPP solution
for i, edge in enumerate(euler_circuit[0:20]):
print(i, edge)
# Computing some stats
total_mileage_of_circuit = sum([edge[2]['attr_dict']['distance'] for edge in euler_circuit])
total_mileage_on_orig_trail_map = sum(nx.get_edge_attributes(g, 'distance').values())
_vcn = pd.value_counts(pd.value_counts([(e[0]) for e in euler_circuit]), sort=False)
node_visits = pd.DataFrame({'n_visits': _vcn.index, 'n_nodes': _vcn.values})
_vce = pd.value_counts(pd.value_counts([sorted(e)[0] + sorted(e)[1] for e in nx.MultiDiGraph(euler_circuit).edges()]))
edge_visits = pd.DataFrame({'n_visits': _vce.index, 'n_edges': _vce.values})
# Printing stats
print('Mileage of circuit: {0:.2f}'.format(total_mileage_of_circuit))
print('Mileage on original trail map: {0:.2f}'.format(total_mileage_on_orig_trail_map))
print('Mileage retracing edges: {0:.2f}'.format(total_mileage_of_circuit-total_mileage_on_orig_trail_map))
#print('Percent of mileage retraced: {0:.2f}%\n'.format((1-total_mileage_of_circuit/total_mileage_on_orig_trail_map)*-100))
print('Number of edges in circuit: {}'.format(len(euler_circuit)))
print('Number of edges in original graph: {}'.format(len(g.edges())))
print('Number of nodes in original graph: {}\n'.format(len(g.nodes())))
print('Number of edges traversed more than once: {}\n'.format(len(euler_circuit)-len(g.edges())))
print('Number of times visiting each node:')
print(node_visits.to_string(index=False))
print('\nNumber of times visiting each edge:')
print(edge_visits.to_string(index=False))
def create_cpp_edgelist(euler_circuit):
    """Create the edgelist without parallel edge for the visualization
    Combine duplicate edges and keep track of their sequence and # of walks
    Parameters:
        euler_circuit: list[tuple] from create_eulerian_circuit
    """
cpp_edgelist = {}
for i, e in enumerate(euler_circuit):
edge = frozenset([e[0], e[1]])
if edge in cpp_edgelist:
cpp_edgelist[edge][2]['sequence'] += ', ' + str(i)
cpp_edgelist[edge][2]['visits'] += 1
else:
cpp_edgelist[edge] = e
cpp_edgelist[edge][2]['sequence'] = str(i)
cpp_edgelist[edge][2]['visits'] = 1
return list(cpp_edgelist.values())
cpp_edgelist = create_cpp_edgelist(euler_circuit)
print('Number of edges in CPP edge list: {}'.format(len(cpp_edgelist)))
cpp_edgelist[0:3]
g_cpp = nx.Graph(cpp_edgelist)
plt.figure(figsize=(14, 10))
visit_colors = {1:'lightgray', 2:'blue', 3: 'red', 4 : 'black', 5 : 'green'}
edge_colors = [visit_colors[e[2]['visits']] for e in g_cpp.edges(data=True)]
node_colors = ['red' if node in nodes_odd_degree else 'lightgray' for node in g_cpp.nodes()]
nx.draw_networkx(g_cpp, pos=node_positions, node_size=20, node_color=node_colors, edge_color=edge_colors, with_labels=False)
plt.axis('off')
plt.show()
plt.figure(figsize=(14, 10))
edge_colors = [e[2]['attr_dict']['color'] for e in g_cpp.edges(data=True)]
nx.draw_networkx(g_cpp, pos=node_positions, node_size=10, node_color='black', edge_color=edge_colors, with_labels=False, alpha=0.5)
bbox = {'ec':[1,1,1,0], 'fc':[1,1,1,0]} # hack to label edges over line (rather than breaking up line)
edge_labels = nx.get_edge_attributes(g_cpp, 'sequence')
nx.draw_networkx_edge_labels(g_cpp, pos=node_positions, edge_labels=edge_labels, bbox=bbox, font_size=6)
plt.axis('off')
plt.show()
visit_colors = {1:'lightgray', 2:'blue', 3: 'red', 4 : 'black', 5 : 'green'}
edge_cnter = {}
g_i_edge_colors = []
for i, e in enumerate(euler_circuit, start=1):
edge = frozenset([e[0], e[1]])
if edge in edge_cnter:
edge_cnter[edge] += 1
else:
edge_cnter[edge] = 1
# Full graph (faded in background)
nx.draw_networkx(g_cpp, pos=node_positions, node_size=6, node_color='gray', with_labels=False, alpha=0.07)
# Edges walked as of iteration i
euler_circuit_i = copy.deepcopy(euler_circuit[0:i])
for i in range(len(euler_circuit_i)):
edge_i = frozenset([euler_circuit_i[i][0], euler_circuit_i[i][1]])
euler_circuit_i[i][2]['visits_i'] = edge_cnter[edge_i]
g_i = nx.Graph(euler_circuit_i)
g_i_edge_colors = [visit_colors[e[2]['visits_i']] for e in g_i.edges(data=True)]
nx.draw_networkx_nodes(g_i, pos=node_positions, node_size=6, alpha=0.6, node_color='lightgray', with_labels=False, linewidths=0.1)
nx.draw_networkx_edges(g_i, pos=node_positions, edge_color=g_i_edge_colors, alpha=0.8)
plt.axis('off')
plt.savefig('img{}.png'.format(i), dpi=120, bbox_inches='tight')
plt.close()
import glob
import numpy as np
import imageio
import os
def make_circuit_video(image_path, movie_filename, fps=7):
# sorting filenames in order
filenames = glob.glob(image_path + 'img*.png')
filenames_sort_indices = np.argsort([int(os.path.basename(filename).split('.')[0][3:]) for filename in filenames])
filenames = [filenames[i] for i in filenames_sort_indices]
# make movie
with imageio.get_writer(movie_filename, mode='I', fps=fps) as writer:
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
make_circuit_video('', 'cpp_route_animation.gif', fps=3)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nodes
Step2: Visualize
Step3: Colors
Step4: Solving the Chinese Postman Problem is quite simple conceptually
Step6: CPP Step 2
Step8: Step 2.3
Step9: For a visual prop, the fully connected graph of odd degree node pairs is plotted below. Note that you preserve the X, Y coordinates of each node, but the edges do not necessarily represent actual trails. For example, two nodes could be connected by a single edge in this graph, but the shortest path between them could be 5 hops through even degree nodes (not shown here).
Step10: Step 2.4
Step11: The matching output (odd_matching_dupes) is a dictionary. Although there are 36 edges in this matching, you only want 18. Each edge-pair occurs twice (once with node 1 as the key and a second time with node 2 as the key of the dictionary).
Step12: To illustrate how this fits in with the original graph, you plot the same min weight pairs (blue lines), but over the trail map (faded) instead of the complete graph. Again, note that the blue lines are the bushwhacking route (as the crow flies edges, not actual trails). You still have a little bit of work to do to find the edges that comprise the shortest route between each pair in Step 3.
Step14: Step 2.5
Step15: CPP Step 3
Step17: Correct Circuit
Step18: Stats
Step20: Create CPP Graph
|
15,635 | <ASSISTANT_TASK:>
Python Code:
permutation = np.random.permutation(len(iris_target))
iris_data = np.take(iris_data, permutation, axis=0)
iris_target = np.take(iris_target, permutation)
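# Editor's note: iris_data, iris_target and iris_names are assumed to come from an earlier
# cell not shown in this excerpt; a minimal sketch of one way to produce them with sklearn:
#     from sklearn.datasets import load_iris
#     from sklearn.decomposition import IncrementalPCA
#     iris = load_iris()
#     iris_data, iris_target, iris_names = iris.data, iris.target, iris.target_names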
# Function to plot data
def plot_data(colors, names, data, target):
plt.figure(figsize=(8, 8))
N = len(names)
for color, i, target_name in zip(colors, list(range(N)), names):
plt.scatter(data[target == i, 0],
data[target == i, 1],
color=color, lw=2, label=target_name)
plt.title("Incremental PCA of iris dataset")
plt.show()
ipca = IncrementalPCA(n_components=2)
# Colors used in the plot. Each color represents a target.
colors = ['navy', 'turquoise', 'darkorange']
WINDOW_SIZE = 50
for step_number in range(3):
data = iris_data[
WINDOW_SIZE*step_number: WINDOW_SIZE*(step_number+1), :]
iris_data_principal_components = ipca.partial_fit(data)
transformed_data = iris_data_principal_components.transform(
iris_data[:WINDOW_SIZE*(step_number+1)])
target = iris_target[:WINDOW_SIZE*(step_number+1)]
plot_data(colors, iris_names, transformed_data, target)
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, StreamArray, run
from IoTPy.agent_types.merge import merge_window, zip_stream
from IoTPy.core.helper_control import _multivalue
from IoTPy.helper_functions.recent_values import recent_values
ipca = IncrementalPCA(n_components=2)
data_stream = StreamArray(name='data_stream', dimension=4, dtype=float)
target_stream = StreamArray(name='target_stream', dtype=int)
transformed_stream = StreamArray(name='transformed_stream',
dimension=2, dtype=float)
def f(windows, state):
data, target = windows
saved_data, saved_target = state
iris_data_principal_components = ipca.partial_fit(data)
saved_data = (data if saved_data is None
else np.concatenate((saved_data, data), axis=0))
saved_target = (target if saved_target is None
else np.concatenate((saved_target, target), axis=0))
next_state = saved_data, saved_target
transformed_data = iris_data_principal_components.transform(
saved_data)
output_data = iris_data_principal_components.transform(data)
plot_data(colors, iris_names, transformed_data, saved_target)
return _multivalue(output_data), next_state
merge_window(func=f, in_streams=[data_stream, target_stream],
out_stream=transformed_stream, state=(None, None),
window_size=50, step_size=50)
#file_stream = StreamArray(name='file_stream', dimension=3, dtype=float)
file_stream = Stream(name='file_stream')
zip_stream([transformed_stream, target_stream], file_stream)
data_stream.extend(iris_data)
target_stream.extend(iris_target)
run()
print (recent_values(file_stream))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an incremental PCA object
Step2: IoTPy
|
15,636 | <ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
print loss
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
pass
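# One possible way to fill in the TODO above (editor's sketch, not the official assignment
# solution): a small grid search over hidden size, learning rate and regularization,
# keeping the model with the best validation accuracy in best_net.
best_val_acc = -1
for hs in [50, 100]:
    for lr in [1e-4, 1e-3]:
        for reg_strength in [0.25, 0.5]:
            candidate = TwoLayerNet(input_size, hs, num_classes)
            candidate.train(X_train, y_train, X_val, y_val,
                            num_iters=1000, batch_size=200,
                            learning_rate=lr, learning_rate_decay=0.95,
                            reg=reg_strength, verbose=False)
            candidate_val_acc = (candidate.predict(X_val) == y_val).mean()
            if candidate_val_acc > best_val_acc:
                best_val_acc = candidate_val_acc
                best_net = candidate
print 'Best validation accuracy found: ', best_val_acc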
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
|
15,637 | <ASSISTANT_TASK:>
Python Code:
year = 2015
print(year)
print(year)
year = 2016
%reset
print(year)
import cv2
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
# another magic function, this allows you to view plots in the notebook
%matplotlib inline
first = ["tags/first0.png", "tags/first1.png", "tags/first2.png", "tags/first3.png", "tags/first4.png", "tags/first5.png"]
second = ["tags/second0.png", "tags/second1.png", "tags/second2.png", "tags/second3.png"]
queen = ["tags/queen0.png", "tags/queen1.png", "tags/queen2.png"]
mixed = []
for image_location in [first[0], second[0], queen[0]]:
image = cv2.imread(image_location, 1)
mixed.append(image)
for image in mixed:
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.figure(figsize = (27, 15))
plt.imshow(image, cmap = cm.Greys_r)
for image in mixed:
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
smoothed = cv2.blur(gray_image, (25, 25))
ret, thresh = cv2.threshold(smoothed, 90, 255, cv2.THRESH_BINARY)
plt.figure(figsize = (27, 15))
plt.imshow(smoothed, cmap = cm.Greys_r)
plt.figure(figsize = (27, 15))
plt.imshow(thresh, cmap = cm.Greys_r)
lowThreshold = 50
ratio = 3
for image in mixed:
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
smoothed = cv2.GaussianBlur(gray_image, (9, 9), 0)
edges = cv2.Canny(smoothed, lowThreshold, lowThreshold * ratio, apertureSize=3)
ret, thresh = cv2.threshold(edges, 100, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
drawn_image = image.copy()
cv2.drawContours(drawn_image, contours, -1, (255,0,0), 3)
plt.figure(figsize = (27, 15))
plt.imshow(drawn_image)
for image in mixed:
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#smoothed = cv2.GaussianBlur(gray_image, (7, 7), 0)
#smoothed = cv2.GaussianBlur(gray_image, (9, 9), 0)
smoothed = cv2.medianBlur(gray_image, 5)
#smoothed = cv2.blur(gray_image, (5,5))
edges = cv2.Laplacian(smoothed, cv2.CV_16S, ksize=3, scale=1, delta=0)
edges = cv2.convertScaleAbs(edges)
plt.figure(figsize = (27, 15))
plt.imshow(edges, cmap = cm.Greys_r)
#ret, thresh = cv2.threshold(edges, 10, 255, cv2.THRESH_BINARY)
#plt.figure(figsize = (27, 15))
#plt.imshow(thresh, cmap = cm.Greys_r)
#opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_RECT,(5, 5)))
#opening = cv2.dilate(opening,cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(17, 17)), 1)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
#kernel = np.ones((13,13),np.uint8)
#opening = cv2.dilate(opening,cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(11, 11)), 1)
drawn_image = image.copy()
for i, cnt in enumerate(contours):
area = cv2.contourArea(contours[i])
if area > 200:
cv2.drawContours(drawn_image, contours, i, (255,0,0), 5)
#plt.figure(figsize = (27, 15))
#plt.imshow(image)
plt.figure(figsize = (27, 15))
plt.imshow(drawn_image)
print("\n")
for image in mixed:
grad_x = cv2.Sobel(image, cv2.CV_16S, 1, 0, ksize=3, scale=3, delta=20, borderType=cv2.BORDER_DEFAULT)
grad_y = cv2.Sobel(image, cv2.CV_16S, 0, 1, ksize=3, scale=3, delta=20, borderType=cv2.BORDER_DEFAULT)
abs_grad_x = cv2.convertScaleAbs(grad_x)
abs_grad_y = cv2.convertScaleAbs(grad_y)
edges = cv2.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0)
plt.figure(figsize = (27, 15))
plt.imshow(image, cmap = cm.Greys_r)
plt.figure(figsize = (27, 15))
plt.imshow(edges, cmap = cm.Greys_r)
for image in mixed:
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
smoothed = cv2.GaussianBlur(gray_image, (9, 9), 0)
#smoothed = cv2.medianBlur(gray_image, 3)
#smoothed = cv2.blur(gray_image, (5,5))
plt.figure(figsize = (27, 15))
plt.imshow(smoothed, cmap = cm.Greys_r)
grad_x = cv2.Scharr(smoothed, cv2.CV_16S, 1, 0, scale=1.5, delta=1, borderType=cv2.BORDER_DEFAULT)
grad_y = cv2.Scharr(smoothed, cv2.CV_16S, 0, 1, scale=1.5, delta=1, borderType=cv2.BORDER_DEFAULT)
abs_grad_x = cv2.convertScaleAbs(grad_x)
abs_grad_y = cv2.convertScaleAbs(grad_y)
edges = cv2.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0)
plt.figure(figsize = (27, 15))
plt.imshow(edges, cmap = cm.Greys_r)
edges = cv2.blur(edges, (11,11))
ret, thresh = cv2.threshold(edges, 140, 255, cv2.THRESH_BINARY)
plt.figure(figsize = (27, 15))
plt.imshow(thresh, cmap = cm.Greys_r)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5, 5)))
contours, hierarchy = cv2.findContours(opening, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
drawn_image = image.copy()
for i, cnt in enumerate(contours):
area = cv2.contourArea(contours[i])
if area > 100:
cv2.drawContours(drawn_image, contours, i, (255,0,0), 5)
plt.figure(figsize = (27, 15))
plt.imshow(drawn_image)
print("\n")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The notebook allows you to prototype code and plots quickly without having to reload data each time. This can be useful if you're experimenting with your data and it takes several minutes to load into Python each time. However, you have to be careful when changing variables and the order you execute cells in, otherwise you may end up with different values than you expect.
Step2: Handy Notebook Tips
Step3: How I've been using the notebook
Step4: Read Images
Step5: I now have three 4K images loaded into the notebook. So long as I do not overwrite them, or change the original variables, I no longer have to repeatedly read these images into memory when I run my program.
Step6: Canny
Step7: Laplacian
Step8: Sobel
Step9: Scharr
|
15,638 | <ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_write':'service', # Authorization used for writing data.
'auth_read':'service', # Authorization for reading GA360.
'view':'service', # View Id
'recipe_slug':'', # Name of Google BigQuery dataset to create.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'description':'Create a dataset for bigquery tables.',
'hour':[
4
],
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing function.'}},
'function':'Pearson Significance Test',
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}
}
}
},
{
'ga':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'service','description':'Authorization for reading GA360.'}},
'kwargs':{
'reportRequests':[
{
'viewId':{'field':{'name':'view','kind':'string','order':2,'default':'service','description':'View Id'}},
'dateRanges':[
{
'startDate':'90daysAgo',
'endDate':'today'
}
],
'dimensions':[
{
'name':'ga:userType'
},
{
'name':'ga:userDefinedValue'
},
{
'name':'ga:latitude'
},
{
'name':'ga:longitude'
}
],
'metrics':[
{
'expression':'ga:users'
},
{
'expression':'ga:sessionsPerUser'
},
{
'expression':'ga:bounces'
},
{
'expression':'ga:timeOnPage'
},
{
'expression':'ga:pageviews'
}
]
}
],
'useResourceQuotas':False
},
'out':{
'bigquery':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'GA360_KPI'
}
}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'from':{
'query':'WITH GA360_SUM AS ( SELECT A.Dimensions.userType AS User_Type, A.Dimensions.userDefinedValue AS User_Value, B.zip_code AS Zip, SUM(Metrics.users) AS Users, SUM(Metrics.sessionsPerUser) AS Sessions, SUM(Metrics.timeOnPage) AS Time_On_Site, SUM(Metrics.bounces) AS Bounces, SUM(Metrics.pageviews) AS Page_Views FROM `{dataset}.GA360_KPI` AS A JOIN `bigquery-public-data.geo_us_boundaries.zip_codes` AS B ON ST_WITHIN(ST_GEOGPOINT(A.Dimensions.longitude, A.Dimensions.latitude), B.zip_code_geom) GROUP BY 1,2,3 ) SELECT User_Type, User_Value, Zip, Users, SAFE_DIVIDE(Users, SUM(Users) OVER()) AS User_Percent, SAFE_DIVIDE(Sessions, SUM(Sessions) OVER()) AS Impression_Percent, SAFE_DIVIDE(Time_On_Site, SUM(Time_On_Site) OVER()) AS Time_On_Site_Percent, SAFE_DIVIDE(Bounces, SUM(Bounces) OVER()) AS Bounce_Percent, SAFE_DIVIDE(Page_Views, SUM(Page_Views) OVER()) AS Page_View_Percent FROM GA360_SUM ',
'parameters':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}}
},
'legacy':False
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be written in BigQuery.'}},
'view':'GA360_KPI_Normalized'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'normalize':{
'census_geography':'zip_codes',
'census_year':'2018',
'census_span':'5yr'
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'correlate':{
'join':'Zip',
'pass':[
'User_Type',
'User_Value'
],
'sum':[
'Users'
],
'correlate':[
'User_Percent',
'Impression_Percent',
'Time_On_Site_Percent',
'Bounce_Percent',
'Page_View_Percent'
],
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'GA360_KPI_Normalized',
'significance':80
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter GA360 Segmentology Recipe Parameters
Step3: 4. Execute GA360 Segmentology
|
15,639 | <ASSISTANT_TASK:>
Python Code:
# Python 2 only:
print 'Hello'
# Python 2 and 3:
print('Hello')
# Python 2 only:
print 'Hello', 'Guido'
# Python 2 and 3:
from __future__ import print_function # (at top of module)
print('Hello', 'Guido')
# Python 2 only:
print >> sys.stderr, 'Hello'
# Python 2 and 3:
from __future__ import print_function
print('Hello', file=sys.stderr)
# Python 2 only:
print 'Hello',
# Python 2 and 3:
from __future__ import print_function
print('Hello', end='')
# Python 2 only:
raise ValueError, "dodgy value"
# Python 2 and 3:
raise ValueError("dodgy value")
# Python 2 only:
traceback = sys.exc_info()[2]
raise ValueError, "dodgy value", traceback
# Python 3 only:
raise ValueError("dodgy value").with_traceback()
# Python 2 and 3: option 1
from six import reraise as raise_
# or
from future.utils import raise_
traceback = sys.exc_info()[2]
raise_(ValueError, "dodgy value", traceback)
# Python 2 and 3: option 2
from future.utils import raise_with_traceback
raise_with_traceback(ValueError("dodgy value"))
# Setup:
class DatabaseError(Exception):
pass
# Python 3 only
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise DatabaseError('failed to open') from exc
# Python 2 and 3:
from future.utils import raise_from
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise_from(DatabaseError('failed to open'), exc)
# Testing the above:
try:
fd = FileDatabase('non_existent_file.txt')
except Exception as e:
assert isinstance(e.__cause__, IOError) # FileNotFoundError on Py3.3+ inherits from IOError
# Python 2 only:
try:
...
except ValueError, e:
...
# Python 2 and 3:
try:
...
except ValueError as e:
...
# Python 2 only:
assert 2 / 3 == 0
# Python 2 and 3:
assert 2 // 3 == 0
# Python 3 only:
assert 3 / 2 == 1.5
# Python 2 and 3:
from __future__ import division # (at top of module)
assert 3 / 2 == 1.5
# Python 2 only:
a = b / c # with any types
# Python 2 and 3:
from past.utils import old_div
a = old_div(b, c) # always same as / on Py2
# Python 2 only
k = 9223372036854775808L
# Python 2 and 3:
k = 9223372036854775808
# Python 2 only
bigint = 1L
# Python 2 and 3
from builtins import int
bigint = int(1)
# Python 2 only:
if isinstance(x, (int, long)):
...
# Python 3 only:
if isinstance(x, int):
...
# Python 2 and 3: option 1
from builtins import int # subclass of long on Py2
if isinstance(x, int): # matches both int and long on Py2
...
# Python 2 and 3: option 2
from past.builtins import long
if isinstance(x, (int, long)):
...
0644 # Python 2 only
0o644 # Python 2 and 3
`x` # Python 2 only
repr(x) # Python 2 and 3
class BaseForm(object):
pass
class FormType(type):
pass
# Python 2 only:
class Form(BaseForm):
__metaclass__ = FormType
pass
# Python 3 only:
class Form(BaseForm, metaclass=FormType):
pass
# Python 2 and 3:
from six import with_metaclass
# or
from future.utils import with_metaclass
class Form(with_metaclass(FormType, BaseForm)):
pass
# Python 2 only
s1 = 'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'
# Python 2 and 3
s1 = u'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'
# Python 2 and 3
from __future__ import unicode_literals # at top of module
s1 = 'The Zen of Python'
s2 = 'きたないのよりきれいな方がいい\n'
# Python 2 only
s = 'This must be a byte-string'
# Python 2 and 3
s = b'This must be a byte-string'
# Python 2 only:
for bytechar in 'byte-string with high-bit chars like \xf9':
...
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
bytechar = bytes([myint])
# Python 2 and 3:
from builtins import bytes
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
bytechar = bytes([myint])
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1')
# Python 2 and 3:
from builtins import bytes, chr
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1') # forces returning a byte str
# Python 2 only:
a = u'abc'
b = 'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 1
from past.builtins import basestring # pip install future
a = u'abc'
b = b'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 2: refactor the code to avoid considering
# byte-strings as strings.
from builtins import str
a = u'abc'
b = b'def'
c = b.decode()
assert isinstance(a, str) and isinstance(c, str)
# ...
# Python 2 only:
templates = [u"blog/blog_post_detail_%s.html" % unicode(slug)]
# Python 2 and 3: alternative 1
from builtins import str
templates = [u"blog/blog_post_detail_%s.html" % str(slug)]
# Python 2 and 3: alternative 2
from builtins import str as text
templates = [u"blog/blog_post_detail_%s.html" % text(slug)]
# Python 2 only:
from StringIO import StringIO
# or:
from cStringIO import StringIO
# Python 2 and 3:
from io import BytesIO # for handling byte strings
from io import StringIO # for handling unicode strings
# Python 2 only:
import submodule2
# Python 2 and 3:
from . import submodule2
# Python 2 and 3:
# To make Py2 code safer (more like Py3) by preventing
# implicit relative imports, you can also add this to the top:
from __future__ import absolute_import
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
# Python 2 only:
for key in heights.iterkeys():
...
# Python 2 and 3:
for key in heights:
...
# Python 2 only:
for value in heights.itervalues():
...
# Idiomatic Python 3
for value in heights.values(): # extra memory overhead on Py2
...
# Python 2 and 3: option 1
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
for key in heights.values(): # efficient on Py2 and Py3
...
# Python 2 and 3: option 2
from builtins import itervalues
# or
from six import itervalues
for key in itervalues(heights):
...
# Python 2 only:
for (key, value) in heights.iteritems():
...
# Python 2 and 3: option 1
for (key, value) in heights.items(): # inefficient on Py2
...
# Python 2 and 3: option 2
from future.utils import viewitems
for (key, value) in viewitems(heights): # also behaves like a set
...
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
for (key, value) in iteritems(heights):
...
# Python 2 only:
keylist = heights.keys()
assert isinstance(keylist, list)
# Python 2 and 3:
keylist = list(heights)
assert isinstance(keylist, list)
# Python 2 only:
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
valuelist = heights.values()
assert isinstance(valuelist, list)
# Python 2 and 3: option 1
valuelist = list(heights.values()) # inefficient on Py2
# Python 2 and 3: option 2
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
valuelist = list(heights.values())
# Python 2 and 3: option 3
from future.utils import listvalues
valuelist = listvalues(heights)
# Python 2 and 3: option 4
from future.utils import itervalues
# or
from six import itervalues
valuelist = list(itervalues(heights))
# Python 2 and 3: option 1
itemlist = list(heights.items()) # inefficient on Py2
# Python 2 and 3: option 2
from future.utils import listitems
itemlist = listitems(heights)
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
itemlist = list(iteritems(heights))
# Python 2 only
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def next(self): # Py2-style
return self._iter.next().upper()
def __iter__(self):
return self
itr = Upper('hello')
assert itr.next() == 'H' # Py2-style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 1
from builtins import object
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H' # compatible style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 2
from future.utils import implements_iterator
@implements_iterator
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H'
assert list(itr) == list('ELLO')
# Python 2 only:
class MyClass(object):
def __unicode__(self):
return 'Unicode string: \u5b54\u5b50'
def __str__(self):
return unicode(self).encode('utf-8')
a = MyClass()
print(a) # prints encoded string
# Python 2 and 3:
from future.utils import python_2_unicode_compatible
@python_2_unicode_compatible
class MyClass(object):
def __str__(self):
return u'Unicode string: \u5b54\u5b50'
a = MyClass()
print(a) # prints string encoded as utf-8 on Py2
# Python 2 only:
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __nonzero__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
# Python 2 and 3:
from builtins import object
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __bool__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
# Python 2 only:
for i in xrange(10**8):
...
# Python 2 and 3: forward-compatible
from builtins import range
for i in range(10**8):
...
# Python 2 and 3: backward-compatible
from past.builtins import xrange
for i in xrange(10**8):
...
# Python 2 only
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 1
mylist = list(range(5)) # copies memory on Py2
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 2
from builtins import range
mylist = list(range(5))
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: option 3
from future.utils import lrange
mylist = lrange(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: backward compatible
from past.builtins import range
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 only:
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 1
# Idiomatic Py3, but inefficient on Py2
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 2
from builtins import map
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 3
try:
import itertools.imap as map
except ImportError:
pass
mynewlist = list(map(f, myoldlist)) # inefficient on Py2
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 4
from future.utils import lmap
mynewlist = lmap(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 5
from past.builtins import map
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 only:
from itertools import imap
myiter = imap(func, myoldlist)
assert isinstance(myiter, iter)
# Python 3 only:
myiter = map(func, myoldlist)
assert isinstance(myiter, iter)
# Python 2 and 3: option 1
from builtins import map
myiter = map(func, myoldlist)
assert isinstance(myiter, iter)
# Python 2 and 3: option 2
try:
import itertools.imap as map
except ImportError:
pass
myiter = map(func, myoldlist)
assert isinstance(myiter, iter)
# Python 2 only
f = open('myfile.txt')
data = f.read() # as a byte string
text = data.decode('utf-8')
# Python 2 and 3: alternative 1
from io import open
f = open('myfile.txt', 'rb')
data = f.read() # as bytes
text = data.decode('utf-8') # unicode, not bytes
# Python 2 and 3: alternative 2
from io import open
f = open('myfile.txt', encoding='utf-8')
text = f.read() # unicode, not bytes
# Python 2 only:
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
# Python 2 and 3:
from functools import reduce
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
# Python 2 only:
name = raw_input('What is your name? ')
assert isinstance(name, str) # native str
# Python 2 and 3:
from builtins import input
name = input('What is your name? ')
assert isinstance(name, str) # native str on Py2 and Py3
# Python 2 only:
input("Type something safe please: ")
# Python 2 and 3
from builtins import input
eval(input("Type something safe please: "))
# Python 2 only:
f = file(pathname)
# Python 2 and 3:
f = open(pathname)
# But preferably, use this:
from io import open
f = open(pathname, 'rb') # if f.read() should return bytes
# or
f = open(pathname, 'rt') # if f.read() should return unicode text
# Python 2 only:
exec 'x = 10'
# Python 2 and 3:
exec('x = 10')
# Python 2 only:
g = globals()
exec 'x = 10' in g
# Python 2 and 3:
g = globals()
exec('x = 10', g)
# Python 2 only:
l = locals()
exec 'x = 10' in g, l
# Python 2 and 3:
exec('x = 10', g, l)
# Python 2 only:
execfile('myfile.py')
# Python 2 and 3: alternative 1
from past.builtins import execfile
execfile('myfile.py')
# Python 2 and 3: alternative 2
exec(compile(open('myfile.py').read()))
# This can sometimes cause this:
# SyntaxError: function ... uses import * and bare exec ...
# See https://github.com/PythonCharmers/python-future/issues/37
# Python 2 only:
assert unichr(8364) == '€'
# Python 3 only:
assert chr(8364) == '€'
# Python 2 and 3:
from builtins import chr
assert chr(8364) == '€'
# Python 2 only:
intern('mystring')
# Python 3 only:
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 1
from past.builtins import intern
intern('mystring')
# Python 2 and 3: alternative 2
from six.moves import intern
intern('mystring')
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 2
try:
from sys import intern
except ImportError:
pass
intern('mystring')
args = ('a', 'b')
kwargs = {'kwarg1': True}
# Python 2 only:
apply(f, args, kwargs)
# Python 2 and 3: alternative 1
f(*args, **kwargs)
# Python 2 and 3: alternative 2
from past.builtins import apply
apply(f, args, kwargs)
# Python 2 only:
assert chr(64) == b'@'
assert chr(200) == b'\xc8'
# Python 3 only: option 1
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 2 and 3: option 1
from builtins import chr
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 3 only: option 2
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
# Python 2 and 3: option 2
from builtins import bytes
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
# Python 2 only:
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 1
from past.builtins import cmp
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 2
cmp = lambda x, y: (x > y) - (x < y)
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 only:
reload(mymodule)
# Python 2 and 3
from imp import reload
reload(mymodule)
# Python 2 only
import anydbm
import whichdb
import dbm
import dumbdbm
import gdbm
# Python 2 and 3: alternative 1
from future import standard_library
standard_library.install_aliases()
import dbm
import dbm.ndbm
import dbm.dumb
import dbm.gnu
# Python 2 and 3: alternative 2
from future.moves import dbm
from future.moves.dbm import dumb
from future.moves.dbm import ndbm
from future.moves.dbm import gnu
# Python 2 and 3: alternative 3
from six.moves import dbm_gnu
# (others not supported)
# Python 2 only
from commands import getoutput, getstatusoutput
# Python 2 and 3
from future import standard_library
standard_library.install_aliases()
from subprocess import getoutput, getstatusoutput
# Python 2.7 and above
from subprocess import check_output
# Python 2.6 and above: alternative 1
from future.moves.subprocess import check_output
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from subprocess import check_output
# Python 2.7 and above
from collections import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 1
from future.backports import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from collections import Counter, OrderedDict, ChainMap
# Python 2 only
from StringIO import StringIO
from cStringIO import StringIO
# Python 2 and 3
from io import BytesIO
# and refactor StringIO() calls to BytesIO() if passing byte-strings
# Python 2 only:
import httplib
import Cookie
import cookielib
import BaseHTTPServer
import SimpleHTTPServer
import CGIHttpServer
# Python 2 and 3 (after ``pip install future``):
import http.client
import http.cookies
import http.cookiejar
import http.server
# Python 2 only:
import DocXMLRPCServer
import SimpleXMLRPCServer
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.server
# Python 2 only:
import xmlrpclib
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.client
# Python 2 and 3:
from cgi import escape
# Safer (Python 2 and 3, after ``pip install future``):
from html import escape
# Python 2 only:
from htmlentitydefs import codepoint2name, entitydefs, name2codepoint
# Python 2 and 3 (after ``pip install future``):
from html.entities import codepoint2name, entitydefs, name2codepoint
# Python 2 only:
from HTMLParser import HTMLParser
# Python 2 and 3 (after ``pip install future``)
from html.parser import HTMLParser
# Python 2 and 3 (alternative 2):
from future.moves.html.parser import HTMLParser
# Python 2 only:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
# Python 3 only:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: easiest option
from future.standard_library import install_aliases
install_aliases()
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 2
from future.standard_library import hooks
with hooks():
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 3
from future.moves.urllib.parse import urlparse, urlencode
from future.moves.urllib.request import urlopen, Request
from future.moves.urllib.error import HTTPError
# or
from six.moves.urllib.parse import urlparse, urlencode
from six.moves.urllib.request import urlopen
from six.moves.urllib.error import HTTPError
# Python 2 and 3: alternative 4
try:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
except ImportError:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
# Python 2 only:
import Tkinter
import Dialog
import FileDialog
import ScrolledText
import SimpleDialog
import Tix
import Tkconstants
import Tkdnd
import tkColorChooser
import tkCommonDialog
import tkFileDialog
import tkFont
import tkMessageBox
import tkSimpleDialog
import ttk
# Python 2 and 3 (after ``pip install future``):
import tkinter
import tkinter.dialog
import tkinter.filedialog
import tkinter.scrolledtext
import tkinter.simpledialog
import tkinter.tix
import tkinter.constants
import tkinter.dnd
import tkinter.colorchooser
import tkinter.commondialog
import tkinter.filedialog
import tkinter.font
import tkinter.messagebox
import tkinter.simpledialog
import tkinter.ttk
# Python 2 only:
import SocketServer
# Python 2 and 3 (after ``pip install future``):
import socketserver
# Python 2 only:
import copy_reg
# Python 2 and 3 (after ``pip install future``):
import copyreg
# Python 2 only:
from ConfigParser import ConfigParser
# Python 2 and 3 (after ``pip install future``):
from configparser import ConfigParser
# Python 2 only:
from Queue import Queue, heapq, deque
# Python 2 and 3 (after ``pip install future``):
from queue import Queue, heapq, deque
# Python 2 only:
from repr import aRepr, repr
# Python 2 and 3 (after ``pip install future``):
from reprlib import aRepr, repr
# Python 2 only:
from UserDict import UserDict
from UserList import UserList
from UserString import UserString
# Python 3 only:
from collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 1
from future.moves.collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 2
from six.moves import UserDict, UserList, UserString
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from collections import UserDict, UserList, UserString
# Python 2 only:
from itertools import ifilterfalse, izip_longest
# Python 3 only:
from itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 1
from future.moves.itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 2
from six.moves import filterfalse, zip_longest
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from itertools import filterfalse, zip_longest
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To print multiple strings, import print_function to prevent Py2 from interpreting it as a tuple
Step2: Raising exceptions
Step3: Raising exceptions with a traceback
Step4: Exception chaining (PEP 3134)
Step5: Catching exceptions
Step6: Division
Step7: "True division" (float division)
Step8: "Old division" (i.e. compatible with Py2 behaviour)
Step9: Long integers
Step10: To test whether a value is an integer (of any kind)
Step11: Octal constants
Step12: Backtick repr
Step13: Metaclasses
Step14: Strings and bytes
Step15: The futurize and python-modernize tools do not currently offer an option to do this automatically.
Step16: See http
Step17: To loop over a byte-string with possible high-bit characters, obtaining each character as a byte-string of length 1
Step18: As an alternative, chr() and .encode('latin-1') can be used to convert an int into a 1-char byte string
Step19: basestring
Step20: unicode
Step21: StringIO
Step22: Imports relative to a package
Step23: Dictionaries
Step24: Iterating through dict keys/values/items
Step25: Iterable dict values
Step26: Iterable dict items
Step27: dict keys/values/items as a list
Step28: dict values as a list
Step29: dict items as a list
Step30: Custom class behaviour
Step31: Custom __str__ methods
Step32: Custom __nonzero__ vs __bool__ method
Step33: Lists versus iterators
Step34: range
Step35: map
Step36: imap
Step37: zip, izip
Step38: reduce()
Step39: raw_input()
Step40: input()
Step41: Warning
Step42: exec
Step43: But note that Py3's exec() is less powerful (and less dangerous) than Py2's exec statement.
Step44: unichr()
Step45: intern()
Step46: apply()
Step47: chr()
Step48: cmp()
Step49: reload()
Step50: Standard library
Step51: commands / subprocess modules
Step52: subprocess.check_output()
Step53: collections
Step54: StringIO module
Step55: http module
Step56: xmlrpc module
Step57: html escaping and entities
Step58: html parsing
Step59: urllib module
Step60: Tkinter
Step61: socketserver
Step62: copy_reg, copyreg
Step63: configparser
Step64: queue
Step65: repr, reprlib
Step66: UserDict, UserList, UserString
Step67: itertools
|
15,640 | <ASSISTANT_TASK:>
Python Code:
import time
import sqlite3 as sql
import os
import sys
sys.path.append("/Users/kurner/Documents/classroom_labs")
class NoConnection(Exception):
pass
class SQL_DB: # a database
# class level parameters
backend = 'sqlite3'
user_initials = 'KTU'
timezone = int(time.strftime("%z", time.localtime()))
target_path = "/Users/kurner/Documents/classroom_labs" # current directory
db_name = os.path.join(target_path, 'glossary.db')
@staticmethod
def mod_date():
return time.mktime(time.gmtime()) # GMT time
@classmethod
def connect(cls):
try:
if cls.backend == 'sqlite3':
cls.conn = sql.connect(cls.db_name) # connection
cls.curs = cls.conn.cursor() # cursor
elif cls.backend == 'mysql': # not using this, gives idea
cls.conn = sql.connect(host='localhost',
user='root', port='8889')
cls.curs = cls.conn.cursor()
except Exception:
raise NoConnection
@classmethod
def disconnect(cls):
cls.conn.close()
class DBcontext:
    """Generic parent class for connecting and disconnecting"""
def __init__(self, db):
self.db = db # references database class
def __enter__(self):
self.db.connect()
return self # <-- for use inside with scope
def __exit__(self, *stuff_happens):
self.db.disconnect()
if stuff_happens[0]:
print("Exception raised!")
print(stuff_happens)
return True # <-- if considered handled, otherwise False
return True
class Glossary(DBcontext):
    """Subclass with custom methods for this particular database"""
def create_table(self):
# https://www.sqlite.org/lang_droptable.html
        self.db.curs.execute("""DROP TABLE IF EXISTS Glossary""")
        self.db.curs.execute("""CREATE TABLE Glossary
            (gl_term text PRIMARY KEY,
             gl_definition text,
             updated_at int,
             updated_by text)""")
def save_term(self, *the_data):
query = ("INSERT INTO Glossary "
"(gl_term, gl_definition, updated_at, updated_by) "
"VALUES ('{}', '{}', {}, '{}')".format(*the_data))
# print(query)
self.db.curs.execute(query)
self.db.conn.commit()
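# Editor's aside (not in the original): the string-formatted INSERT above works because the
# glossary file is trusted input, but sqlite3 also supports parameterized queries, which
# sidestep quoting problems and SQL injection. A minimal sketch of the same method using
# placeholders, as a hypothetical alternative subclass:
class GlossaryParams(Glossary):
    def save_term(self, *the_data):
        self.db.curs.execute(
            "INSERT INTO Glossary (gl_term, gl_definition, updated_at, updated_by) "
            "VALUES (?, ?, ?, ?)", the_data)
        self.db.conn.commit()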
with Glossary(SQL_DB) as dbx: # <--- dbx returned by __enter__
# for testing __exit__ in case of an exception
# raise NoConnection
dbx.create_table()
FILE = os.path.join(dbx.db.target_path, "glossary.txt")
with open(FILE, 'r', encoding='UTF-8') as gloss:
lines = gloss.readlines()
for line in lines:
if len(line.strip()) == 0:
continue
term, definition = line.split(":", 1)
right_now = dbx.db.mod_date()
dbx.save_term(term[2:].strip(), definition.strip(), right_now, dbx.db.user_initials)
with Glossary(SQL_DB) as dbx:
query = "SELECT gl_term, gl_definition FROM Glossary ORDER BY gl_term"
dbx.db.curs.execute(query) # gets the data
print("{:^80}".format("GLOSSARY OF TERMS"))
print("-" * 80)
print("Term |Abbreviated Definition " )
print("-" * 80)
for term in dbx.db.curs.fetchmany(10): # fetchone(), fetchmany(n), fetchall()
print("{:35} | {:45}".format(term[0], term[1][:45]))
import chem_stuff
# modify database class to point to a different database file
SQL_DB.db_name = os.path.join(SQL_DB.target_path, 'periodic_table.db')
class ChemContext(DBcontext):
    """Subclass with custom methods for this particular database"""
def create_table(self):
# https://www.sqlite.org/lang_droptable.html
        self.db.curs.execute("""DROP TABLE IF EXISTS Elements""")
        self.db.curs.execute("""CREATE TABLE Elements
            (elem_protons int PRIMARY KEY,
             elem_symbol text,
             elem_long_name text,
             elem_mass float,
             elem_series text,
             updated_at int,
             updated_by text)""")
def save_term(self, *the_data):
query = ("INSERT INTO Elements "
"(elem_protons, elem_symbol, elem_long_name, elem_mass, elem_series,"
"updated_at, updated_by) "
"VALUES ({}, '{}', '{}', {}, '{}', {}, '{}')".format(*the_data))
# print(query)
self.db.curs.execute(query)
self.db.conn.commit()
with ChemContext(SQL_DB) as dbx:
dbx.create_table()
FILE = os.path.join(dbx.db.target_path, "periodic_table.json")
chem_stuff.load_elements(FILE) # uses imported module to read JSON
for atom in chem_stuff.all_elements.values():
right_now = dbx.db.mod_date()
dbx.save_term(atom.protons, atom.symbol, atom.long_name, atom.mass, atom.series,
right_now, dbx.db.user_initials)
with DBcontext(SQL_DB) as dbx: # <--- dbx returned by __enter__
query = ("SELECT elem_symbol, elem_long_name, elem_protons, elem_mass, elem_series"
" FROM Elements ORDER BY elem_protons")
dbx.db.curs.execute(query)
print("{:^70}".format("PERIODIC TABLE OF THE ELEMENTS"))
print("-" * 70)
print("Symbol |Long Name |Protons |Mass |Series " )
print("-" * 70)
for the_atom in dbx.db.curs.fetchall():
print("{:6} | {:20} | {:6} | {:5.2f} | {:15}".format(the_atom[0],
the_atom[1],
the_atom[2],
the_atom[3],
the_atom[4]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: So far, this class hasn't done any work. We'll use the context manager to make the actual connection, using this DB object.
Step6: In the "with scope" below, we create a new table by calling a method saved inside the context manager itself. The a text file is read in and each line gets saved to a row in the Glossary table. Again, the context manager has a method for inserting rows.
Step7: We now have a resuable context that will connect and disconnect from our database.
Step11: OK, that was fun. We were able to obtain a partial listing of what's in the Glossary table. Connecting and disconnecting was handled behind the scenes.
Step12: To test that our data table has indeed been populated, lets connect again and run a select query. We'll be able to print the rows directly. We're not bothering with an Atom class this time, just publishing row data directly from tuples.
|
15,641 | <ASSISTANT_TASK:>
Python Code:
import sys
import matplotlib.pyplot as plt
import datetime as dt
import numpy as np
from mpl_toolkits.basemap import Basemap
import pandas as pd
import seaborn as sns
from scipy.stats.stats import pearsonr
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
# url1: Pakistan latitude and longitude data for map visualization of drone strikes
# https://docs.google.com/spreadsheets/d/1P1TSWNwr1j-0pX022Q2iLEZd2-IGoswD08unYSvCYaU/edit#gid=9
# save as csv onto desktop
path = '/Users/anna/Desktop/pakistan.csv' # read in Pakistan drone strike data
df = pd.read_csv(path) # read into Python from desktop
df=df.replace(to_replace=['Unreported'], value=[np.nan]) # data cleaning for NaN values
df["Date"] = pd.to_datetime(df["Date"]) # change this column to datetime
df["Date"]=df["Date"].dt.year # change datetimes to years
# url2: More in-depth Pakistan drone strike data (casualty rates/summary tables, etc.) for plots
# https://docs.google.com/spreadsheets/d/1NAfjFonM-Tn7fziqiv33HlGt09wgLZDSCP-BQaux51w/edit#gid=694046452
# save as excel file onto desktop
drones=pd.read_excel('/Users/anna/Desktop/CIAdrones.xlsx',
sheetname="Summary tables and casualty rates", skip_footer=25)
drones["Year"]=drones["Year"].dt.year # change this column from datetimes to years
drones=drones.set_index('Year') # read in drone strike summary table and set the index to year
casualtyrates=pd.read_excel('/Users/anna/Desktop/CIAdrones.xlsx',
sheetname="Summary tables and casualty rates", skiprows=21)
casualtyrates=casualtyrates.drop([12,13,14,15,16])
casualtyrates["Year"]=casualtyrates["Year"].dt.year # change this column from datetimes to years
casualtyrates=casualtyrates.set_index('Year') # read in drone strike casualty rate table and set the index to year
casualtyrates=casualtyrates.dropna(axis=1)
# url3: Global Terrorism data
# http://www.start.umd.edu/gtd/contact/
# save as excel files onto desktop
gtdto11= pd.read_excel('/Users/anna/Desktop/gtd_92to11_0616dist.xlsx') # read in Global Terrorism Data 1992-2011
pakistan=gtdto11['country_txt'] == 'Pakistan' # create a Boolean variable for country name Pakistan
year=gtdto11['iyear']>2003 # create a Boolean variable for years after 2003
gtdto11count=gtdto11[pakistan & year] # create a new dataframe containing only Pakistan data and years after 2003
gtdto15=pd.read_excel('/Users/anna/Desktop/gtd_12to15_0616dist.xlsx') # read in Global Terrorism Data 2011-2015
pakistan2=gtdto15['country_txt']=='Pakistan' # create a dataframe with only Pakistan data
gtdto15count=gtdto15[pakistan2] # create a series
gtd=gtdto11count.append(gtdto15count, ignore_index=True) # append dataframes from 2004-2011 and 2011-2015
numgtd=gtd.groupby('iyear').count() # group global terrorism data by year and set to variable numgtd
numter=numgtd['eventid'] # set new variable numter as number of terrorist attacks by year
numdrones=drones['CIA Drone Strikes'] # set new variable numdrones as number of drone strikes by year
numdrones=numdrones.drop(numdrones.index[[12]]) # drop year 2016 using the index
chartdf=pd.concat([numter, numdrones], axis=1) # create new chart with only numter and numgtd
chartdf=chartdf.dropna() # clean the chart of NaN
eventid=chartdf['eventid']
chartdf['eventid']=eventid.astype(int) # convert number of terrorist attacks from float to integer
sns.set_style('darkgrid') # set darkgrid style using Seaborn
fig, (ax1, ax2)=plt.subplots(2, 1, figsize=(8,7)) # create figure with two axes
chartdf['eventid'].plot(ax=ax2, color='r', title='Terrorist Attacks Per Year',
y='Number of Attacks', kind='bar', alpha=0.6)
chartdf['CIA Drone Strikes'].plot(ax=ax1, title='Drone Strikes Per Year',
y="Number of Strikes", kind='bar', alpha=0.5)
ax2.set_ylabel('Number of Attacks')
ax1.set_ylabel('Number of Strikes')
ax2.set_xlabel('Year')
ax1.set_xlabel('Year')
plt.tight_layout()
plt.show()
print('Correlation:',pearsonr(chartdf['eventid'],chartdf['CIA Drone Strikes'])) #Pearson's correlation and p value
# below is a long process of converting date columns in the Global Terrorism database to datetime:
year=list(gtd['iyear'])
month=list(gtd['imonth'])
day=list(gtd['iday'])
date=(zip(year, month, day))
date=list(date)
date=[str(x)for x in date]
date=[x.replace(', ','-') for x in date]
date=[x.replace('(', '') for x in date]
date=[x.replace(')', '') for x in date]
gtd['idate']=date
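# Note: in recent pandas versions, pd.to_datetime can also assemble dates directly from
# year/month/day columns, which would replace the string-building above.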
gtd["idate"] = pd.to_datetime(gtd["idate"], format='%Y-%m-%d', errors='coerce') # change this column to datetime
per2=gtd.idate.dt.to_period("M") # convert to monthly data
ter2=gtd.groupby(per2) # group by month
ter2=ter2.count() # count number of attacks per month
termonths=ter2['eventid'] # save in new variable
cia='/Users/anna/Desktop/pakistan.csv' # read in Pakistan drone strike data one more time in order to adjust "Date"
cia = pd.read_csv(cia)
cia["Date"] = pd.to_datetime(cia["Date"]) # change this column to datetime
per=cia.Date.dt.to_period("M") # convert to monthly data
cia=cia.groupby(per) #group by month
cia=cia.count() #count number of strikes in each month
dronemonths=cia['Strike'] # save in new variable
totalmonths=pd.concat([termonths, dronemonths], axis=1) # create new dataframe for plotting
totalmonths=totalmonths.dropna() # clean the data frame
totalmonths=totalmonths.rename(columns={'eventid': 'Attacks', 'Strike':'Strikes'})
totalmonths.plot()
plt.title('Strikes/Attacks by Month')
plt.show()
print('Correlation:',pearsonr(totalmonths['Attacks'],totalmonths['Strikes'])) # Pearson's correlation and p value
x=np.array(numgtd.index) # create x array (years, taken from Global Terrorism Data)
y=np.array(numdrones) # create y array (number of drone attacks)
y=y/y.max().astype(np.float64) # normalize drone strike data on a scale of 0-1 and convert to float
z=np.array(numter) # create x array (number of terrorist attacks)
z=z/z.max().astype(np.float64) # normalize terrorist attack data on a scale of 0-1 and convert to float
plt.figure(figsize=(15,6)) # create a figure of size 15, 6
plt.scatter(x,y, zorder=2, label="Drone Strikes") # plot x and y arrays as a scatter plot (Drone Strikes)
plt.plot(x, y, zorder=1, lw=3) # connect dots with line
plt.bar(x,z, color='red', alpha=0.6, label='Terrorist Attacks', align='center') # plot x and z arrays as a bar chart
plt.title('Drone Strikes vs. Terrorist Attacks: Normalized', fontsize=15)
plt.xlim(2003,2016) # set x upper and lower limits
plt.xlabel("Year", fontsize=12)
plt.ylabel("Strikes/Attacks", fontsize=12)
plt.tick_params(axis='y', labelleft='off', labelright='off') # turn off y axis labels because data is normalized
plt.ylim(0,1) # set y upper and lower limits
plt.legend(loc='best', fontsize='large')
plt.show()
print('Correlation:',pearsonr(y,z)) # Pearson's correlation and p value
attacks=list(chartdf['eventid'])
diff=[100*(y - x)/x for x, y in zip(attacks[::1], attacks[1::1])] # percent change of terrorist attacks
diff=pd.Series(diff) # turn into series
diff=pd.Series.to_frame(diff) # turn into dataframe
years=['2004-2005', '2005-2006', '2006-2007', '2007-2008', '2008-2009', '2009-2010',
'2010-2011', '2011-2012', '2012-2013', '2013-2014', '2014-2015']
years=pd.Series(years) # convert list object to series
years=pd.Series.to_frame(years) # convert series to dataframe
years=years.rename(columns={0: "Year"})
diff=pd.concat([diff,years], axis=1) # create a dataframe with terrorism % change and years
diff=diff.set_index('Year')
diff=diff.rename(columns={0:'Percent Change of Terrorist Attacks'})
strikes=list(chartdf['CIA Drone Strikes'])
dronediff=[100*(y - x)/x for x, y in zip(strikes[::1], strikes[1::1])] # percent change of terrorist attacks
dronediff=pd.Series(dronediff) # turn into series
dronediff=pd.Series.to_frame(dronediff) # turn into dataframe
years=['2004-2005', '2005-2006', '2006-2007', '2007-2008', '2008-2009', '2009-2010',
'2010-2011', '2011-2012', '2012-2013', '2013-2014', '2014-2015']
years=pd.Series(years) # convert list object to series
years=pd.Series.to_frame(years) # convert series to dataframe
years=years.rename(columns={0: "Year"})
dronediff=pd.concat([dronediff,years], axis=1) # create a dataframe with years and drone strike % change
dronediff=dronediff.set_index('Year')
dronediff=dronediff.rename(columns={0:'Percent Change of Drone Strikes'})
combined=pd.concat([dronediff, diff], axis=1) # create a dataframe with drone % change and terrorism % change
combined.plot()
plt.show()
dcasualties=drones["Minimum people killed"]
dcivilians=drones["Minimum civilians killed"]
dchildren=drones['Minimum children killed']
civcas=pd.concat([dcasualties, dcivilians], axis=1)
dronedeaths=pd.concat([civcas, dchildren], axis=1)
dronedeaths=dronedeaths.drop([2016]) # new Dataframe for total, civilian and child casualty rates during drone strikes
dronedeaths.plot.area(stacked=False)
plt.title('Drone Strike Casualties')
plt.ylabel("Number Killed", fontsize=12)
plt.xlabel("Year")
plt.ylabel("Number Killed", fontsize=12)
plt.xlabel("Year")
plt.title('Drone Strike Casualties')
plt.show() # plot the new Dataframe
casualtyrates.plot.bar()
plt.title('Drone Strike Casualty Rates')
plt.show()
dcasualties=dcasualties.drop([2016]) # drop 2016 from casualty because it is not included in numter
x=np.array(numgtd.index) # create x array (years, taken from Global Terrorism Data)
y=np.array(dcasualties) # create y array (casualty rates)
y=y/y.max().astype(np.float64) # normalize casualty rate data on a scale of 0-1 and convert to float
z=np.array(numter) # create x array (number of terrorist attacks)
z=z/z.max().astype(np.float64) # normalize terrorist attack data on a scale of 0-1 and convert to float
plt.figure(figsize=(15,6)) # create a figure of size 15, 6
plt.scatter(x,y, zorder=2,
label="Drone Casualties") # plot x and y arrays as a scatter plot (casualty rates)
plt.plot(x, y, zorder=1, lw=3) # connect dots with line
plt.bar(x,z, color='red', alpha=0.6, label='Terrorist Attacks', align='center') # plot x and z arrays as a bar chart
plt.title('Drone Casualties vs. Terrorist Attacks: Normalized', fontsize=15)
plt.xlim(2003.5,2015.5) # set x upper and lower limits
plt.xlabel("Year", fontsize=12)
plt.ylabel("Casualties/Attacks", fontsize=12)
plt.tick_params(axis='y', labelleft='off', labelright='off') # turn off y axis labels because data is normalized
plt.ylim(0,1) # set y upper and lower limits
plt.legend(loc='best', fontsize='large')
plt.show()
print('Correlation:',pearsonr(y,z)) # Pearson's correlation and p value
lat=df["Latitude"]
long=df["Longitude"]
coordinates=pd.concat([lat, long], axis=1) # new DataFrame for latitude and longitude of drone strikes
lat2=gtd["latitude"]
long2=gtd["longitude"]
coordinates2=pd.concat([lat2, long2], axis=1) # new DataFrame for latitude and longitude of terrorist attacks
fig = plt.figure(figsize=(10,5)) # create a figure with size 10,5
map = Basemap(projection='gall', # make the figure a Basemap map (Gall projection)
resolution = 'h', # high image resolution, with latitude and longitude upper and lower bounds
area_thresh = 0.1,
llcrnrlon=60, llcrnrlat=24,
urcrnrlon=80, urcrnrlat=40,
lat_0=30, lon_0=70)
map.drawcoastlines()
map.drawcountries()
map.fillcontinents(color = 'tan')
map.drawmapboundary(fill_color='#f4f4f4')
x,y = map(list(coordinates['Longitude']), list(coordinates['Latitude'])) # set my coordinates df as x and y values
map.plot(x, y, 'ro', markersize=4) # plot my coordinates df points onto the map with point size 4 and color red
plt.title('Drone Strikes 2004-2013')
plt.show()
fig = plt.figure(figsize=(10,5)) # create a figure with size 10,5
map = Basemap(projection='gall', # make the figure a Basemap map
resolution = 'h', # high resolution, with latitude and longitude upper and lower bounds
area_thresh = 0.1,
llcrnrlon=60, llcrnrlat=24,
urcrnrlon=80, urcrnrlat=40,
lat_0=30, lon_0=70)
map.drawcoastlines()
map.drawcountries()
map.fillcontinents(color = 'tan')
map.drawmapboundary(fill_color='#f4f4f4')
x,y = map(list(coordinates2['longitude']), list(coordinates2['latitude'])) # set my coordinates gtd as x and y values
map.plot(x, y, 'ro', markersize=4) # plot my coordinates gtd points onto the map
plt.title('Terrorist Attacks 2004-2015')
plt.show()
terattacks=gtd.set_index(['iyear'])
ter2004=terattacks.loc[2004] # terrorist attacks in 2004
ter2015=terattacks.loc[2015] # terrorist attacks in 2015
lat2004=ter2004['latitude']
long2004=ter2004['longitude']
terfinal2004=pd.concat([lat2004, long2004], axis=1) # new Dataframe for coordinates of 2004 attacks
lat2015=ter2015['latitude']
long2015=ter2015['longitude']
terfinal2015=pd.concat([lat2015, long2015], axis=1) # new Dataframe for coordinates of 2015 attacks
fig = plt.figure(figsize=(10,5)) # create a figure with size 10,5
map = Basemap(projection='gall', # make the figure a Basemap map (Gall projection)
resolution = 'h', # high image resolution, with latitude and longitude upper and lower bounds
area_thresh = 0.1,
llcrnrlon=60, llcrnrlat=24,
urcrnrlon=80, urcrnrlat=40,
lat_0=30, lon_0=70)
map.drawcoastlines()
map.drawcountries()
map.fillcontinents(color = 'tan')
map.drawmapboundary(fill_color='#f4f4f4')
x,y = map(list(terfinal2004['longitude']), list(terfinal2004['latitude'])) # set my coordinates as x and y values
map.plot(x, y, 'ro', markersize=4) # plot my coordinates 2004 points onto the map with point size 4 and color red
plt.title('Terrorist Attack Locations 2004')
plt.show()
fig = plt.figure(figsize=(10,5)) # create a figure with size 10,5
map = Basemap(projection='gall', # make the figure a Basemap map
resolution = 'h', # high resolution, with latitude and longitude upper and lower bounds
area_thresh = 0.1,
llcrnrlon=60, llcrnrlat=24,
urcrnrlon=80, urcrnrlat=40,
lat_0=30, lon_0=70)
map.drawcoastlines()
map.drawcountries()
map.fillcontinents(color = 'tan')
map.drawmapboundary(fill_color='#f4f4f4')
x,y = map(list(terfinal2015['longitude']), list(terfinal2015['latitude'])) # set my coordinates as x and y values
map.plot(x, y, 'ro', markersize=4) # plot my coordinates 2015 points onto the map
plt.title('Terrorist Attack Locations 2015')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading in the Data/Data Cleaning
Step2: Plot 1
Step3: Plot 2
Step4: Plot 3
Step5: Plot 4
Step6: Plot 5
Step7: Plot 6
Step8: Appendix
Step9: Change of Location?
|
15,642 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
t = np.linspace(0, 10.0, 100)
plt.plot(t, np.sin(t))
plt.xlabel('Time')
plt.ylabel('Signal')
plt.title('My Plot'); # suppress text output
f = plt.figure(figsize=(9,6)) # 9" x 6", default is 8" x 5.5"
plt.plot(t, np.sin(t), 'r.');
plt.xlabel('x')
plt.ylabel('y')
from matplotlib import lines
lines.lineStyles.keys()
from matplotlib import markers
markers.MarkerStyle.markers.keys()
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(-1.0, 11.0)
plt.ylim(-1.0, 1.0)
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(0.0, 10.0)
plt.ylim(-1.0, 1.0)
plt.xticks([0,5,10], ['zero','five','10'])
plt.tick_params(axis='y', direction='inout', length=10) #modifies parameters of actual tick marks
plt.plot(np.random.rand(100), 'b-')
plt.grid(True)
plt.box(False)
plt.plot(t, np.sin(t), label='sin(t)')
plt.plot(t, np.cos(t), label='cos(t)')
plt.xlabel('t')
plt.ylabel('Signal(t)')
plt.ylim(-1.5, 1.5)
plt.xlim(right=12.0)
plt.legend()
plt.subplot(2,1,1) # 2 rows x 1 col, plot 1
plt.plot(t, np.exp(0.1*t))
plt.ylabel('Exponential')
plt.subplot(2,1,2) # 2 rows x 1 col, plot 2
plt.plot(t, t**2)
plt.ylabel('Quadratic')
plt.xlabel('x')
plt.tight_layout()
f, ax = plt.subplots(2, 2)
for i in range(2):
    for j in range(2):
        plt.sca(ax[i,j])
        plt.plot(np.random.rand(20))
        plt.xlabel('x')
        plt.ylabel('y')
plt.tight_layout()
f, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,6))
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
if i==1:
plt.xlabel('x')
if j==0:
plt.ylabel('y')
plt.tight_layout()
plt.plot(t, np.sin(t), marker='o', color='darkblue',
linestyle='--', alpha=0.3, markersize=10)
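# Keyword styling (marker=, color=, linestyle=, alpha=, markersize=) is the long-form
# equivalent of format strings like 'ro--', and also accepts full color names and RGB tuples.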
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overview
Step2: Basic plot modification
Step3: Here is a list of the single character color strings
Step4: To change the plot's limits, use xlim and ylim
Step5: You can change the ticks along a given axis by using xticks, yticks and tick_params
Step6: Box and grid
Step7: Multiple series
Step8: Subplots
Step9: In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner
Step10: The subplots function also makes it easy to pass arguments to Figure and to share axes
Step11: More marker and line styling
|
15,643 | <ASSISTANT_TASK:>
Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
def image_path(fig_id):
return os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id)
def save_fig(fig_id, tight_layout=True):
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(image_path(fig_id) + ".png", format='png', dpi=300)
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(random_state=42)
rnd_clf = RandomForestClassifier(random_state=42)
svm_clf = SVC(random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
log_clf = LogisticRegression(random_state=42)
rnd_clf = RandomForestClassifier(random_state=42)
svm_clf = SVC(probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
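# Soft voting averages the predicted class probabilities (hence probability=True on the SVC),
# which typically gives a small accuracy edge over hard majority voting.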
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap, linewidth=10)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.subplot(122)
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
bag_clf = BaggingClassifier(
DecisionTreeClassifier(splitter="random", max_leaf_nodes=16, random_state=42),
n_estimators=500, max_samples=1.0, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.02, contour=False)
plt.show()
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, n_jobs=-1, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
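# The test-set accuracy is usually close to oob_score_, since the out-of-bag samples act as
# a built-in validation set for the bagging ensemble.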
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
rnd_clf = RandomForestClassifier(random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = matplotlib.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
plt.figure(figsize=(11, 4))
for subplot, learning_rate in ((121, 1), (122, 0.5)):
sample_weights = np.ones(m)
for i in range(5):
plt.subplot(subplot)
svm_clf = SVC(kernel="rbf", C=0.05, random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
sample_weights[y_pred != y_train] *= (1 + learning_rate)
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
plt.subplot(121)
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
save_fig("boosting_plot")
plt.show()
list(m for m in dir(ada_clf) if not m.startswith("_") and m.endswith("_"))
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
save_fig("gbrt_learning_rate_plot")
plt.show()
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors)
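# np.argmin returns a zero-based index, so strictly the best tree count is argmin + 1;
# the off-by-one makes no practical difference here.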
gbrt_best = GradientBoostingRegressor(max_depth=2,n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Voting classifiers
Step2: Bagging ensembles
Step3: Random Forests
Step4: Out-of-Bag evaluation
Step5: Feature importance
Step6: AdaBoost
Step7: Gradient Boosting
Step8: Gradient Boosting with Early stopping
|
15,644 | <ASSISTANT_TASK:>
Python Code:
import nixio as nix
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from utils.notebook import print_stats
from utils.video_player import Playback
nix_file = nix.File.open('data/tracking_data.h5', nix.FileMode.ReadOnly)
print_stats(nix_file.blocks)
b = nix_file.blocks[0]
print_stats(b.data_arrays)
print_stats(b.multi_tags)
video = [a for a in b.data_arrays if a.name == "video"][0]
fig = plt.figure(facecolor='white', figsize=(1024 / 90, 768 / 90), dpi=90)
pb = Playback(fig,video)
pb.start()
# get the tag linking tracking and video data
tag = [t for t in b.multi_tags if t.name == "tracking"][0]
fig = plt.figure(facecolor='white', figsize=(1024 / 90, 768 / 90), dpi=90)
pb = Playback(fig, video, tracking_tag=tag)
pb.start()
fig = plt.figure(facecolor='white', figsize=(1024 / 90, 768 / 90), dpi=90)
pb = Playback(fig, video, tracking_tag=tag, show_orientation=True)
pb.start()
nix_file.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Storing of video data
Step2: Tracking data
Step3: Addtional Information
|
15,645 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# To generate the vector fields
import dolfin as df
import mshr
import numpy as np
import plot_vtk_matplotlib as pvm
# Matplotlib parameters can be tuned with rc.Params
# This library has modified values. For example:
# matplotlib.rcParams['font.size'] = 22
mesh = mshr.Rectangle(df.Point(-3, -3), df.Point(3, 3))
mesh = mshr.generate_mesh(mesh, 10)
vspace = df.VectorFunctionSpace(mesh, 'Lagrange', degree=1, dim=2)
vfunction = df.Expression(("sin(x[1])", "cos(x[0])"))
vfunction = df.interpolate(vfunction, vspace)
# df.plot(vfunction, interactive=True)
_file = df.File('dolfin_vector_field_2d.pvd')
_file << vfunction
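# Dolfin appends a six-digit counter to the output name, so the field is written to
# 'dolfin_vector_field_2d000000.vtu', which is the file loaded below.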
# Load our skyrmionic state
vf_plot = pvm.plot_vtk_matplotlib('./dolfin_vector_field_2d000000.vtu',
z_max=1, z_min=-1,
)
# Extract the data from the file
vf_plot.extract_data(# vtkfiletype=XMLStructuredGrid
)
# Plot the y component and arrows in black
vf_plot.plot_vtk(-3, 3,
-3, 3,
# nx=50, ny=50,
v_component='vy',
hsv_map='2d',
# figsize=(10, 10),
# savefig='hsv_map.pdf',
colorbar=True,
quiver_type='raw_colour',
)
# Plot the y component and arrows in black
vf_plot.plot_vtk(-3, 3,
-3, 3,
nx=50, ny=50,
v_component='vx',
# cmap='RdBu',
interpolator='natgrid',
interpolator_method='linear',
# figsize=(10, 10),
colorbar=True,
# quiver_type='raw_colour',
)
# Quiver plot
vf_plot.plot_quiver(quiver_type='interpolated_cmap',
x_min=-3, x_max=3,
y_min=-3, y_max=3,
linewidth=0.8,
scale=1 / 0.06,
width=0.007,
alpha=0.8,
v_component='vx',
# pivot='tail'
# savefig='vector_field.png'
)
mesh = mshr.Sphere(df.Point(0, 0), 5)
mesh = mshr.generate_mesh(mesh, 10)
vspace = df.VectorFunctionSpace(mesh, 'Lagrange', degree=1, dim=3)
vfunction = df.Expression(("sin(x[1])", "cos(x[0])", "sin(x[2])"))
vfunction = df.interpolate(vfunction, vspace)
# We can plot the field using Dolfin
# df.plot(vfunction, interactive=True)
# We can plot the mesh
# df.plot(mesh, interactive=True)
_file = df.File('dolfin_sphere_vector_field_3d.pvd')
_file << vfunction
# Load our skyrmionic state
vf_plot = pvm.plot_vtk_matplotlib('./dolfin_sphere_vector_field_3d000000.vtu',
z_max=0.8, z_min=-0.8)
# Extract the data from the file
vf_plot.extract_data()
# plot the x component and arrows in black
vf_plot.plot_vtk(-5, 5,
-5, 5,
nx=100, ny=100,
v_component='vz',
hsv_map='3d',
# figsize=(10, 8),
colorbar=True,
quiver_type='raw_colour',
)
# Quiver plot
vf_plot.plot_quiver(quiver_type='interpolated_cmap',
x_min=-5, x_max=5,
y_min=-5, y_max=5,
linewidth=0.8,
scale=1 / 0.06,
width=0.007,
alpha=0.8,
v_component='vz'
)
vf_plot.z_min, vf_plot.z_max = 4, 5
# Quiver plot
vf_plot.plot_vtk(-5, 5,
-5, 5,
nx=100, ny=100,
v_component='vy',
hsv_map='2d',
colorbar=True,
quiver_type=None,
)
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(18, 6))
comps = ['vx', 'vy', 'vz']
for i in range(3):
ax = fig.add_subplot(1, 3, i + 1)
vf_plot.plot_quiver(quiver_type='interpolated_cmap',
x_min=-5, x_max=5,
y_min=-5, y_max=5,
linewidth=0.8,
scale=1 / 0.06,
width=0.007,
alpha=0.8,
cmap='RdYlBu',
v_component=comps[i],
predefined_axis=ax,
)
ax.set_title(comps[i])
# ax.axis('off')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate 2D Vector Field using Dolfin
Step2: Now we can save the data in a VTK file. By default, Fenics saves XML files (instead of binary) using an unstructured grid, with the .vtu extension
Step3: Consequently, we initialise the plotting library passing the path to the VTK file. When loading the data, we must specify the range of spatial z values from where a slice of data is going to be plotted.
Step4: We now start exprimenting with the options. One of the most interesting functions is plot_vtk which shows an interpolated colormap using the data in the slice specified when extracting the information.
Step5: The default option is to use a colormap for the interpolated data. We can also specify to use natgrid instead of scipy for interpolating the data. The linear method is better than the default 'nn' (natural neighbours) but still a bad approximation, compared with SciPy.
Step6: We can also use a quiver plot, which can also be interpolated if necessary. Extra arguments can be passed to control the arrow definitions. The default interpolation is made with SciPy and a linear method. These options can be changed with interpolator and interpolator_method respectively.
Step7: 3D Vector Field
Step8: We save the data as before
Step9: Now we load the function and specify a slice in the middle of the sphere
Step10: If we plot the slice with a 3D HSV mapping, darker regions indicate the lower $z$ magnitudes of the vector field, while the brighter regions indicate the highest $z$ values.
Step11: We can interpolate this slice and get a quiver plot from the vector field
Step12: We can also redefine the range of $z$ values where the data is extracted, but this is not very accurate for a mesh in finite elements since many points are not exactly in the slice plane, thus the approximation can be bad. The quiver plot on top can be disabled making the quiver_type as None or False.
Step13: Extending with Matplotlib
|
15,646 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning.exercise_8 import *
print("Setup Complete")
img_rows, img_cols = 28, 28
num_classes = 10
def prep_data(raw):
y = raw[:, 0]
out_y = keras.utils.to_categorical(y, num_classes)
x = raw[:,1:]
num_images = raw.shape[0]
out_x = x.reshape(num_images, img_rows, img_cols, 1)
out_x = out_x / 255
return out_x, out_y
fashion_file = "../input/fashionmnist/fashion-mnist_train.csv"
fashion_data = np.loadtxt(fashion_file, skiprows=1, delimiter=',')
x, y = prep_data(fashion_data)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout
batch_size = 16
fashion_model = Sequential()
fashion_model.add(Conv2D(16, kernel_size=(3, 3),
activation='relu',
input_shape=(img_rows, img_cols, 1)))
fashion_model.add(Conv2D(16, (3, 3), activation='relu'))
fashion_model.add(Flatten())
fashion_model.add(Dense(128, activation='relu'))
fashion_model.add(Dense(num_classes, activation='softmax'))
fashion_model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
fashion_model.fit(x, y,
batch_size=batch_size,
epochs=3,
validation_split = 0.2)
fashion_model_1 = Sequential()
fashion_model_1.add(Conv2D(16, kernel_size=(3, 3),
activation='relu',
input_shape=(img_rows, img_cols, 1)))
fashion_model_1.add(Conv2D(16, (3, 3), activation='relu', strides=2))
fashion_model_1.add(Flatten())
fashion_model_1.add(Dense(128, activation='relu'))
fashion_model_1.add(Dense(num_classes, activation='softmax'))
fashion_model_1.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
fashion_model_1.fit(x, y,
batch_size=batch_size,
epochs=3,
validation_split = 0.2)
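# With strides=2, the second convolution halves the spatial resolution of its output,
# shrinking the flattened layer that feeds the dense layers and speeding up training,
# usually at only a small cost in accuracy.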
# Check your answer
q_1.check()
#_COMMENT_IF(PROD)_
q_1.solution()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) Increasing Stride Size in A Layer
Step2: You have the same code in the cell below, but the model is now called fashion_model_1. Change the specification of fashion_model_1 so the second convolutional layer has a stride length of 2.
Step3: For the solution, uncomment and run the cell below
|
15,647 | <ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
PROJECTNUMBER = '663413318684'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['PROJECTNUMBER'] = PROJECTNUMBER
os.environ['REGION'] = REGION
%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%bash
# Pandas will use this privatekey to access BigQuery on our behalf.
# Do NOT check in the private key into git!!!
# if you get a JWT grant error when using this key, create the key via gcp web console in IAM > Service Accounts section
KEYFILE=babyweight/trainer/privatekey.json
if [ ! -f $KEYFILE ]; then
gcloud iam service-accounts keys create \
--iam-account ${PROJECTNUMBER}-compute@developer.gserviceaccount.com \
$KEYFILE
fi
KEYDIR='babyweight/trainer'
#%writefile babyweight/trainer/model.py
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#%writefile -a babyweight/trainer/model.py
def create_queries():
    query_all = """
WITH with_ultrasound AS (
SELECT
weight_pounds AS label,
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
),
without_ultrasound AS (
SELECT
weight_pounds AS label,
'Unknown' AS is_male,
mother_age,
IF(plurality > 1, 'Multiple', 'Single') AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
),
preprocessed AS (
SELECT * from with_ultrasound
UNION ALL
SELECT * from without_ultrasound
)
SELECT
label,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
      preprocessed
    """
train_query = "{} WHERE ABS(MOD(hashmonth, 4)) < 3".format(query_all)
eval_query = "{} WHERE ABS(MOD(hashmonth, 4)) = 3".format(query_all)
return train_query, eval_query
print create_queries()[0]
#%writefile -a babyweight/trainer/model.py
def query_to_dataframe(query):
import pandas as pd
import pkgutil
privatekey = pkgutil.get_data(KEYDIR, 'privatekey.json')
print(privatekey[:200])
return pd.read_gbq(query,
project_id=PROJECT,
dialect='standard',
private_key=privatekey)
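# pandas-gbq accepts the service-account key's JSON contents directly via private_key,
# which is why the key file can be bundled inside the trainer package (see package_data in setup.py).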
def create_dataframes(frac):
# small dataset for testing
if frac > 0 and frac < 1:
sample = " AND RAND() < {}".format(frac)
else:
sample = ""
train_query, eval_query = create_queries()
train_query = "{} {}".format(train_query, sample)
eval_query = "{} {}".format(eval_query, sample)
train_df = query_to_dataframe(train_query)
eval_df = query_to_dataframe(eval_query)
return train_df, eval_df
train_df, eval_df = create_dataframes(0.001)
train_df.describe()
eval_df.head()
#%writefile -a babyweight/trainer/model.py
def input_fn(indf):
import copy
import pandas as pd
df = copy.deepcopy(indf)
# one-hot encode the categorical columns
df["plurality"] = df["plurality"].astype(pd.api.types.CategoricalDtype(
categories=["Single","Multiple","1","2","3","4","5"]))
df["is_male"] = df["is_male"].astype(pd.api.types.CategoricalDtype(
categories=["Unknown","false","true"]))
# features, label
label = df['label']
del df['label']
features = pd.get_dummies(df)
return features, label
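# Fixing the category lists above keeps the one-hot columns (and their order) identical
# across training, evaluation and online prediction, even if a batch is missing some values.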
train_x, train_y = input_fn(train_df)
print(train_x[:5])
print(train_y[:5])
from sklearn.ensemble import RandomForestRegressor
estimator = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
estimator.fit(train_x, train_y)
import numpy as np
eval_x, eval_y = input_fn(eval_df)
eval_pred = estimator.predict(eval_x)
print(eval_pred[1000:1005])
print(eval_y[1000:1005])
print(np.sqrt(np.mean((eval_pred-eval_y)*(eval_pred-eval_y))))
#%writefile -a babyweight/trainer/model.py
def train_and_evaluate(frac, max_depth=5, n_estimators=100):
import numpy as np
# get data
train_df, eval_df = create_dataframes(frac)
train_x, train_y = input_fn(train_df)
# train
from sklearn.ensemble import RandomForestRegressor
estimator = RandomForestRegressor(max_depth=max_depth, n_estimators=n_estimators, random_state=0)
estimator.fit(train_x, train_y)
# evaluate
eval_x, eval_y = input_fn(eval_df)
eval_pred = estimator.predict(eval_x)
rmse = np.sqrt(np.mean((eval_pred-eval_y)*(eval_pred-eval_y)))
print("Eval rmse={}".format(rmse))
return estimator, rmse
#%writefile -a babyweight/trainer/model.py
def save_model(estimator, gcspath, name):
from sklearn.externals import joblib
import os, subprocess, datetime
model = 'model.joblib'
joblib.dump(estimator, model)
model_path = os.path.join(gcspath, datetime.datetime.now().strftime(
'export_%Y%m%d_%H%M%S'), model)
subprocess.check_call(['gsutil', 'cp', model, model_path])
return model_path
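# The AI Platform scikit-learn runtime expects the serialized model file to be named
# exactly 'model.joblib' (or 'model.pkl' for pickle), hence the fixed name above.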
saved = save_model(estimator, 'gs://{}/babyweight/sklearn'.format(BUCKET), 'babyweight')
print saved
%writefile babyweight/trainer/task.py
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import hypertune
import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--bucket',
help = 'GCS path to output.',
required = True
)
parser.add_argument(
'--frac',
help = 'Fraction of input to process',
type = float,
required = True
)
parser.add_argument(
'--maxDepth',
help = 'Depth of trees',
type = int,
default = 5
)
parser.add_argument(
'--numTrees',
help = 'Number of trees',
type = int,
default = 100
)
parser.add_argument(
'--projectId',
help = 'ID (not name) of your project',
required = True
)
parser.add_argument(
'--job-dir',
help = 'output directory for model, automatically provided by gcloud',
required = True
)
args = parser.parse_args()
arguments = args.__dict__
model.PROJECT = arguments['projectId']
model.KEYDIR = 'trainer'
estimator, rmse = model.train_and_evaluate(arguments['frac'],
arguments['maxDepth'],
arguments['numTrees']
)
loc = model.save_model(estimator,
arguments['job_dir'], 'babyweight')
print("Saved model to {}".format(loc))
# this is for hyperparameter tuning
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=rmse,
global_step=0)
# done
!pip freeze | grep pandas
%writefile babyweight/setup.py
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from setuptools import setup
setup(name='trainer',
version='1.0',
description='Natality, with sklearn',
url='http://github.com/GoogleCloudPlatform/training-data-analyst',
author='Google',
author_email='nobody@google.com',
license='Apache2',
packages=['trainer'],
## WARNING! Do not upload this package to PyPI
## BECAUSE it contains a private key
package_data={'': ['privatekey.json']},
install_requires=[
'pandas-gbq==0.3.0',
'urllib3',
'google-cloud-bigquery==0.29.0',
'cloudml-hypertune'
],
zip_safe=False)
%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python -m trainer.task \
--bucket=${BUCKET} --frac=0.001 --job-dir=gs://${BUCKET}/babyweight/sklearn --projectId $PROJECT
%bash
RUNTIME_VERSION="1.8"
PYTHON_VERSION="2.7"
JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S")
JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}"
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $JOB_DIR \
--package-path $(pwd)/babyweight/trainer \
--module-name trainer.task \
--region us-central1 \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
-- \
--bucket=${BUCKET} --frac=0.1 --projectId $PROJECT
%bash
gsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1
%bash
MODEL_NAME="babyweight"
MODEL_VERSION="skl"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud alpha ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} \
--framework SCIKIT_LEARN --runtime-version 1.8 --python-version=2.7
import json

data = []
for i in range(2):
data.append([])
for col in eval_x:
# convert from numpy integers to standard integers
data[i].append(int(np.uint64(eval_x[col][i]).item()))
print(eval_x.columns)
print(json.dumps(data))
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials)
request_data = {'instances':
# [u'mother_age', u'gestation_weeks', u'is_male_Unknown', u'is_male_0',
# u'is_male_1', u'plurality_Single', u'plurality_Multiple',
# u'plurality_1', u'plurality_2', u'plurality_3', u'plurality_4',
# u'plurality_5']
[[24, 38, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[34, 39, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'babyweight', 'skl')
response = api.projects().predict(body=request_data, name=parent).execute()
print "response={0}".format(response)
%writefile hyperparam.yaml
trainingInput:
hyperparameters:
goal: MINIMIZE
maxTrials: 100
maxParallelTrials: 5
hyperparameterMetricTag: rmse
params:
- parameterName: maxDepth
type: INTEGER
minValue: 2
maxValue: 8
scaleType: UNIT_LINEAR_SCALE
- parameterName: numTrees
type: INTEGER
minValue: 50
maxValue: 150
scaleType: UNIT_LINEAR_SCALE
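# Note: hyperparameterMetricTag must match the tag reported via hypertune in task.py ('rmse').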
%bash
RUNTIME_VERSION="1.8"
PYTHON_VERSION="2.7"
JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S")
JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}"
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $JOB_DIR \
--package-path $(pwd)/babyweight/trainer \
--module-name trainer.task \
--region us-central1 \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--config=hyperparam.yaml \
-- \
--bucket=${BUCKET} --frac=0.01 --projectId $PROJECT
%writefile largemachine.yaml
trainingInput:
scaleTier: CUSTOM
masterType: large_model
%bash
RUNTIME_VERSION="1.8"
PYTHON_VERSION="2.7"
JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S")
JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}"
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $JOB_DIR \
--package-path $(pwd)/babyweight/trainer \
--module-name trainer.task \
--region us-central1 \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier=CUSTOM \
--config=largemachine.yaml \
-- \
--bucket=${BUCKET} --frac=1 --projectId $PROJECT --maxDepth 8 --numTrees 90
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring dataset
Step3: <h2> Creating a ML dataset using BigQuery </h2>
Step4: <h2> Creating a scikit-learn model using random forests </h2>
Step5: Packaging up as a Python package
Step6: Try out the package on a subset of the data.
Step7: <h2> Training on Cloud ML Engine </h2>
Step8: The training finished in 20 minutes with a RMSE of 1.05 lbs.
Step9: <h2> Using the model to predict </h2>
Step10: As long as you send in the data in that order, it will work
Step11: Hyperparameter tuning
Step12: If you go to the GCP console and click on the job, you will see the trial information start to populating, with the lowest rmse trial listed first. I got the best performance with these settings
|
15,648 | <ASSISTANT_TASK:>
Python Code:
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(8,5))
# how many time steps/data pts are in one batch of data
seq_length = 20
# generate evenly spaced data pts
time_steps = np.linspace(start=0, stop=np.pi, num=seq_length + 1)
data = np.sin(time_steps)
# Size becomes (seq_length+1, 1), adds an input_size dimension
data.resize((seq_length + 1, 1))
# Input X becomes all but the last piece of data
x = data[:-1]
# Target Y becomes all but the first piece of data
y = data[1:]
# display the data
plt.plot(time_steps[1:], x, 'r.', label='input, x')
plt.plot(time_steps[1:], y, 'b.', label='target, y')
plt.legend(loc='best')
plt.show()
class RNN(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(RNN, self).__init__()
self.hidden_dim=hidden_dim
# Define an RNN with specified parameters
# Batch_first means that the first dim of the input and output will be the batch_size
self.rnn = nn.RNN(input_size=input_size,
hidden_size=hidden_dim,
num_layers=n_layers,
batch_first=True)
# last, fully-connected layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x, hidden):
# x (batch_size, seq_length, input_size)
# hidden (n_layers, batch_size, hidden_dim)
# r_out (batch_size, time_step, hidden_size)
batch_size = x.size(0)
# Get RNN outputs
r_out, hidden = self.rnn(x, hidden)
# Shape output to be (batch_size*seq_length, hidden_dim)
r_out = r_out.view(-1, self.hidden_dim)
# Get final output
output = self.fc(r_out)
return output, hidden
# Test that dimensions are as expected
test_rnn = RNN(input_size=1,
output_size=1,
hidden_dim=10,
n_layers=2)
# Generate evenly spaced, test data pts
time_steps = np.linspace(0, np.pi, seq_length)
data = np.sin(time_steps)
data.resize((seq_length, 1))
# Give it a batch_size of 1 as first dimension
print('Input size before: ', torch.Tensor(data).size())
test_input = torch.Tensor(data).unsqueeze(0)
print('Input size after: ', test_input.size())
# Test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Output size: ', test_out.size())
print('Hidden state size: ', test_h.size())
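# Expected shapes here: output is (batch_size*seq_length, output_size) = (20, 1) after the
# view + fully-connected layer, and hidden is (n_layers, batch_size, hidden_dim) = (2, 1, 10).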
# Decide on hyperparameters
input_size=1
output_size=1
hidden_dim=32
n_layers=1
# Instantiate an RNN
rnn = RNN(input_size, output_size, hidden_dim, n_layers)
print(rnn)
# MSE loss and Adam optimizer with a learning rate of 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(params=rnn.parameters(),
lr=0.01)
# Train the RNN
def train(rnn, n_steps, print_every):
# Initialize the hidden state
hidden = None
for batch_i, step in enumerate(range(n_steps)):
# Defining the training data
time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # input_size=1
x = data[:-1]
y = data[1:]
# Convert data into Tensors
# unsqueeze gives a 1, batch_size dimension
x_tensor = torch.Tensor(x).unsqueeze(0)
y_tensor = torch.Tensor(y)
# Outputs from the rnn
prediction, hidden = rnn(x_tensor, hidden)
## Representing Memory ##
# Make a new variable for hidden and detach the hidden state from its history
# this way, we don't backpropagate through the entire history
hidden = hidden.data
# Calculate the loss
loss = criterion(prediction, y_tensor)
# Zero gradients
optimizer.zero_grad()
# Perform backprop and update weights
loss.backward()
optimizer.step()
# Display loss and predictions
if batch_i % print_every == 0:
print('Loss: ', loss.item())
# Plot input
plt.plot(time_steps[1:], x, 'r.')
# Plot predictions
plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions
plt.show()
return rnn
# Train the rnn and monitor results
n_steps = 75
print_every = 15
trained_rnn = train(rnn, n_steps, print_every)
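# Illustrative follow-up (not part of the original notebook): use the trained RNN to
# predict the next sine segment from a fresh seed window; the seed range below is an
# assumption chosen simply to continue the training sequence.
seed_steps = np.linspace(n_steps * np.pi, (n_steps + 1) * np.pi, seq_length + 1)
seed = np.sin(seed_steps)[:-1].reshape((seq_length, 1))
with torch.no_grad():
    pred, _ = trained_rnn(torch.Tensor(seed).unsqueeze(0), None)
print(pred.shape)  # (seq_length, 1): one prediction per input time step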
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define the RNN
Step2: Check the input and output dimensions
Step3: Training the RNN
Step4: Loss and Optimization
Step5: Defining the training function
|
15,649 | <ASSISTANT_TASK:>
Python Code:
from crpropa import *
import numpy as np
import matplotlib.pyplot as plt
# define densities
FER = Ferriere()
NAK = Nakanishi()
COR = Cordes()
R = np.linspace(0, 30*kpc, 300)
phi = np.linspace(0, 2*np.pi, 180)
n_FER_HI = np.zeros((R.shape[0],phi.shape[0]))
n_FER_HII = np.zeros((R.shape[0],phi.shape[0]))
n_FER_H2 = np.zeros((R.shape[0],phi.shape[0]))
n_FER_tot = np.zeros((R.shape[0],phi.shape[0]))
n_FER_nucl = np.zeros((R.shape[0],phi.shape[0]))
# get densitys
pos = Vector3d(0.)
for ir, r in enumerate(R):
for ip, p in enumerate(phi):
pos.x = r*np.cos(p)
pos.y = r*np.sin(p)
n_FER_HI[ir,ip]=FER.getHIDensity(pos)
n_FER_HII[ir,ip]=FER.getHIIDensity(pos)
n_FER_H2[ir,ip]=FER.getH2Density(pos)
n_FER_tot[ir,ip]=FER.getDensity(pos)
n_FER_nucl[ir,ip]=FER.getNucleonDensity(pos)
# plot radial
plt.figure()
plt.plot(R/kpc, n_FER_HI.mean(axis=1)*ccm, linestyle = '--',alpha = .7, color='red', label= 'atomic hydrogen (HI)')
plt.plot(R/kpc, n_FER_HII.mean(axis=1)*ccm, linestyle = ':',alpha = .7, color='blue', label = 'ionised hydrogen (HII)')
plt.plot(R/kpc, n_FER_H2.mean(axis=1)*ccm, linestyle = '-.',alpha = .7, color='orange', label= 'molecular hydrogen (H2)')
plt.plot(R/kpc, n_FER_tot.mean(axis=1)*ccm, color = 'black',alpha = .7, label = 'total density (HI + HII + H2)')
plt.plot(R/kpc, n_FER_nucl.mean(axis=1)*ccm, color ='green',alpha = .7, label = 'nucleon density (HI + HII + 2*H2)')
plt.xlabel('galactic Radius in kpc')
plt.ylabel('density in 1/cm^3')
plt.yscale('log')
plt.axis([0,30,10**-3,10**2])
plt.legend()
plt.show()
n_COR_R= np.zeros(R.shape)
pos = Vector3d(0.)
for ir, r in enumerate(R):
pos.x = r
n_COR_R[ir]= COR.getDensity(pos)
plt.figure()
plt.plot(R/kpc, n_COR_R*ccm, label = 'HII Cordes')
plt.xlabel('galactic Radius in kpc')
plt.ylabel('density in 1/cm^3')
plt.yscale('log')
plt.axis([0,30,10**-3,10**2])
plt.legend()
plt.show()
n_NAK_HI = np.zeros(R.shape)
n_NAK_H2 = np.zeros(R.shape)
n_NAK_tot = np.zeros(R.shape)
n_NAK_nucl= np.zeros(R.shape)
pos = Vector3d(0.)
for ir, r in enumerate(R):
pos.x=r
n_NAK_HI[ir]=NAK.getHIDensity(pos)
n_NAK_H2[ir]=NAK.getH2Density(pos)
n_NAK_tot[ir]=NAK.getDensity(pos)
n_NAK_nucl[ir]=NAK.getNucleonDensity(pos)
# plot radial
plt.figure()
plt.plot(R/kpc, n_NAK_HI*ccm, linestyle = '--',alpha = .7, color='red', label= 'atomic hydrogen (HI)')
plt.plot(R/kpc, n_NAK_H2*ccm, linestyle = '-.',alpha = .7, color='orange', label= 'molecular hydrogen (H2)')
plt.plot(R/kpc, n_NAK_tot*ccm, color = 'black',alpha = .7, label = 'total density (HI + H2)')
plt.plot(R/kpc, n_NAK_nucl*ccm, color ='green',alpha = .7, label = 'nucleon density (HI + 2*H2)')
plt.xlabel('galactic radius in kpc')
plt.ylabel('density in 1/cm^3')
plt.yscale('log')
plt.axis([0,22,10**-3,10**2])
plt.legend()
plt.show()
DL = DensityList()
FER.setIsForHII(False)
FER.setIsForH2(False)
DL.addDensity(FER) #only the active HI is added
DL.addDensity(COR) # only the active HII is added, contains no other type
NAK.setIsForHI(False)
DL.addDensity(NAK)
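# Sketch of what the list now holds: FER contributes only its HI component (HII and H2
# were switched off above), COR contributes HII (it models nothing else), and NAK
# contributes only H2 (its HI part was switched off). DL.getDensity() therefore sums
# HI(Ferriere) + HII(Cordes) + H2(Nakanishi), while getNucleonDensity() counts H2 twice.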
# plot types and sum of densities (along x-axis)
n_DL_nucl = np.zeros(R.shape)
n_DL_tot = np.zeros(R.shape)
pos = Vector3d(0.)
for ir, r in enumerate(R):
pos.x = r
n_DL_tot[ir] = DL.getDensity(pos)
    n_DL_nucl[ir] = DL.getNucleonDensity(pos)  # nucleon density counts H2 twice
plt.figure()
plt.plot(R/kpc, n_FER_HI[:,0]*ccm, label= 'HI Ferriere', linestyle =':',alpha = .7)
plt.plot(R/kpc, n_COR_R*ccm, label = 'HII Cordes', linestyle ='-.',alpha = .7)
plt.plot(R/kpc, n_NAK_H2*ccm, label='H2 Nakanishi', linestyle = '--', alpha = .7)
plt.plot(R/kpc, n_DL_tot*ccm, label= 'total', linestyle='-',alpha = .7)
plt.plot(R/kpc, n_DL_nucl*ccm, label ='nucleon', linestyle = (0, (3, 5, 1, 5, 1, 5)), alpha = .7)
plt.yscale('log')
plt.xlabel('x in kpc')
plt.ylabel('density in 1/cm^3')
plt.axis([0,30,10**-3,100])
plt.legend()
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Ferrière
Step2: Model Cordes
Step3: Model Nakanishi
Step4: Advanced use of DensityList
|
15,650 | <ASSISTANT_TASK:>
Python Code:
x = 1
y = 2
x + y
x
def add_numbers(x, y):
return x + y
add_numbers(1, 2)
def add_numbers(x,y,z=None):
if (z==None):
return x+y
else:
return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
def add_numbers(x, y, z=None, flag=False):
if (flag):
print('Flag is true!')
if (z==None):
return x + y
else:
return x + y + z
print(add_numbers(1, 2, flag=True))
def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2)
type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers)
x = (1, 'a', 2, 'b')
type(x)
x = [1, 'a', 2, 'b']
type(x)
x.append(3.3)
print(x)
for item in x:
print(item)
i=0
while( i != len(x) ):
print(x[i])
i = i + 1
[1,2] + [3,4]
[1]*3
1 in [1, 2, 3]
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
x[-1]
x[-4:-2]
x[:3]
x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
'Chris' + 2
'Chris' + str(2)
x = {'Christopher Brooks': 'brooksch@umich.edu', 'Bill Gates': 'billg@microsoft.com'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = None
x['Kevyn Collins-Thompson']
for name in x:
print(x[name])
for email in x.values():
print(email)
for name, email in x.items():
print(name)
print(email)
x = ('Christopher', 'Brooks', 'brooksch@umich.edu')
fname, lname, email = x
fname
lname
x = ('Christopher', 'Brooks', 'brooksch@umich.edu', 'Ann Arbor')
fname, lname, email = x
print('Chris' + 2)
print('Chris' + str(2))
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
import csv
%precision 2
with open('mpg.csv') as csvfile:
mpg = list(csv.DictReader(csvfile))
mpg[:3] # The first three dictionaries in our list.
len(mpg)
mpg[0].keys()
sum(float(d['cty']) for d in mpg) / len(mpg)
sum(float(d['hwy']) for d in mpg) / len(mpg)
cylinders = set(d['cyl'] for d in mpg)
cylinders
CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
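# An equivalent one-pass grouping with a dictionary accumulator (illustrative sketch,
# not part of the original course material):
totals = {}
for d in mpg:
    s, n = totals.get(d['cyl'], (0.0, 0))
    totals[d['cyl']] = (s + float(d['cty']), n + 1)
sorted((c, s / n) for c, (s, n) in totals.items())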
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
if d['class'] == t: # if the cylinder amount type matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
import datetime as dt
import time as tm
tm.time()
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates
class Person:
department = 'School of Information' #a class variable
def set_name(self, new_name): #a method
self.name = new_name
def set_location(self, new_location):
self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
for item in cheapest:
print(item)
my_function = lambda a, b, c : a + b
my_function(1, 2, 3)
my_list = []
for number in range(0, 1000):
if number % 2 == 0:
my_list.append(number)
my_list
my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
import numpy as np
mylist = [1, 2, 3]
x = np.array(mylist)
x
y = np.array([4, 5, 6])
y
m = np.array([[7, 8, 9], [10, 11, 12]])
m
m.shape
n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30
n
n = n.reshape(3, 5) # reshape array to be 3x5
n
o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
o.resize(3, 3)
o
np.ones((3, 2))
np.zeros((2, 3))
np.eye(3)
np.diag(y)
np.array([1, 2, 3] * 3)
np.repeat([1, 2, 3], 3)
p = np.ones([2, 3], int)
p
np.vstack([p, 2*p])
np.hstack([p, 2*p])
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
x.dot(y) # dot product 1*4 + 2*5 + 3*6
z = np.array([y, y**2])
print(len(z)) # number of rows of array
z = np.array([y, y**2])
z
z.shape
z.T
z.T.shape
z.dtype
z = z.astype('f')
z.dtype
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
a.argmax()
a.argmin()
s = np.arange(13)**2
s
s[0], s[4], s[-1]
s[1:5]
s[-4:]
s[-5::-2]
r = np.arange(36)
r.resize((6, 6))
r
r[2, 2]
r[3, 3:6]
r[:2, :-1]
r[-1, ::2]
r[r > 30]
r[r > 30] = 30
r
r2 = r[:3,:3]
r2
r2[:] = 0
r2
r
r_copy = r.copy()
r_copy
r_copy[:] = 10
print(r_copy, '\n')
print(r)
test = np.random.randint(0, 10, (4,3))
test
for row in test:
print(row)
for i in range(len(test)):
print(test[i])
for i, row in enumerate(test):
print('row', i, 'is', row)
test2 = test**2
test2
for i, j in zip(test, test2):
print(i,'+',j,'=',i+j)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <br>
Step2: <br>
Step3: <br>
Step4: <br>
Step5: <br>
Step6: <br>
Step7: <br>
Step8: <br>
Step9: <br>
Step10: <br>
Step11: <br>
Step12: <br>
Step13: <br>
Step14: <br>
Step15: <br>
Step16: <br>
Step17: <br>
Step18: <br>
Step19: <br>
Step20: <br>
Step21: <br>
Step22: <br>
Step23: <br>
Step24: <br>
Step25: <br>
Step26: <br>
Step27: <br>
Step28: <br>
Step29: <br>
Step30: <br>
Step31: <br>
Step32: <br>
Step33: <br>
Step34: <br>
Step35: <br>
Step36: <br>
Step37: <br>
Step38: <br>
Step39: <br>
Step40: <br>
Step41: <br>
Step42: <br>
Step43: <br>
Step44: <br>
Step45: <br>
Step46: <br>
Step47: <br>
Step48: <br>
Step49: <br>
Step50: <br>
Step51: <br>
Step52: <br>
Step53: <br>
Step54: <br>
Step55: <br>
Step56: <br>
Step57: <br>
Step58: <br>
Step59: <br>
Step60: <br>
Step61: <br>
Step62: <br>
Step63: <br>
Step64: <br>
Step65: <br>
Step66: <br>
Step67: <br>
Step68: <br>
Step69: <br>
Step70: <br>
Step71: <br>
Step72: <br>
Step73: <br>
Step74: <br>
Step75: <br>
Step76: <br>
Step77: <br>
Step78: <br>
Step79: <br>
Step80: <br>
Step81: <br>
Step82: <br>
Step83: <br>
Step84: <br>
Step85: <br>
Step86: <br>
Step87: <br>
Step88: <br>
Step89: <br>
Step90: <br>
Step91: <br>
Step92: <br>
Step93: <br>
Step94: <br>
Step95: <br>
Step96: <br>
Step97: <br>
Step98: <br>
Step99: <br>
|
15,651 | <ASSISTANT_TASK:>
Python Code:
# set up all the data for the rest of the notebook
import json
from collections import Counter
from itertools import chain
from IPython.display import HTML
def vote_table(votes):
    """Render a crappy HTML table for easy display. I'd use Pandas, but that seems like
    complete overkill for this simple task.
    """
    base_table = """
    <table>
    <tr><td>Position</td><td>Album</td><td>Votes</td></tr>
    {}
    </table>
    """
base_row = "<tr><td>{0}</td><td>{1}</td><td>{2}</td></tr>"
vote_rows = [base_row.format(idx, name, vote) for idx, (name, vote) in enumerate(votes, 1)]
return HTML(base_table.format('\n'.join(vote_rows)))
with open('shreddit_q2_votes.json', 'r') as fh:
ballots = json.load(fh)
with open('tallied_votes.json', 'r') as fh:
tallied = Counter(json.load(fh))
equal_placement_ballots = Counter(chain.from_iterable(ballots))
vote_table(tallied.most_common(10))
vote_table(equal_placement_ballots.most_common(10))
weighted_ballot = Counter()
for ballot in ballots:
for item, weight in zip(ballot, range(5, 0, -1)):
weighted_ballot[item] += weight
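# For example (sketch): a full five-item ballot contributes weights 5,4,3,2,1 from first
# to last pick, while a three-item ballot contributes only 5,4,3 because zip stops at the
# shorter sequence.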
sum(1 for _ in filter(lambda x: len(x) < 5, ballots)) / len(ballots)
vote_table(weighted_ballot.most_common(10))
regular_tally_spots = {name.lower(): pos for pos, (name, _) in enumerate(tallied.most_common(), 1)}
base_table = """
<table>
<tr><td>Album</td><td>Regular Spot</td><td>Weighted Spot</td></tr>
{}
</table>
"""
base_row = "<tr><td>{0}</td><td>{1}</td><td>{2}</td></tr>"
rows = [base_row.format(name, regular_tally_spots[name], pos)
for pos, (name, _) in enumerate(weighted_ballot.most_common(), 1)
# some albums didn't make it, like Arcturian D:
if name in regular_tally_spots]
HTML(base_table.format('\n'.join(rows)))
number_one = Counter([b[0] for b in ballots])
vote_table(number_one.most_common(10))
#regular tallying
vote_table(equal_placement_ballots.most_common())
#weighted ballot
vote_table(weighted_ballot.most_common())
#number one count
vote_table(number_one.most_common())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Analyzing Shreddit's Q2 Top 5 voting
Step3: Equal Placement Ballots
Step4: And here's the top ten from my computed tally
Step5: Weighted Tally Ballot
Step6: This handles the situation where a ballot may not be full (five votes), which make up a surpsingly non trival amount of the ballots
Step7: Anyways, what does a top ten for weighted votes end up looking like?
Step9: Hm, it's not actually all that different. Some bands move around a little bit, Deathhammer moves into the top ten using this method. But overall, the general spread is pretty much the same.
Step10: What album appeared at number one most often?
Step11: This paints a slightly different picture of the top ten. While the names are largely the same, Scar Sighted was thought of as the top album most often, despite being at two or three through the other methods. And Misþyrming is at four (okay, "2", again fast and loose with numbering) despite being the solid top choice for all other methods.
|
15,652 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
total_counts.most_common(10)
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
15,653 | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../deeprl')
import gym
env = gym.make('MountainCar-v0')
print env.action_space
print env.observation_space
print env.observation_space.low
print env.observation_space.high
print env.goal_position
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# initialize the environment
obs = env.reset()
log = []
# perform 1000 steps
for step in range(1000):
# sample a random action
action = env.action_space.sample()
# step the environment and observe the results
obs,rew,terminal,_ = env.step(action)
# store state
log.append(obs)
log = np.array(log)
#plot the trajectory
plt.title('mc trajectory')
plt.xlabel('position')
plt.ylabel('velocity')
plt.plot(log[:,0],log[:,1])
import tensorflow as tf
import tflearn as nn
def create_mlp(inputs, n_out):
# input definition, don't change this line
net = nn.input_data(placeholder=inputs)
# define the hidden layers:
# this creates a standard feedforward network layer with 25 hidden units
net = nn.fully_connected(net, 25, activation='relu', weight_decay=0.0001)
# dropout layers help prevent overfitting (0.3 dropout rate)
net = nn.dropout(net, 0.3)
net = nn.fully_connected(net, 25, activation='relu', weight_decay=0.0001)
net = nn.dropout(net, 0.3)
# output Q-values, don't change
net = nn.fully_connected(net, n_out, activation='linear')
return net
from agents import DQNAgent
from experiment import Experiment
#erase previously created tf graphs
tf.reset_default_graph()
#create a DQNAgent
agent = DQNAgent(create_mlp, # network creation function
env.action_space.n,# number of actions
env.observation_space.shape, # state dimensions
alpha=0.001, # learning rate
gamma=0.99,# discount factor
min_replay_size=1000, # samples to gather before starting training
replay_size=100000, # max samples to keep
target_freq=10000, # update the target network every #target_freq steps
batch_size=64) # number of samples to use for single update
exp = Experiment(agent, env)
# run agent on envrionment for 75K steps
rewards = exp.run_epoch(75000)
# utility function to calculate mean with sliding window
def running_mean(data,window_size=10):
return np.convolve(data,np.ones(window_size),mode='valid') / window_size
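# e.g. running_mean([0, 10, 0, 10], window_size=2) -> array([5., 5., 5.]) (illustrative)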
plt.plot(running_mean(rewards))
#create grid
highs = env.observation_space.high
lows = env.observation_space.low
Xs, Ys = np.meshgrid(np.linspace(lows[0],highs[0],30), np.linspace(lows[1],highs[1],30))
Zs = np.zeros_like(Xs)
for idx in range(Xs.size):
s = np.zeros((1,2))
s[0,0]=Xs.flat[idx]
s[0,1]=Ys.flat[idx]
Zs.flat[idx] = np.max(agent.get_values(s)) #max qvalue at state
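# Note (added for clarity): the surface below plots -max_a Q(s, a). With MountainCar's
# reward of -1 per step, this approximates the expected cost-to-go (steps remaining),
# the quantity shown in Sutton & Barto's mountain-car value-function figure.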
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(Xs, Ys, -Zs, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
obs = env.reset()
log = []
for step in range(1000):
#this is still e-greedy
action = agent.select_action(obs[np.newaxis,:])
obs,rew,terminal,_ = env.step(action)
log.append(obs)
# uncomment to visualize, might not work in notebook on all systems:
# env.render()
if terminal:
break
log = np.array(log)
#plot the trajectory
plt.title('mc trajectory')
plt.xlabel('position')
plt.ylabel('velocity')
plt.plot(log[:,0],log[:,1])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using OpenAI Gym
Step2: A gym environment contains all relevant data describing the problem. We can directly inspect the action space and the observation space of the environment. In case of mountaincar, we have a discrete action space with 3 actions and a continuous observation space with 2 variables
Step3: Using an environment is easy. Just repeatedly call the step function to perform an action from the action space and observe the results. The step function returns the next observation, the reward and a termination indicator (there is also a dict with additional info, but this is mostly unused).
Step4: Running DQN
Step5: Create a DQNAgent and run an experiment on the Mountaincar environment. The DQNAgent class implements a basic DQN. The Experiment class runs an agent on a given envrionment. The <i>run_epoch</i> method runs for a fixed number of steps and returns the cumulatie reward for each finished episode. The parameters below should give reasonable results.
Step6: now plot the rewards of the episodes performed
Step7: Since mountaincar has only 2 state variables (position and velocity) we can also plot the Q-function learnt by the neural network. Below we generate a grid of points over the state space and calculate the maximum Q-value at each state.
Step8: 3d surface plot using matplotlib
Step9: Compare with the value function obtained by Barto & Sutton
|
15,654 | <ASSISTANT_TASK:>
Python Code:
from math import pi
%run matplotlib_setup.ipy
from matplotlib import pyplot
import numpy as np
import kwant
lat=kwant.lattice.square()
L,W=30,16
def myshape(R): return (
(R[0]**2 + R[1]**2) > (L-W/2)**2 and
(R[0]**2 + R[1]**2) < (L+W/2)**2)
H=kwant.Builder()
H[lat.shape(myshape,(L,0) )]=4
H[lat.neighbors()]=1
def Aharonov_Bohm(site1,site2,phi): return np.exp(-2j*pi*phi)
for hop in H.hoppings():
if hop[0].tag[0]==1 and hop[0].tag[1]>0 and hop[1].tag[0]==0:
H[hop]=Aharonov_Bohm
sym=kwant.TranslationalSymmetry(lat.vec((1,0)))
def lead_shape(R): return abs(R[1]) < W/2 and abs(R[0]) <3
Hlead =kwant.Builder(sym)
Hlead[lat.shape(lead_shape,(0,0) )]=4
Hlead[lat.neighbors()]=1
H.attach_lead(Hlead)
H.attach_lead(Hlead.reversed())
kwant.plot(H);
Hf=H.finalized()
data = []
phis = np.linspace(0,1.,50)
for phi in phis:
smatrix = kwant.smatrix(Hf, 3.3,args=[phi])
data.append(smatrix.transmission(1, 0))
pyplot.plot(phis, data,'o');
pyplot.xlabel('$\phi = BS/(h/e)$')
pyplot.ylabel('g in unit of $(2e^2/h)$');
pyplot.title('Aharonov-Bohm effect')
L,W=100,12
def Field(site1,site2,phi):
x1,y1=site1.pos
x2,y2=site2.pos
return -np.exp(-0.5j * phi * (x1 - x2) * (y1 + y2))
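# Note (added for clarity): this is a Peierls substitution in a Landau-like gauge; the
# phase picked up around one unit plaquette is proportional to phi, so phi now acts as
# the flux per plaquette (i.e. the field strength) rather than the total flux through
# the ring.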
H[lat.neighbors()] = Field
phis = np.linspace(0.,0.0005,50)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We see that the Aharonov-Bohm effect contains several harmonics
Step2: Now run it, don't forget to change the x-scale of the plot.
|
15,655 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from properimage import single_image as s
%matplotlib inline
pixel = np.random.random((128,128))*5.
# Add some stars to it
star = [[35, 38, 35],
[38, 90, 39],
[35, 39, 34]]
for i in range(25):
x, y = np.random.randint(120, size=2)
pixel[x:x+3,y:y+3] = star
mask = np.random.randint(2, size=(128,128))
for i in range(10):
mask = mask & np.random.randint(2, size=(128,128))
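# Note (added for clarity): AND-ing 11 independent random 0/1 masks leaves each pixel
# flagged with probability ~(1/2)**11, so only a handful of the 128x128 pixels end up
# marked as bad.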
img = s.SingleImage(pixel, mask)
print(img)
img.data
plt.figure(figsize=(6,6))
plt.imshow(img.data, cmap='Greys')
img.best_sources[['x', 'y', 'cflux']]
a_fields, psf_basis = img.get_variable_psf()
len(psf_basis), psf_basis[0].shape
a_fields
plt.imshow(psf_basis[0])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can see that the img object created automatically produces an output
Step2: If you would like to acces the data inside the object img just ask for data.
Step3: As can be seen it is a numpy masked array, with bad pixels flagged.
Step4: We can check the best sources extracted.
Step5: And also obtain the estimation of PSF.
Step6: Since in our simple example we don't vary the PSF, we obtain only one PSF element and a None coefficient.
Step7: We may check the looks of the psf_basis single element.
|
15,656 | <ASSISTANT_TASK:>
Python Code:
!gvim data/SF_Si_bulk/invar.in
%cd data/SF_Si_bulk/
%run ../../../../../Code/SF/sf.py
cd ../../../
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = (9., 6.)
%matplotlib inline
sf_c = np.genfromtxt(
'data/SF_Si_bulk/Spfunctions/spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat')
sf_gw = np.genfromtxt(
'data/SF_Si_bulk/Spfunctions/spftot_gw_s1.0_p1.0_800ev.dat')
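# Assumption inferred from how the arrays are indexed below: column 0 is the energy grid
# (eV) and column 1 the total spectral function for each file.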
#!gvim spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat
plt.plot(sf_c[:,0], sf_c[:,1], label='1-pole cumulant')
plt.plot(sf_gw[:,0], sf_gw[:,1], label='GW')
plt.xlim(-50, 0)
plt.ylim(0, 300)
plt.title("Bulk Si - Spectral function - ib=1, ikpt=1")
plt.xlabel("Energy (eV)")
plt.grid(); plt.legend(loc='best')
!jupyter-nbconvert --to pdf cumulant-to-pdf.ipynb
pwd
!xpdf cumulant-to-pdf.pdf
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now I can run my script
Step2: Not very elegant, I know. It's just for demo purposes.
Step3: I have first to import a few modules/set up a few things
Step4: Next I can read the data from a local folder
Step5: Now I can plot the stored arrays.
Step6: Creating a PDF document
|
15,657 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import colab
!pip install --upgrade pip
except:
pass
!pip install -U tfx
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import os
PIPELINE_NAME = "penguin-transform"
# Output directory to store artifacts generated from the pipeline.
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
import urllib.request
import tempfile
DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.
_data_path = 'https://storage.googleapis.com/download.tensorflow.org/data/palmer_penguins/penguins_size.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_path, _data_filepath)
!head {_data_filepath}
!sed -i '/\bNA\b/d' {_data_filepath}
!head {_data_filepath}
import shutil
SCHEMA_PATH = 'schema'
_schema_uri = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/schema/raw/schema.pbtxt'
_schema_filename = 'schema.pbtxt'
_schema_filepath = os.path.join(SCHEMA_PATH, _schema_filename)
os.makedirs(SCHEMA_PATH, exist_ok=True)
urllib.request.urlretrieve(_schema_uri, _schema_filepath)
_module_file = 'penguin_utils.py'
%%writefile {_module_file}
from typing import List, Text
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_metadata.proto.v0 import schema_pb2
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
# Specify features that we will use.
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# NEW: TFX Transform will call this function.
def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.

  Args:
    inputs: map from feature keys to raw not-yet-transformed features.

  Returns:
    Map from string feature key to transformed feature.
  """
outputs = {}
# Uses features defined in _FEATURE_KEYS only.
for key in _FEATURE_KEYS:
# tft.scale_to_z_score computes the mean and variance of the given feature
# and scales the output based on the result.
outputs[key] = tft.scale_to_z_score(inputs[key])
# For the label column we provide the mapping from string to index.
# We could instead use `tft.compute_and_apply_vocabulary()` in order to
# compute the vocabulary dynamically and perform a lookup.
# Since in this example there are only 3 possible values, we use a hard-coded
# table for simplicity.
table_keys = ['Adelie', 'Chinstrap', 'Gentoo']
initializer = tf.lookup.KeyValueTensorInitializer(
keys=table_keys,
values=tf.cast(tf.range(len(table_keys)), tf.int64),
key_dtype=tf.string,
value_dtype=tf.int64)
table = tf.lookup.StaticHashTable(initializer, default_value=-1)
outputs[_LABEL_KEY] = table.lookup(inputs[_LABEL_KEY])
return outputs
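# Worked example (added for clarity, numbers are made up): if culmen_length_mm has mean
# 44 and std 5 over the whole training split, a raw value of 49 is emitted as roughly
# (49 - 44) / 5 = 1.0. tft.scale_to_z_score computes these statistics in a full pass
# over the data during the Transform step, not per training batch.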
# NEW: This function will apply the same transform operation to training data
# and serving requests.
def _apply_preprocessing(raw_features, tft_layer):
transformed_features = tft_layer(raw_features)
if _LABEL_KEY in raw_features:
transformed_label = transformed_features.pop(_LABEL_KEY)
return transformed_features, transformed_label
else:
return transformed_features, None
# NEW: This function will create a handler function which gets a serialized
# tf.example, preprocess and run an inference with it.
def _get_serve_tf_examples_fn(model, tf_transform_output):
# We must save the tft_layer to the model to ensure its assets are kept and
# tracked.
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def serve_tf_examples_fn(serialized_tf_examples):
# Expected input is a string which is serialized tf.Example format.
feature_spec = tf_transform_output.raw_feature_spec()
# Because input schema includes unnecessary fields like 'species' and
# 'island', we filter feature_spec to include required keys only.
required_feature_spec = {
k: v for k, v in feature_spec.items() if k in _FEATURE_KEYS
}
parsed_features = tf.io.parse_example(serialized_tf_examples,
required_feature_spec)
# Preprocess parsed input with transform operation defined in
# preprocessing_fn().
transformed_features, _ = _apply_preprocessing(parsed_features,
model.tft_layer)
# Run inference with ML model.
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for tuning/training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    tf_transform_output: A TFTransformOutput.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
    dictionary of Tensors, and indices is a single Tensor of label indices.
  """
dataset = data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(batch_size=batch_size),
schema=tf_transform_output.raw_metadata.schema)
transform_layer = tf_transform_output.transform_features_layer()
def apply_transform(raw_features):
return _apply_preprocessing(raw_features, transform_layer)
return dataset.map(apply_transform).repeat()
def _build_keras_model() -> tf.keras.Model:
  """Creates a DNN Keras model for classifying penguin data.

  Returns:
    A Keras Model.
  """
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [
keras.layers.Input(shape=(1,), name=key)
for key in _FEATURE_KEYS
]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
tf_transform_output,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
tf_transform_output,
batch_size=_EVAL_BATCH_SIZE)
model = _build_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# NEW: Save a computation graph including transform layer.
signatures = {
'serving_default': _get_serve_tf_examples_fn(model, tf_transform_output),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
schema_path: str, module_file: str, serving_model_dir: str,
metadata_path: str) -> tfx.dsl.Pipeline:
  """Implements the penguin pipeline with TFX."""
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Computes statistics over data for visualization and example validation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# Import the schema.
schema_importer = tfx.dsl.Importer(
source_uri=schema_path,
artifact_type=tfx.types.standard_artifacts.Schema).with_id(
'schema_importer')
# Performs anomaly detection based on statistics and data schema.
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_importer.outputs['result'])
# NEW: Transforms input data using preprocessing_fn in the 'module_file'.
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_importer.outputs['result'],
materialize=False,
module_file=module_file)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
# NEW: Pass transform_graph to the trainer.
transform_graph=transform.outputs['transform_graph'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
statistics_gen,
schema_importer,
example_validator,
transform, # NEW: Transform component was added to the pipeline.
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
tfx.orchestration.LocalDagRunner().run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
schema_path=SCHEMA_PATH,
module_file=_module_file,
serving_model_dir=SERVING_MODEL_DIR,
metadata_path=METADATA_PATH))
# List files in created model directory.
!find {SERVING_MODEL_DIR}
!saved_model_cli show --dir {SERVING_MODEL_DIR}/$(ls -1 {SERVING_MODEL_DIR} | sort -nr | head -1) --tag_set serve --signature_def serving_default
# Find a model with the latest timestamp.
model_dirs = (item for item in os.scandir(SERVING_MODEL_DIR) if item.is_dir())
model_path = max(model_dirs, key=lambda i: int(i.name)).path
loaded_model = tf.keras.models.load_model(model_path)
inference_fn = loaded_model.signatures['serving_default']
# Prepare an example and run inference.
features = {
'culmen_length_mm': tf.train.Feature(float_list=tf.train.FloatList(value=[49.9])),
'culmen_depth_mm': tf.train.Feature(float_list=tf.train.FloatList(value=[16.1])),
'flipper_length_mm': tf.train.Feature(int64_list=tf.train.Int64List(value=[213])),
'body_mass_g': tf.train.Feature(int64_list=tf.train.Int64List(value=[5400])),
}
example_proto = tf.train.Example(features=tf.train.Features(feature=features))
examples = example_proto.SerializeToString()
result = inference_fn(examples=tf.constant([examples]))
print(result['output_0'].numpy())
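# The printed values are raw logits for the 3 classes in the order used by the lookup
# table ('Adelie', 'Chinstrap', 'Gentoo'); taking the argmax over them gives the
# predicted species index (interpretation sketch, not part of the original output).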
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Feature Engineering using TFX Pipeline and TensorFlow Transform
Step2: Install TFX
Step3: Did you restart the runtime?
Step4: Set up variables
Step5: Prepare example data
Step6: Take a quick look at what the raw data looks like.
Step7: There are some entries with missing values which are represented as NA.
Step8: You should be able to see seven features which describe penguins. We will use
Step13: This schema file was created with the same pipeline as in the previous tutorial
Step15: Now you have completed all of the preparation steps to build a TFX pipeline.
Step16: Run the pipeline
Step17: You should see "INFO
Step18: You can also check the signature of the generated model using the
Step19: Because we defined serving_default with our own serve_tf_examples_fn
|
15,658 | <ASSISTANT_TASK:>
Python Code:
import os
import math
from zipfile import ZipFile
from urllib.request import urlretrieve
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import StringLookup
import matplotlib.pyplot as plt
urlretrieve("http://files.grouplens.org/datasets/movielens/ml-1m.zip", "movielens.zip")
ZipFile("movielens.zip", "r").extractall()
ratings_data = pd.read_csv(
"ml-1m/ratings.dat",
sep="::",
names=["user_id", "movie_id", "rating", "unix_timestamp"],
)
ratings_data["movie_id"] = ratings_data["movie_id"].apply(lambda x: f"movie_{x}")
ratings_data["user_id"] = ratings_data["user_id"].apply(lambda x: f"user_{x}")
ratings_data["rating"] = ratings_data["rating"].apply(lambda x: float(x))
del ratings_data["unix_timestamp"]
print(f"Number of users: {len(ratings_data.user_id.unique())}")
print(f"Number of movies: {len(ratings_data.movie_id.unique())}")
print(f"Number of ratings: {len(ratings_data.index)}")
random_selection = np.random.rand(len(ratings_data.index)) <= 0.85
train_data = ratings_data[random_selection]
eval_data = ratings_data[~random_selection]
train_data.to_csv("train_data.csv", index=False, sep="|", header=False)
eval_data.to_csv("eval_data.csv", index=False, sep="|", header=False)
print(f"Train data split: {len(train_data.index)}")
print(f"Eval data split: {len(eval_data.index)}")
print("Train and eval data files are saved.")
csv_header = list(ratings_data.columns)
user_vocabulary = list(ratings_data.user_id.unique())
movie_vocabulary = list(ratings_data.movie_id.unique())
target_feature_name = "rating"
learning_rate = 0.001
batch_size = 128
num_epochs = 3
base_embedding_dim = 64
def get_dataset_from_csv(csv_file_path, batch_size=128, shuffle=True):
return tf.data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=csv_header,
label_name=target_feature_name,
num_epochs=1,
header=False,
field_delim="|",
shuffle=shuffle,
)
def run_experiment(model):
# Compile the model.
model.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=tf.keras.losses.MeanSquaredError(),
metrics=[keras.metrics.MeanAbsoluteError(name="mae")],
)
# Read the training data.
train_dataset = get_dataset_from_csv("train_data.csv", batch_size)
# Read the test data.
eval_dataset = get_dataset_from_csv("eval_data.csv", batch_size, shuffle=False)
# Fit the model with the training data.
history = model.fit(train_dataset, epochs=num_epochs, validation_data=eval_dataset,)
return history
def embedding_encoder(vocabulary, embedding_dim, num_oov_indices=0, name=None):
return keras.Sequential(
[
StringLookup(
vocabulary=vocabulary, mask_token=None, num_oov_indices=num_oov_indices
),
layers.Embedding(
input_dim=len(vocabulary) + num_oov_indices, output_dim=embedding_dim
),
],
name=f"{name}_embedding" if name else None,
)
def create_baseline_model():
# Receive the user as an input.
user_input = layers.Input(name="user_id", shape=(), dtype=tf.string)
# Get user embedding.
user_embedding = embedding_encoder(
vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, name="user"
)(user_input)
# Receive the movie as an input.
movie_input = layers.Input(name="movie_id", shape=(), dtype=tf.string)
# Get embedding.
movie_embedding = embedding_encoder(
vocabulary=movie_vocabulary, embedding_dim=base_embedding_dim, name="movie"
)(movie_input)
# Compute dot product similarity between user and movie embeddings.
logits = layers.Dot(axes=1, name="dot_similarity")(
[user_embedding, movie_embedding]
)
# Convert to rating scale.
prediction = keras.activations.sigmoid(logits) * 5
# Create the model.
model = keras.Model(
inputs=[user_input, movie_input], outputs=prediction, name="baseline_model"
)
return model
baseline_model = create_baseline_model()
baseline_model.summary()
history = run_experiment(baseline_model)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "eval"], loc="upper left")
plt.show()
class QREmbedding(keras.layers.Layer):
def __init__(self, vocabulary, embedding_dim, num_buckets, name=None):
super(QREmbedding, self).__init__(name=name)
self.num_buckets = num_buckets
self.index_lookup = StringLookup(
vocabulary=vocabulary, mask_token=None, num_oov_indices=0
)
self.q_embeddings = layers.Embedding(num_buckets, embedding_dim,)
self.r_embeddings = layers.Embedding(num_buckets, embedding_dim,)
def call(self, inputs):
# Get the item index.
embedding_index = self.index_lookup(inputs)
# Get the quotient index.
quotient_index = tf.math.floordiv(embedding_index, self.num_buckets)
# Get the reminder index.
remainder_index = tf.math.floormod(embedding_index, self.num_buckets)
# Lookup the quotient_embedding using the quotient_index.
quotient_embedding = self.q_embeddings(quotient_index)
# Lookup the remainder_embedding using the remainder_index.
remainder_embedding = self.r_embeddings(remainder_index)
# Use multiplication as a combiner operation
return quotient_embedding * remainder_embedding
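# Worked example (added for clarity): with num_buckets = 120, item index 6039 maps to
# quotient 50 and remainder 39, so its embedding is the element-wise product of row 50
# of the quotient table and row 39 of the remainder table. Two distinct indices never
# share both rows (as long as the vocabulary fits in num_buckets**2), so each item still
# gets a distinct, lower-capacity embedding from much smaller tables.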
class MDEmbedding(keras.layers.Layer):
def __init__(
self, blocks_vocabulary, blocks_embedding_dims, base_embedding_dim, name=None
):
super(MDEmbedding, self).__init__(name=name)
self.num_blocks = len(blocks_vocabulary)
# Create vocab to block lookup.
keys = []
values = []
for block_idx, block_vocab in enumerate(blocks_vocabulary):
keys.extend(block_vocab)
values.extend([block_idx] * len(block_vocab))
self.vocab_to_block = tf.lookup.StaticHashTable(
tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1
)
self.block_embedding_encoders = []
self.block_embedding_projectors = []
# Create block embedding encoders and projectors.
for idx in range(self.num_blocks):
vocabulary = blocks_vocabulary[idx]
embedding_dim = blocks_embedding_dims[idx]
block_embedding_encoder = embedding_encoder(
vocabulary, embedding_dim, num_oov_indices=1
)
self.block_embedding_encoders.append(block_embedding_encoder)
if embedding_dim == base_embedding_dim:
self.block_embedding_projectors.append(layers.Lambda(lambda x: x))
else:
self.block_embedding_projectors.append(
layers.Dense(units=base_embedding_dim)
)
def call(self, inputs):
# Get block index for each input item.
block_indicies = self.vocab_to_block.lookup(inputs)
# Initialize output embeddings to zeros.
embeddings = tf.zeros(shape=(tf.shape(inputs)[0], base_embedding_dim))
# Generate embeddings from blocks.
for idx in range(self.num_blocks):
# Lookup embeddings from the current block.
block_embeddings = self.block_embedding_encoders[idx](inputs)
# Project embeddings to base_embedding_dim.
block_embeddings = self.block_embedding_projectors[idx](block_embeddings)
# Create a mask to filter out embeddings of items that do not belong to the current block.
mask = tf.expand_dims(tf.cast(block_indicies == idx, tf.dtypes.float32), 1)
# Set the embeddings for the items not belonging to the current block to zeros.
block_embeddings = block_embeddings * mask
# Add the block embeddings to the final embeddings.
embeddings += block_embeddings
return embeddings
movie_frequencies = ratings_data["movie_id"].value_counts()
movie_frequencies.hist(bins=10)
sorted_movie_vocabulary = list(movie_frequencies.keys())
movie_blocks_vocabulary = [
sorted_movie_vocabulary[:400], # high popularity movies block
sorted_movie_vocabulary[400:1700], # normal popularity movies block
sorted_movie_vocabulary[1700:], # low popularity movies block
]
movie_blocks_embedding_dims = [64, 32, 16]
user_embedding_num_buckets = len(user_vocabulary) // 50
def create_memory_efficient_model():
# Take the user as an input.
user_input = layers.Input(name="user_id", shape=(), dtype=tf.string)
# Get user embedding.
user_embedding = QREmbedding(
vocabulary=user_vocabulary,
embedding_dim=base_embedding_dim,
num_buckets=user_embedding_num_buckets,
name="user_embedding",
)(user_input)
# Take the movie as an input.
movie_input = layers.Input(name="movie_id", shape=(), dtype=tf.string)
# Get embedding.
movie_embedding = MDEmbedding(
blocks_vocabulary=movie_blocks_vocabulary,
blocks_embedding_dims=movie_blocks_embedding_dims,
base_embedding_dim=base_embedding_dim,
name="movie_embedding",
)(movie_input)
# Compute dot product similarity between user and movie embeddings.
logits = layers.Dot(axes=1, name="dot_similarity")(
[user_embedding, movie_embedding]
)
# Convert to rating scale.
prediction = keras.activations.sigmoid(logits) * 5
# Create the model.
model = keras.Model(
inputs=[user_input, movie_input], outputs=prediction, name="baseline_model"
)
return model
memory_efficient_model = create_memory_efficient_model()
memory_efficient_model.summary()
history = run_experiment(memory_efficient_model)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "eval"], loc="upper left")
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare the data
Step2: Create train and eval data splits
Step3: Define dataset metadata and hyperparameters
Step4: Train and evaluate the model
Step5: Experiment 1
Step6: Implement the baseline model
Step7: Notice that the number of trainable parameters is 623,744
Step8: Experiment 2
Step9: Implement Mixed Dimension embedding as a layer
Step10: Implement the memory-efficient model
Step11: You can see that we can group the movies into three blocks, and assign them 64, 32, and 16
Step12: Notice that the number of trainable parameters is 117,968, which is more than 5x less than
|
15,659 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
15,660 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib.font_manager import FontProperties
%matplotlib inline
from keras.models import model_from_json
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import LSTM
from keras import optimizers
import utils
DATA_PATH = ''
test_fname_start = 'test_linear'
train_fname_start = 'train_linear'
no_files = 1
train_X, train_y_true = utils.load_data(DATA_PATH, train_fname_start, no_files)
test_X, test_y_true = utils.load_data(DATA_PATH, test_fname_start, no_files)
print(train_X.shape, test_X.shape, train_y_true.shape, test_y_true.shape)
font0 = FontProperties();
font1 = font0.copy();
font1.set_size('xx-large');
font1.set_weight('bold');
fig = plt.figure(figsize=(30,20));
cmap = colors.ListedColormap(['white', 'black']);
#rect = l,b,w,h
rect1 = 0.2, 0.1, 0.1, 0.2
rect2 = 0.4, 0.1, 0.3, 0.2
start = 2*3
ax1= fig.add_axes(rect1);
ax2 = fig.add_axes(rect2);
im = test_X[start,:,:].copy()
ax1.imshow(im.transpose(),origin='lower', cmap=cmap, interpolation = 'none',aspect='auto');
ax1.set_title('Example of noise image',fontproperties=font1);
ax1.set_xlabel('non-dim time',fontproperties=font1);
ax1.set_ylabel('non-dim range',fontproperties=font1);
ims = test_X[start:start+3,:,:].copy()
im = np.reshape(ims, (ims.shape[0]*ims.shape[1],ims.shape[2]));
ax2.imshow(im.transpose(),origin='lower', cmap=cmap, interpolation = 'none',aspect='auto');
ax2.set_title('Example of three stacked images: noise, noise+track, noise+track',fontproperties=font1);
ax2.set_xlabel('non-dim time',fontproperties=font1);
ax2.set_ylabel('non-dim range',fontproperties=font1);
ax2.set_xlim(0,30);
ax2.set_ylim(0,30);
for i in range(0,30,10):
ax2.plot([i, i],[0, 30],'r-');
fig = plt.figure(figsize=(30,20));
cmap = colors.ListedColormap(['white', 'black']);
#rect = l,b,w,h
rect1 = 0.2, 0.1, 0.1, 0.2
rect2 = 0.22, 0.11, 0.1, 0.2
rect3 = 0.25, 0.12, 0.1, 0.2
ax1= fig.add_axes(rect3);
im = test_X[start+2,:,:].copy()
ax1.imshow(im.transpose(),origin='lower', cmap=cmap, interpolation = 'none',aspect='auto');
ax2= fig.add_axes(rect2);
im = test_X[start+1,:,:].copy()
ax2.imshow(im.transpose(),origin='lower', cmap=cmap, interpolation = 'none',aspect='auto');
ax3= fig.add_axes(rect1);
im = test_X[start,:,:].copy()
ax3.imshow(im.transpose(),origin='lower', cmap=cmap, interpolation = 'none',aspect='auto');
ax3.set_xlabel('non-dim time',fontproperties=font1);
ax3.set_ylabel('non-dim range',fontproperties=font1);
keras_model_load = True # False, True
batch_size = 3
if keras_model_load:
model_name = 'keras_3k_dat_linmodel'
model_lin = utils.load_keras_model(model_name)
else:
np.random.seed(17)
input_shape = (train_X.shape[1],train_X.shape[2])
hidden_size = 16
model_lin = Sequential()
model_lin.add(LSTM(input_shape=input_shape, output_dim=hidden_size, return_sequences=True))
model_lin.add(Dense(hidden_size))
model_lin.add(Activation('relu'))
model_lin.add(Dense(output_dim=1, activation="relu"))
optimizer = optimizers.Adam(clipnorm=2)
model_lin.compile(optimizer=optimizer, loss='binary_crossentropy')
model_lin.summary()
if not keras_model_load:
y3D = utils.track_y_3D(train_y_true, n = 10)  # 10 time steps per image (dxn; also set below for plotting)
model_lin.fit(train_X, y3D, epochs = 100, batch_size = batch_size, verbose = 1, shuffle=True)
Y_estim_train = model_lin.predict(train_X, batch_size = batch_size)
Y_estim_test = model_lin.predict(test_X, batch_size = batch_size)
print(Y_estim_train.shape, Y_estim_test.shape)
Y_estim_train=Y_estim_train.sum(axis=1)/Y_estim_train.shape[1]
Y_estim_test=Y_estim_test.sum(axis=1)/Y_estim_test.shape[1]
Y_estim_test[Y_estim_test < 0.5]=0
Y_estim_test[Y_estim_test >= 0.5]=1
Y_estim_train[Y_estim_train < 0.5]=0
Y_estim_train[Y_estim_train >= 0.5]=1
row1_train = 60
row2_train = 90
row1_test = 100
row2_test = 150
dxn = 10
utils.plot_results(test_y_true, Y_estim_test, train_X, train_y_true, Y_estim_train, test_X,
dxn, row1_train, row2_train, row1_test, row2_test, N_plots = 7)
utils.roc_dat(Y_estim_test, test_y_true, 0.5)
utils.roc_dat(Y_estim_train, train_y_true, 0.5)
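# For reference, Pd/Pfa can also be computed directly with numpy.
# A minimal sketch; it assumes test_y_true is a 0/1 numpy array and that
# utils.roc_dat reports the same two quantities at the 0.5 threshold:
y_hat = Y_estim_test.ravel()
y_true = test_y_true.ravel()
Pd = np.sum((y_hat == 1) & (y_true == 1)) / float(np.sum(y_true == 1))
Pfa = np.sum((y_hat == 1) & (y_true == 0)) / float(np.sum(y_true == 0))
print(Pd, Pfa)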
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now I load the pre-generated artificial data required for the LSTM training and testing. Note that I have used 3000 and 300 images for training and testing respectively. These images were stacked into 3D matrices, each image having a size of 10 by 30 pixels. Each image was labeled as either noisy (y[i] = 0) or including track (y[i]=1).
Step2: Next, to show the structure of the data, we look into a few example images.
Step3: Above, the left image shows a typical distribution of noise marked by black rectangles. On the right, the three images have been concatenated. From left to right, the first image corresponds to the noise shown on the left, and the next two images show a linear track, which looks like a staircase of small black rectangular boxes going down through the middle and rightmost images.
Step4: Learning to detect tracks using a Long Short-Term Memory (LSTM) model
Step5: Now we use the trained model to predict labels of the input images used for training as well as for testing
Step6: To estimate one label per image, we need to average over 10 labels that were generated for 10 time steps of each image.
Step7: Since the predicted output is real valued, using the threshold value of 0.5, I categorized the output into two categories
Step8: Results
Step9: Finally, I estimated the probability of detection (Pd) and the probability of false alarm (Pfa) on the test and on the training data. These metrics are often used to estimate a receiver operating characteristic (ROC).
|
15,661 | <ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from tqdm import trange
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
corr = np.inner(features, features)
corr /= np.max(corr)
sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
# Generate embeddings for some terms
queries = [
# Related viruses
'coronavirus', 'SARS', 'MERS',
# Regions
'Italy', 'Spain', 'Europe',
# Symptoms
'cough', 'fever', 'throat'
]
module = hub.load('https://tfhub.dev/tensorflow/cord-19/swivel-128d/3')
embeddings = module(queries)
plot_correlation(queries, embeddings)
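# The heatmap above uses (scaled) inner products, as noted in plot_correlation.
# If unit-length cosine similarity is preferred instead, a minimal sketch:
emb = np.asarray(embeddings)
emb_unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sns.heatmap(np.inner(emb_unit, emb_unit), xticklabels=queries, yticklabels=queries)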
builder = tfds.builder(name='scicite')
builder.download_and_prepare()
train_data, validation_data, test_data = builder.as_dataset(
split=('train', 'validation', 'test'),
as_supervised=True)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 10#@param {type:"integer"}
TEXT_FEATURE_NAME = builder.info.supervised_keys[0]
LABEL_NAME = builder.info.supervised_keys[1]
def label2str(numeric_label):
m = builder.info.features[LABEL_NAME].names
return m[numeric_label]
data = next(iter(train_data.batch(NUM_EXAMPLES)))
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.numpy().decode('utf8') for ex in data[0]],
LABEL_NAME: [label2str(x) for x in data[1]]
})
#@title Hyperparameters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/3' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
hub_layer = hub.KerasLayer(EMBEDDING, input_shape=[],
dtype=tf.string, trainable=TRAINABLE_MODULE)
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(3))
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 35#@param {type: "integer"}
BATCH_SIZE = 32#@param {type: "integer"}
history = model.fit(train_data.shuffle(10000).batch(BATCH_SIZE),
epochs=EPOCHS,
validation_data=validation_data.batch(BATCH_SIZE),
verbose=1)
from matplotlib import pyplot as plt
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print('%s: %.3f' % (name, value))
prediction_dataset = next(iter(test_data.batch(20)))
prediction_texts = [ex.numpy().decode('utf8') for ex in prediction_dataset[0]]
prediction_labels = [label2str(x) for x in prediction_dataset[1]]
predictions = [
label2str(x) for x in np.argmax(model.predict(prediction_texts), axis=-1)]
pd.DataFrame({
TEXT_FEATURE_NAME: prediction_texts,
LABEL_NAME: prediction_labels,
'prediction': predictions
})
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore the TF-Hub CORD-19 Swivel embeddings
Step2: Analyze the embeddings
Step3: We can see that the embeddings capture the meaning of the different terms well. Each word is similar to the other words in its cluster (e.g. "coronavirus" is highly related to "SARS" and "MERS"), while differing from words in other clusters (the similarity between "SARS" and "Spain" is close to zero).
Step4: Train a citation intent classifier
Step5: Train and evaluate the model
Step6: Evaluate the model
Step7: We can see that the loss decreases quickly and, in particular, that the accuracy rises rapidly. Let's plot a few examples to see how the predictions relate to the true labels.
|
15,662 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
from netgraph import Graph
edges = [(0, 1), (1, 1)]
Graph(edges, node_color='red', node_size=4.)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from netgraph import Graph
Graph([(0, 1), (1, 2), (2, 0)],
edge_color={(0, 1) : 'g', (1, 2) : 'lightblue', (2, 0) : np.array([1, 0, 0])},
node_size={0 : 20, 1 : 4.2, 2 : np.pi},
)
plt.show()
import matplotlib.pyplot as plt; plt.ion()
from netgraph import Graph
fig, ax = plt.subplots()
g = Graph([(0, 1), (1, 2), (2, 0)], ax=ax)
plt.show()
# make some changes
g.node_artists[0].set_alpha(0.2)
g.edge_artists[(1, 2)].set_facecolor('red')
# force redraw to display changes
fig.canvas.draw()
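# (Outside an interactive backend, e.g. in a plain script, you may also need
# plt.pause(0.001) or another plt.show() for the updated figure to appear.)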
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using a dictionary mapping individual nodes or individual edges to a property
Step2: By directly manipulating the node and edge artists.
|
15,663 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
matplotlib.style.use('ggplot')
# dataframes!
import pandas
# Construct dataframe
columns = ['eggs','sausage','bacon']
indices = ['Novel A', 'Novel B', 'Novel C']
dtm = [[50,60,60],[90,10,10], [20,70,70]]
dtm_df = pandas.DataFrame(dtm, columns = columns, index = indices)
# Show dataframe
dtm_df
# Plot our points
scatter(dtm_df['eggs'], dtm_df['sausage'])
# Make the graph look good
xlim([0,100]), ylim([0,100])
xlabel('eggs'), ylabel('sausage')
# Although we want the Cosine Distance, it is mathematically
# simpler to calculate its opposite: Cosine Similarity
from sklearn.metrics.pairwise import cosine_similarity
# So we will subtract the similarities from 1
cos_sim = cosine_similarity(dtm_df)
# And we'll make it a little easier to read
np.round(cos_sim, 2)
# Turn our DTM sideways
dtm_df.T
# Find the Cosine Distances between pairs of word-vectors
cos_sim = cosine_similarity(dtm_df.T)
# In readable format
np.round(cos_sim, 2)
# Data Wrangling
import os
import numpy as np
import pandas
from scipy.spatial.distance import cosine
from sklearn.metrics import pairwise
from sklearn.manifold import MDS, TSNE
# Natural Language Processing
import gensim
import nltk
#nltk.download('punkt')
from nltk.tokenize import word_tokenize, sent_tokenize
# Custom Tokenizer for Classroom Use
def fast_tokenize(text):
# Get a list of punctuation marks
from string import punctuation
lower_case = text.lower()
# Iterate through text removing punctuation characters
no_punct = "".join([char for char in lower_case if char not in punctuation])
# Split text over whitespace into list of words
tokens = no_punct.split()
return tokens
# Import Metadata into Pandas Dataframe
meta_df = pandas.read_csv('resources/txtlab_Novel450_English.csv')
# Check Metadata
meta_df
# Set location of corpus folder
fiction_folder = 'txtlab_Novel450_English/'
# Collect the text of each file in the 'fiction_folder' on the hard drive
# Create empty list, each entry will be the string for a given novel
novel_list = []
# Iterate through filenames in 'fiction_folder'
for filename in os.listdir(fiction_folder):
# Read novel text as single string
with open(fiction_folder + filename, 'r') as file_in:
this_novel = file_in.read()
# Add novel text as single string to master list
novel_list.append(this_novel)
# Inspect first item in novel_list
novel_list[0]
# Split each novel into sentences
sentences = [sentence for novel in novel_list for sentence in sent_tokenize(novel)]
# Inspect first sentence
sentences[0]
# Split each sentence into tokens
words_by_sentence = [fast_tokenize(sentence) for sentence in sentences]
# Remove any sentences that contain zero tokens
words_by_sentence = [sentence for sentence in words_by_sentence if sentence != []]
# Inspect first sentence
words_by_sentence[0]
# Train word2vec model from txtLab corpus
model = gensim.models.Word2Vec(words_by_sentence, size=100, window=5, \
min_count=25, sg=1, alpha=0.025, iter=5, batch_words=10000)
# Return dense word vector
model['whale']
# Find cosine distance between two given word vectors
model.similarity('pride','prejudice')
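# model.similarity is just the cosine between the two dense vectors;
# a quick check by hand (assumes both words are in the vocabulary):
v1, v2 = model['pride'], model['prejudice']
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))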
# Find nearest word vectors by cosine distance
model.most_similar('pride')
# Given a list of words, we can ask which doesn't belong
# Finds mean vector of words in list
# and identifies the word further from that mean
model.doesnt_match(['pride','prejudice', 'whale'])
# Get most similar words to BANK, in order
# to get a sense for its primary meaning
model.most_similar('bank')
# Remove the sense of "river bank" from "bank" and see what is left
model.most_similar(positive=['bank'], negative=['river'])
# Get most similar words to KING, in order
# to get a sense for its primary meaning
model.most_similar('king')
# The canonic word2vec analogy: King - Man + Woman -> Queen
model.most_similar(positive=['woman', 'king'], negative=['man'])
# Feminine Vector
model.most_similar(positive=['she','her','hers','herself'], negative=['he','him','his','himself'])
# Masculine Vector
model.most_similar(positive=['he','him','his','himself'], negative=['she','her','hers','herself'])
## EX. Use the most_similar method to find the tokens nearest to 'car' in our model.
## Do the same for 'motorcar'.
## Q. What characterizes each word in our corpus? Does this make sense?
## EX. How does our model answer the analogy: MADRID is to SPAIN as PARIS is to __________
## Q. What has our model learned about nation-states?
## EX. Perform the canonic Word2Vec addition again but leave out a term:
## Try 'king' - 'man', 'woman' - 'man', 'woman' + 'king'
## Q. What do these indicate semantically?
# Dictionary of words in model
model.wv.vocab
#model.vocab # deprecated
# Visualizing the whole vocabulary would make it hard to read
len(model.wv.vocab)
#len(model.vocab) # deprecated
# For interpretability, we'll select words that already have a semantic relation
her_tokens = [token for token,weight in model.most_similar(positive=['she','her','hers','herself'], \
negative=['he','him','his','himself'], topn=50)]
# Inspect list
her_tokens
# Get the vector for each sampled word
vectors = [model[word] for word in her_tokens]
# Calculate distances among texts in vector space
dist_matrix = pairwise.pairwise_distances(vectors, metric='cosine')
# Multi-Dimensional Scaling (Project vectors into 2-D)
mds = MDS(n_components = 2, dissimilarity='precomputed')
embeddings = mds.fit_transform(dist_matrix)
# Make a pretty graph
_, ax = plt.subplots(figsize=(10,10))
ax.scatter(embeddings[:,0], embeddings[:,1], alpha=0)
for i in range(len(vectors)):
ax.annotate(her_tokens[i], ((embeddings[i,0], embeddings[i,1])))
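# Note: MDS is initialized randomly, so the layout changes between runs;
# pass random_state=<int> to MDS(...) if a reproducible embedding is needed.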
# For comparison, here is the same graph using a masculine-pronoun vector
his_tokens = [token for token,weight in model.most_similar(positive=['he','him','his','himself'], \
negative=['she','her','hers','herself'], topn=50)]
vectors = [model[word] for word in his_tokens]
dist_matrix = pairwise.pairwise_distances(vectors, metric='cosine')
mds = MDS(n_components = 2, dissimilarity='precomputed')
embeddings = mds.fit_transform(dist_matrix)
_, ax = plt.subplots(figsize=(10,10))
ax.scatter(embeddings[:,0], embeddings[:,1], alpha=0)
for i in range(len(vectors)):
ax.annotate(his_tokens[i], ((embeddings[i,0], embeddings[i,1])))
## Q. What kinds of semantic relationships exist in the diagram above?
## Are there any words that seem out of place?
# Save current model for later use
model.wv.save_word2vec_format('resources/word2vec.txtlab_Novel150_English.txt')
#model.save_word2vec_format('resources/word2vec.txtlab_Novel150_English.txt') # deprecated
# Load up models from disk
# Model trained on Eighteenth Century Collections Online corpus (~2500 texts)
# Made available by Ryan Heuser: http://ryanheuser.org/word-vectors-1/
ecco_model = gensim.models.KeyedVectors.load_word2vec_format('resources/word2vec.ECCO-TCP.txt')
#ecco_model = gensim.models.Word2Vec.load_word2vec_format('resources/word2vec.ECCO-TCP.txt') # deprecated
# What are similar words to BANK?
ecco_model.most_similar('bank')
# What if we remove the sense of "river bank"?
ecco_model.most_similar(positive=['bank'], negative=['river'])
## EX. Heuser's blog post explores an analogy in eighteenth-century thought that
## RICHES are to VIRTUE what LEARNING is to GENIUS. How true is this in
## the ECCO-trained Word2Vec model? Is it true in the one we trained?
## Q. How might we compare word2vec models more generally?
# ECCO model: RICHES are to VIRTUE what LEARNING is to ??
# txtLab model: RICHES are to VIRTUE what LEARNING is to ??
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a DTM with a Few Pseudo-Texts
Step2: Visualize
Step3: Vectors
Step4: Vector Semantics
Step5: Word2Vec
Step6: Corpus Description
Step7: Import Corpus
Step8: Pre-Processing
Step9: 2. Word2Vec
Step10: Embeddings
Step11: Vector-Space Operations
Step12: Multiple Valences
Step13: Analogy
Step14: Gendered Vectors
Step15: Exercises
Step16: Visualization
Step17: 3. Saving/Loading Models
Step18: Exercise
|
15,664 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from cmt.components import Cem
cem = Cem()
print cem.get_output_var_names()
cem.get_input_var_names()
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print "Data type: %s" % cem.get_var_type(angle_name)
print "Units: %s" % cem.get_var_units(angle_name)
print "Grid id: %d" % cem.get_var_grid(angle_name)
print "Number of elements in grid: %d" % cem.get_grid_size(0)
print "Type of grid: %s" % cem.get_grid_type(0)
cem.initialize(None)
import numpy as np
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
cem.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.)
grid_id = cem.get_var_grid('sea_water__depth')
grid_type = cem.get_grid_type(grid_id)
grid_rank = cem.get_grid_rank(grid_id)
print 'Type of grid: %s (%dD)' % (grid_type, grid_rank)
spacing = np.empty((grid_rank, ), dtype=float)
shape = cem.get_grid_shape(grid_id)
cem.get_grid_spacing(grid_id, out=spacing)
print 'The grid has %d rows and %d columns' % (shape[0], shape[1])
print 'The spacing between rows is %f and between columns is %f' % (spacing[0], spacing[1])
z = np.empty(shape, dtype=float)
cem.get_value('sea_water__depth', out=z)
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[0] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[1] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
plot_coast(spacing, z)
qs = np.zeros_like(z)
qs[0, 100] = 1250
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
cem.get_time_step(), cem.get_time_units(), cem.get_current_time()
for time in xrange(3000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
cem.get_current_time()
plot_coast(spacing, z)
val = np.empty((5, ), dtype=float)
cem.get_value("basin_outlet~coastal_center__x_coordinate", val)
val / 100.
qs[0, 150] = 1500
for time in xrange(3750):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
qs.fill(0.)
for time in xrange(4000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
Step2: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the input variables.
Step3: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
Step4: OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.
Step5: Before running the model, let's set a couple input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline.
Step6: The main output variable for this model is water depth. In this case, the CSDMS Standard Name is much shorter
Step7: With the grid_id, we can now get information about the grid. For instance, the number of dimension and the type of grid (structured, unstructured, etc.). This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars.
Step8: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include
Step9: Allocate memory for the water depth grid and get the current values from cem.
Step10: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.
Step11: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
Step12: Right now we have waves coming in but no sediment entering the ocean. To add some discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean.
Step13: The CSDMS Standard Name for this variable is
Step14: Set the bedload flux and run the model.
Step15: Let's add another sediment source with a different flux and update the model.
Step16: Here we shut off the sediment supply completely.
|
15,665 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data/test_data2.csv', encoding='latin-1')
print(len(df))
df.head()
df['Released'] = pd.to_datetime(df['Released'])
df['Year'] = pd.DatetimeIndex(df['Released']).year
df['Month'] = pd.DatetimeIndex(df['Released']).month
df.head()
import plotly.plotly as py
from plotly.tools import FigureFactory as FF
df_a = df[['Prod_Budget', 'Year', 'Month', 'Total_Torrents']]
import plotly.plotly as py
from plotly.tools import FigureFactory as FF
df_a = df[['Prod_Budget', 'Year', 'Month', 'Total_Torrents']]
fig = FF.create_scatterplotmatrix(df_a, diag='histogram', index='Prod_Budget',
height=800, width=800)
py.iplot(fig, filename='Histograms along Diagonal Subplots')
# look at current data set AFTER year cutoff
plt.rcParams['figure.figsize'] = (15, 15)
_ = pd.tools.plotting.scatter_matrix(df_yr)
# unique list of grouped genres as strings
unq_genres = df_yr['Genre'].unique()
unq_genres = unq_genres.tolist()
#print(len(unq_genres))
#print(unq_genres[:10])
# unique list of grouped genres as lists
lst_grp_genres = []
for lst in unq_genres:
temp = []
for genre in lst.split(','):
temp.append(genre)
lst_grp_genres.append(temp)
#print(len(lst_grp_genres))
#print(lst_grp_genres)
# unique list of individual genres
ind_genre = set()
for lst in unq_genres:
for genre in lst.split(','):
ind_genre.add(genre.strip())
ind_genre = sorted(ind_genre)
#print(len(ind_genre))
#print(ind_genre)
from patsy import dmatrices
patsy_formula = 'Total_Torrents ~ Prod_Budget + Year + Genre_Single'
y, x = dmatrices(patsy_formula, data=df_sub, return_type='dataframe')
import statsmodels.api as sm
model = sm.OLS(y, x)
results = model.fit()
results.summary()
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x, y)
mod_lr_score = model.score(x, y)
mod_lr_coef = model.coef_
from sklearn import cross_validation as cv
from sklearn import metrics
x_train, x_test, y_train, y_test = cv.train_test_split(x,y,test_size=0.20,random_state=1234)
model = LinearRegression().fit(x_train, y_train)
# (note: sklearn's LinearRegression has no .summary attribute -- that's statsmodels;
# the fit metrics are computed explicitly below)
# store results
mean_sq_err = metrics.mean_squared_error(y_train,model.predict(x_train))
cv_mod_score = model.score(x_train, y_train)
# reset x, y otherwise errors occur
y, x = dmatrices(patsy_formula, data=df_sub, return_type='dataframe')
from sklearn.cross_validation import KFold
kf = KFold(len(df_sub), n_folds=10, shuffle=True)
for train_index, test_index in kf:
x_train, x_test = x.iloc[train_index], x.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
clf2 = LinearRegression().fit(x.iloc[train_index], y.iloc[train_index])
# store results
mean_sq_errKf = metrics.mean_squared_error(y_train,model.predict(x_train))
cvKf_mod_score = clf2.score(x,y)
#NORMAL RESULTS
print('Model Linear Regression Score = {0}'.format(mod_lr_score))
print(' Mean Square Error = {0}'.format(mean_sq_err))
print(' Cross Validation Model Score = {0}'.format(cv_mod_score))
print(' Mean Squred Error K-Fold = {0}'.format(mean_sq_errKf))
print('Cross Val. K-Fold Model Score = {0}'.format(cvKf_mod_score))
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_regress_exog(results,'Prod_Budget', fig=fig)
_ = plt.plot(y, model.predict(x), 'ro')
df.columns
df_sub['log_budg']=np.log(df_sub.Prod_Budget)
#df_sub['log_year']=np.log(df_sub.Year)
#df_sub['log_run']=np.log(df_sub.Runtime)
df_sub['log_tor']=np.log(df_sub.Total_Torrents)
trans = df_sub[['log_budg', 'Year', 'log_tor']]
plt.rcParams['figure.figsize'] = (15, 15)
_ = pd.tools.plotting.scatter_matrix(trans)
log_patsy_formula = 'log_tor ~ log_budg + Year + Genre_Single'
y, x = dmatrices(log_patsy_formula, data=df_sub, return_type='dataframe')
import statsmodels.formula.api as smf
results = smf.ols(formula=log_patsy_formula, data=df_sub,).fit()
results.summary()
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x, y)
# store results
log_mod_lr_score = model.score(x,y)
from sklearn import cross_validation as cv
from sklearn import metrics
x_train, x_test, y_train, y_test = cv.train_test_split(x,y,test_size=0.20,random_state=1234)
model = LinearRegression().fit(x_train, y_train)
# store results
log_mean_sq_err = metrics.mean_squared_error(y_train,model.predict(x_train))
log_cv_mod_score = model.score(x_train, y_train)
# reset x, y otherwise errors occur
y, x = dmatrices(log_patsy_formula, data=df_sub, return_type='dataframe')
from sklearn.cross_validation import KFold
kf = KFold(len(df_sub), n_folds=10, shuffle=True)
for train_index, test_index in kf:
x_train, x_test = x.iloc[train_index], x.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
clf2 = LinearRegression().fit(x.iloc[train_index], y.iloc[train_index])
# store results
log_mean_sq_errKf = metrics.mean_squared_error(y_train,model.predict(x_train))
log_cvKf_mod_score = clf2.score(x,y)
#LOG RESULTS
print('Log Model Linear Regression Score = {0}'.format(log_mod_lr_score))
print(' Log Mean Square Error = {0}'.format(log_mean_sq_err))
print(' Log Cross Validation Model Score = {0}'.format(log_cv_mod_score))
print(' Log Mean Squred Error K-Fold = {0}'.format(log_mean_sq_errKf))
print('Log Cross Val. K-Fold Model Score = {0}'.format(log_cvKf_mod_score))
_ = plt.plot(y, model.predict(x), 'bo')
plt.figure(figsize=(25,10))
ind = np.arange(len(yr_dict))
width = 0.35
bar_year = [year for year, count in yr_lst]
bar_count = [count for year, count in yr_lst]
plt.bar(ind, bar_count, width, color='r')
plt.ylabel('Count')
plt.xlabel('Year')
plt.title('Number of Torrents per Year')
plt.xticks(ind + width/2., (bar_year), rotation='vertical')
plt.yticks(np.arange(0, 91, 5))
plt.show()
#log_tor ~ log_budg + Year + Month + log_run + Genre_Single'
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_regress_exog(results,'log_budg', fig=fig)
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_regress_exog(results,'Year', fig=fig)
#fig = plt.figure(figsize=(12,8))
#fig = sm.graphics.plot_regress_exog(results,'Month', fig=fig)
plt.figure(figsize=(15,10))
plt.axis([0, 450000000, 0, 1500])
plt.xlabel('Production Budget')
plt.ylabel('Torrents')
plt.title('Torrents vs. Production Budget - All')
plt.scatter(df_sub['Prod_Budget'], df_sub['Total_Torrents'], alpha=0.5, s=50)
import plotly.plotly as py
from plotly.tools import FigureFactory as FF
df_yr = df[['Prod_Budget', 'Year', 'Month', 'Total_Torrents']]
fig = FF.create_scatterplotmatrix(df_sub, diag='histogram', index='Prod_Budget',
height=800, width=800)
py.iplot(fig, filename='Histograms along Diagonal Subplots')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert dates to datetime objects
Step2: fig = FF.create_scatterplotmatrix(df_a, diag='box', index='Prod_Budget',
Step3: Log Transform
|
15,666 | <ASSISTANT_TASK:>
Python Code:
# Let's handle units
from astropy import units as u
# Structure to map healpix' levels to their angular sizes
#
healpix_levels = {
0 : 58.63 * u.deg,
1 : 29.32 * u.deg,
2 : 14.66 * u.deg,
3 : 7.329 * u.deg,
4 : 3.665 * u.deg,
5 : 1.832 * u.deg,
6 : 54.97 * u.arcmin,
7 : 27.48 * u.arcmin,
8 : 13.74 * u.arcmin,
9 : 6.871 * u.arcmin,
10 : 3.435 * u.arcmin,
11 : 1.718 * u.arcmin,
12 : 51.53 * u.arcsec,
13 : 25.77 * u.arcsec,
14 : 12.88 * u.arcsec,
15 : 6.442 * u.arcsec,
16 : 3.221 * u.arcsec,
17 : 1.61 * u.arcsec,
18 : 805.2 * u.milliarcsecond,
19 : 402.6 * u.milliarcsecond,
20 : 201.3 * u.milliarcsecond,
21 : 100.6 * u.milliarcsecond,
22 : 50.32 * u.milliarcsecond,
23 : 25.16 * u.milliarcsecond,
24 : 12.58 * u.milliarcsecond,
25 : 6.291 * u.milliarcsecond,
26 : 3.145 * u.milliarcsecond,
27 : 1.573 * u.milliarcsecond,
28 : 786.3 * u.microarcsecond,
29 : 393.2 * u.microarcsecond
}
# as usual, matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# Load Healpix
import healpy
# Erin Sheldon's healpix_util
import healpix_util as hu
# Thomas Boch's MOCpy
import mocpy
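# Quick sanity check of the table above (a sketch): healpy can report the
# approximate pixel size for a given nside directly; for level 10 this is
# ~3.4 arcmin, matching the corresponding entry in healpix_levels.
print healpy.nside2resol(2**10, arcmin=True)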
%ls
from astropy.io import fits
chandra = fits.open('Chandra_multiwavelength.fits')[1]
print chandra.columns
# we are interested here on columns 'RA','DEC' and 'RADEC_ERR'
_data = {'ra' : chandra.data['RA'] * u.degree,
'dec': chandra.data['DEC']* u.degree,
'pos_err' : chandra.data['RADEC_ERR']* u.arcsec}
from astropy.table import Table
_table = Table(_data)
import pandas as pd
df = _table.to_pandas()
del _table,_data
df.hist('pos_err',bins=100)
plt.show()
df.describe()
# A function to find out which healpix level corresponds a given (typical) size of coverage
def size2level(size):
Returns nearest Healpix level corresponding to a given diamond size
The 'nearest' Healpix level is here to be the nearest greater level,
right before the first level smaller than 'size'.
assert size.unit
ko = None
for k,v in healpix_levels.iteritems():
if v < 2 * size:
break
ko = k
return ko
level = size2level(df.pos_err.median()* u.arcsec)
nside = 2**level
print "Typical (median) position error: \n{}".format(df.pos_err.median())
print "\nCorrespondig healpix level: {} \n\t and nsize value: {}".format(level,nside)
# Let's convert from ra,dec to theta,phi
# This function comes from mocpy
def ra2phi(ra):
"""convert equatorial ra, dec in degrees to polar theta, phi in radians"""
import math
return math.radians(ra)
def dec2theta(dec):
"""convert equatorial ra, dec in degrees to polar theta, phi in radians"""
import math
return math.pi/2 - math.radians(dec)
def radec2thetaphi(ra,dec):
_phi = ra2phi(ra)
_theta = dec2theta(dec)
return _theta,_phi
import healpy
def healpix_radec2pix(nside, ra, dec, nest=True):
_theta,_phi = radec2thetaphi(ra, dec)
return healpy.ang2pix(nside, _theta, _phi, nest=nest)
df['phi'] = df.ra.apply(ra2phi)
df['theta'] = df.dec.apply(dec2theta)
df.describe()
hp_pix_eq = df.apply(lambda x:healpix_radec2pix(nside,x.ra,x.dec,nest=True), axis=1)
hp_pix_ang = df.apply(lambda x:healpy.ang2pix(nside,x.theta,x.phi,nest=True), axis=1)
import numpy
numpy.array_equal(hp_pix_ang,hp_pix_eq)
hpix = hu.HealPix(scheme='nest',nside=nside)
hpix
hu_pix = hpix.eq2pix(ra=df.ra,dec=df.dec)
numpy.array_equal(hu_pix,hp_pix_ang) and numpy.array_equal(hu_pix,hp_pix_eq)
# Curiosity: which one is faster?
%timeit hpix.eq2pix(ra=df.ra,dec=df.dec)
%timeit df.apply(lambda x:healpix_radec2pix(nside,x.ra,x.dec,nest=True), axis=1)
# So...all results are equal \o/ and ES's is faster
# we can now go on and put it inside our DataFrame
df['hpix'] = hu_pix
df.describe()
moc = mocpy.MOC()
moc.add_pix_list(level,df.hpix)
moc.plot()
moc.write('chandra_MOC_uniq.fits')
table = Table.from_pandas(df)
table.write('chandra_MOC_radec.fits',format='fits',overwrite=True)
del df,table,moc,chandra,hpix
%ls -lh
from astropy.io import fits
xmm = fits.open('XMM_multiwavelength_cat.fits')[1]
xmm.columns.names
# we are interested here on columns 'RA','DEC' and 'RADEC_ERR'
_data = {'ra' : xmm.data['RA'] * u.degree,
'dec': xmm.data['DEC']* u.degree,
'pos_err' : xmm.data['RADEC_ERR']* u.arcsec}
df = Table(_data).to_pandas()
df.hist('pos_err',bins=100)
plt.show()
df.describe()
level = size2level(df.pos_err.median()* u.arcsec)
nside = 2**level
print "Typical (median) position error: \n{}".format(df.pos_err.median())
print "\nCorrespondig healpix level: {} \n\t and nsize value: {}".format(level,nside)
hpix = hu.HealPix(scheme='nest',nside=nside)
hpix
df['hpix'] = hpix.eq2pix(ra=df.ra,dec=df.dec)
df.describe()
moc = mocpy.MOC()
moc.add_pix_list(level,df.hpix)
moc.plot()
moc.write('xmm_MOC_uniq.fits')
table = Table.from_pandas(df)
table.write('xmm_MOC_radec.fits',format='fits',overwrite=True)
%ls -lh
def radec_2_moc(filename,ra_column,dec_column,radius_column=None,radius_value=None):
import healpix_util
import mocpy
import time
start_all = time.clock()
tbhdu = open_fits(filename)
table = radec_table(tbhdu,ra_column,dec_column,radius_column)
start_convert = time.clock()
if not radius_column:
if radius_value != None and radius_value > 0:
radius = radius_value
else:
from astropy import units
radius = 1 * units.arcsec
else:
radius = radius_mean(tbhdu,radius_column)
assert hasattr(radius,'unit')
level = size2level(radius)
nside = 2**level
hpix = healpix_util.HealPix('nest',nside)
table['hpix'] = hpix.eq2pix(table['ra'],table['dec'])
stop_convert = time.clock()
fileroot = '.'.join(filename.split('.')[:-1])
start_write_normal = time.clock()
fileout = '_'.join([fileroot,'MOC_position.fit'])
table.write(fileout,format='fits',overwrite=True)
stop_write_normal = time.clock()
start_write_moc = time.clock()
# fileout = '_'.join([fileroot,'MOC_uniq.fit'])
# moc = mocpy.MOC()
# moc.add_pix_list(level,table['hpix'])
# moc.write(fileout)
stop_write_moc = time.clock()
stop_all = time.clock()
_msg = "Time elapsed converting pixels: {}\n".format(stop_convert-start_convert)
_msg += "Time elapsed on writing the table: {}\n".format(stop_write_normal-start_write_normal)
_msg += "Time elapsed on writing MOC: {}\n".format(stop_write_moc-start_write_moc)
_msg += "Total time: {}\n".format(stop_all-start_all)
_msg += "Number of points: {}\n".format(len(table))
return _msg
def open_fits(filename,hdu=1):
from astropy.io import fits
from astropy.units import Quantity
_tab = fits.open(filename,ignore_missing_end=True)[hdu]
return _tab
def radec_table(tbhdu,ra_column,dec_column,radius_column=None):
from astropy.table import Table
from astropy import units
import numpy
_data = {'ra':tbhdu.data.field(ra_column) * units.deg,
'dec':tbhdu.data.field(dec_column) * units.deg,
'id':numpy.arange(tbhdu.header['NAXIS2'])}
if radius_column:
try:
_d = tbhdu.data.field(radius_column)
_data.update({'radius':_d})
except:
pass
return Table(_data)
def radius_mean(tbhdu,radius_column):
from astropy.units import Quantity
radius = None
if radius_column:
_radius = Quantity(tbhdu.data.field(radius_column), u.arcsec)
radius = _radius.mean()
assert radius
return radius
res = radec_2_moc('Chandra_multiwavelength.fits','RA','DEC','RADEC_ERR')
print res
res = radec_2_moc('XMM_multiwavelength_cat.fits','RA','DEC','RADEC_ERR')
print res
%ls -lh
def print_fits_columns(fitsfile,hdu=1):
from astropy.io import fits
hdul = fits.open(fitsfile,ignore_missing_end=True)
tbhdu = hdul[1]
print "Number of objects: {}\n".format(tbhdu.header['NAXIS2'])
print "{} columns:\n".format(fitsfile)
ncols = len(tbhdu.columns)
i = 0
for c in tbhdu.columns:
if i<=5:
print "\t{}; ".format(c.name)
else:
print "\t... ({} columns)".format(ncols-i)
break
i += 1
hdul.close()
print_fits_columns('photometry/hers/hers_catalogue_3sig250_no_extended.fits')
res = radec_2_moc('photometry/hers/hers_catalogue_3sig250_no_extended.fits','RA','DEC')
print res
print_fits_columns('photometry/galex/S82_gmsc_chbrandt.fit')
res = radec_2_moc('photometry/galex/S82_gmsc_chbrandt.fit','ra','dec','poserr')
print res
print_fits_columns('photometry/sdss/Stripe82_photo_chbrandt.fit')
res = radec_2_moc('photometry/sdss/Stripe82_photo_chbrandt.fit','ra','dec')
print res
print_fits_columns('photometry/shela/shela_stripe82_v1.3_cat.fits')
res = radec_2_moc('photometry/shela/shela_stripe82_v1.3_cat.fits','SDSS_RA','SDSS_DEC')
print res
print_fits_columns('photometry/spies/SpIES_ch1ch2_allaor_5s_bothchan_final.fits')
res = radec_2_moc('photometry/spies/SpIES_ch1ch2_allaor_5s_bothchan_final.fits','RA','DEC')
print res
print_fits_columns('photometry/unwise/brandt.fits')
res = radec_2_moc('photometry/unwise/brandt.fits','ra','dec')
print res
print_fits_columns('photometry/vla/first_14dec17.fits')
res = radec_2_moc('photometry/vla/first_14dec17.fits','RA','DEC')
print res
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: The libraries we can use to generate/manipulate Healpix/MOC maps are
Step5: Let's do the same with healpix_util now
Step6: MOCpy for visualizing and writing the maps
Step7: MOC catalogs now for LaMassa's XMM
Step8: Automating MOC generation from fits files
|
15,667 | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
tf.reset_default_graph()
config = tf.ConfigProto(
log_device_placement=True,
)
config.gpu_options.allow_growth=True
config.graph_options.optimizer_options.global_jit_level \
= tf.OptimizerOptions.ON_1
print(config)
sess = tf.Session(config=config)
print(sess)
from datetime import datetime
version = int(datetime.now().strftime("%s"))
num_samples = 100000
import numpy as np
import pylab
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
with tf.device("/device:XLA_GPU:0"):
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/device:XLA_GPU:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_train, y_train)
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_gpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_gpu/%s/test' % version,
graph=tf.get_default_graph())
%%time
from tensorflow.python.client import timeline
with tf.device("/device:XLA_GPU:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-xla-gpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/xla_gpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_xla_gpu.pb' % optimize_me_parent_path
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
print(unoptimized_model_graph_path)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/xla_gpu/unoptimized_xla_gpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/xla_gpu/unoptimized_xla_gpu.pb'
output_dot='/root/notebooks/unoptimized_xla_gpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/unoptimized_xla_gpu.dot \
-o /root/notebooks/unoptimized_xla_gpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/unoptimized_xla_gpu.png', width=1024, height=768)
%%bash
dot -T png /tmp/hlo_graph_1.*.dot -o /root/notebooks/hlo_graph_1.png &>/dev/null
dot -T png /tmp/hlo_graph_10.*.dot -o /root/notebooks/hlo_graph_10.png &>/dev/null
dot -T png /tmp/hlo_graph_50.*.dot -o /root/notebooks/hlo_graph_50.png &>/dev/null
dot -T png /tmp/hlo_graph_75.*.dot -o /root/notebooks/hlo_graph_75.png &>/dev/null
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reset TensorFlow Graph
Step2: Create TensorFlow Session
Step3: Generate Model Version (current timestamp)
Step4: Load Model Training and Test/Validation Data
Step5: Randomly Initialize Variables (Weights and Bias)
Step6: View Accuracy of Pre-Training, Initial Random Variables
Step7: Setup Loss Summary Operations for Tensorboard
Step8: Train Model
Step9: View Loss Summaries in Tensorboard
Step10: Show Graph
Step11: XLA JIT Visualizations
|
15,668 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from math import pi
def y(x):
return np.cos(pi*x)
x = np.linspace(-1, 1, 100)
X = np.random.uniform(-1, 1, 25)
X_data = X.reshape(25, 1)
y_obs_list = []
for i in range(len(X)):
y_obs = y(X[i]) + np.random.normal(0, 0.5)
y_obs_list.append(y_obs)
X_data = X.reshape(25, 1)
Y_data = np.asarray(y_obs_list).reshape(25, 1)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y(x))
ax.scatter(X_data, Y_data)
from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(X_data, Y_data)
print('intercept: \n', regr.intercept_)
print('coefficient: \n', regr.coef_)
print("Mean Squared Error: %.2f" % np.mean((regr.predict(X_data) - Y_data) ** 2))
print('R squared: %.2f' % regr.score(X_data, Y_data))
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(x, y(x), label = "True Model")
ax.scatter(X_data, Y_data, label = "Training Points")
ax.plot(X_data, regr.predict(X_data), label = "Regression Line")
plt.legend(loc='lower center')
plt.show
from sklearn.preprocessing import PolynomialFeatures
X_data_example = np.random.random((3, 2))
print(X_data_example)
poly = PolynomialFeatures(degree=2)
poly_X_data_example = poly.fit_transform(X_data_example)
print(poly.fit_transform(X_data_example))
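# For degree=2 and two input features [x1, x2], each transformed row is
# [1, x1, x2, x1**2, x1*x2, x2**2] -- the bias column plus all monomials
# up to total degree 2.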
from sklearn.pipeline import make_pipeline
x_plot = np.linspace(-1, 1, 100).reshape(100,1)
regression = linear_model.LinearRegression()
for degree in [2, 5, 14]:
# chain PolynomialFeatures and LinearRegression into one
# estimator (make_pipeline is just a shorthand for Pipeline)
model = make_pipeline(PolynomialFeatures(degree), regression)
model.fit(X_data, Y_data)
#predict using the linear model
y_plot = model.predict(x_plot)
#plot
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(x, y(x), label = "True Model")
ax.scatter(X_data, Y_data, label = "Training Points")
ax.plot(x_plot, y_plot, label="Degree %d" % degree)
plt.legend(loc='lower center')
#compute the mean squared error (MSE)
MSE = np.mean((model.predict(X_data) - Y_data) ** 2)
print("Mean Squared Error for Degree %d" % degree, ":")
print(MSE)
plt.title("Degree {}\nMSE = {:.2e}".format(degree, MSE))
plt.xlim(-1, 1)
plt.ylim(-2, 2)
plt.show()
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data, test_size=0.4, random_state=0)
X_train.shape, Y_train.shape
X_test.shape, Y_test.shape
regr = linear_model.LinearRegression()
regr.fit(X_train, Y_train)
# The coefficients
print('intercept: \n', regr.intercept_)
print('coefficient: \n', regr.coef_)
# The mean square error computed on validation sets
print("Mean Squared Error: %.2f" % np.mean((regr.predict(X_test) - Y_test) ** 2))
regression = linear_model.LinearRegression()
for degree in [1, 2, 5, 14]:
model = make_pipeline(PolynomialFeatures(degree), regression)
model.fit(X_train, Y_train)
print("MSE on VALIDATION SET for Degree %d" % degree, ":")
print(np.mean((model.predict(X_test) - Y_test) ** 2))
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(linear_model.LinearRegression(), X_data, Y_data, cv = 5, scoring = 'mean_squared_error')
print("MSE array:", scores)
from sklearn.metrics.scorer import SCORERS
print(SCORERS.keys())
def my_own_scoring(regr, X_data, Y_data):
return np.mean(regr.predict(X_data) == Y_data)
cross_val_score(regr, X_data, Y_data, cv = 5, scoring = my_own_scoring)
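# Caveat: exact equality between continuous predictions and targets will
# essentially never hold, so this toy scorer returns ~0.0 for each fold;
# it is shown only to illustrate the scorer(estimator, X, y) signature.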
from sklearn.cross_validation import KFold
kf = KFold(n = len(X_data), n_folds = 5, shuffle = True)
MSE_kth_list = []
print("len(kf):", len(kf))
for train_index, test_index in kf:
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X_data[train_index], X_data[test_index]
Y_train, Y_test = Y_data[train_index], Y_data[test_index]
regr = linear_model.LinearRegression().fit(X_train, Y_train)
MSE_kth = np.mean((regr.predict(X_test) - Y_test) ** 2)
MSE_kth_list.append(MSE_kth)
print("Average k-fold MSE:", np.mean(MSE_kth_list))
from sklearn.cross_validation import LeaveOneOut
loo = LeaveOneOut(len(X_data))
for train_index, test_index in loo:
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X_data[train_index], X_data[test_index]
Y_train, Y_test = Y_data[train_index], Y_data[test_index]
from sklearn.linear_model import Ridge
Ridge_regr = Ridge(alpha=0.5)
Ridge_regr.fit(X_data, Y_data)
print('Intercept: \n', Ridge_regr.intercept_)
print('Coefficient: \n', Ridge_regr.coef_)
print("Mean Squared Error: %.2f" % np.mean((Ridge_regr.predict(X_data) - Y_data) ** 2))
print('R^2: %.2f' % Ridge_regr.score(X_data, Y_data))
from sklearn.linear_model import RidgeCV
# a list of possible lambda values
lambda_list = [0.1, 1, 5, 10]
RidgeCV_regr = RidgeCV(alphas = lambda_list, scoring = 'mean_squared_error')
RidgeCV_regr.fit(X_data, Y_data)
print("Cross-Validated alpha:", RidgeCV_regr.alpha_)
from sklearn.pipeline import Pipeline
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
x_plot = np.linspace(-1, 1, 100).reshape(100,1)
for degree in [1, 2, 5, 14]:
model = make_pipeline(PolynomialFeatures(degree), RidgeCV_regr)
model.fit(X_data, Y_data)
print("Cross-Validated Lambda for degree: %d" % degree)
print(RidgeCV_regr.alpha_)
# Evaluate the models using crossvalidation
scores = cross_val_score(model, X_data, Y_data, scoring="mean_squared_error", cv=10)
y_plot = model.predict(x_plot)
#plot
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(x, y(x), label = "True Model")
ax.scatter(X_data, Y_data, label = "Training Points")
ax.plot(x_plot, y_plot, label="Degree %d" % degree)
plt.legend(loc='lower center')
print("Cross-Validated MSE:")
print(-scores.mean())
plt.title("Degree {}\nMSE = {:.2e}\n Chosen Lambda {}".format(degree, -scores.mean(), RidgeCV_regr.alpha_))
plt.xlim(-1, 1)
plt.ylim(-2, 2)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will begin by stating our "true model", defined as $y = \cos(\pi x)$.
Step2: We now add some random "noise" to it in order to generate 25 data points, i.e., 25 $(x_i, y_i)$-tuples that will form our dataset. To do so, we have
Step3: Here we want to plot the true model and the $(x_i, y_i)$-tuples we generated above.
Step4: Now, we use Scikit for Linear Regression. The module "LinearRegression" fits a linear model with coefficients $\beta = (\beta_1, \beta_2, \dots, \beta_p)$ to minimize the residual sum of squares between the observed responses in the dataset and the responses predicted by the linear approximation. Mathematically it solves a problem of the form $\min_{\beta} ||X\beta - y||_2^2$.
Step5: We now plot the results, which are obviously not very significant.
Step6: To make things more interesting, we are now going to use Scikit's linear models trained on nonlinear functions of the data. This approach maintains the generally fast performance of linear methods, while allowing them to fit a much wider range of data.
Step7: This sort of preprocessing can be streamlined with the "Pipeline" tools. In fact, Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves two purposes here
Step8: We can see that a linear function (polynomial with degree 1) is not sufficient to fit the training samples. This is called underfitting. A polynomial of degree 5 approximates the true function almost perfectly. For higher degrees the model will overfit the training data, i.e. it learns the noise of the training data.
Step9: We can train a linear regression model on (X-train, Y-train) data, and then compute the Mean Squared Error on the (X-test, Y-test) set.
Step10: We can do the same with "polynomial fitting".
Step11: What we observe is a giant increase in the MSE for the polynomial of 14th degree. This is an obvious consequence of overfitting, i.e., with a 14th degree polynomal we are minimizing empirical risk, and not the "true" risk.
Step12: The simplest way to use cross-validation is to call the "cross_val_score" helper function on the estimator and the dataset. By default, the score computed at each CV iteration is the score method of the estimator. It is possible to change this by using the scoring parameter.
Step13: When the cv argument is an integer, "cross_val_score uses" the KFold by default.
Step14: There are many built-in scoring metrics. However, we can always define our own metric
Step15: Scikit-Learn provides many tools to generate indices that can be used to generate dataset splits according to different cross validation strategies.
Step16: Each fold is constituted by two arrays
Step17: Another example of cross-validation strategy is the "LeaveOneOut" (or LOO, a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different tests set. This cross-validation procedure does not waste much data as only one sample is removed from the training set
Step18: We now turn to Ridge Regression.
Step19: RidgeCV implements ridge regression with built-in cross-validation of the alpha parameter. It defaults to Generalized Cross-Validation (GCV), an efficient form of leave-one-out (LOO) cross-validation
Step20: In this final example, we demonstrates the problems of underfitting and overfitting and how we can use Ridge regression with polynomial features to approximate nonlinear functions. The plot shows the function that we want to approximate, which is a part of the cosine function. In addition, the samples from the real function and the approximations of different models are displayed. The models have polynomial features of different degrees.
|
15,669 | <ASSISTANT_TASK:>
Python Code:
from veneer.manage import start, create_command_line, kill_all_now
import veneer
veneer_install = 'D:\\src\\projects\\Veneer\\Compiled\\Source 4.1.1.4484 (public version)'
source_version = '4.1.1'
cmd_directory = 'E:\\temp\\veneer_cmd'
path = create_command_line(veneer_install,source_version,dest=cmd_directory)
path
catchment_project='ExampleProject/CalibrationExample.rsproj'
num_copies=20 # Important - set this to be a number ~ the number of CPU cores in your system!
first_port=9950
processes,ports = start(catchment_project,
n_instances=num_copies,
ports=first_port,
debug=True,
veneer_exe=path,
remote=False)
vs = [veneer.Veneer(port=p) for p in ports]
%matplotlib inline
v = vs[0]
v.network().as_dataframe().plot()
from veneer.pest import Case
calibration = Case('CalibrationCase',optimiser='cmaes_p',model_servers=ports)
calibration.options['PRECIS']='single'
v.model.find_model_type('GR4J')
params = v.model.find_parameters('TIME.Models.RainfallRunoff.GR4J.GR4J')
params
params = params[2:]
params
ranges = {
'x1':[100.0,500.0],
'x2':[1.0,5.0],
'x3':[1.0,200.0],
'x4':[0.5,3.0]
}
ranges
for param,param_range in ranges.items():
print('Configuring %s'%param)
pest_pname = '$'+param+'$'
# 1. Tell PEST how to set the parameter
calibration.parameters.model.catchment.runoff.set_param_values(param,pest_pname)
# 2. Details of the PEST parameter. name, starting value, min, max
# Decide what to use for the initial value... half way between min and max!
initial = 0.5*(param_range[0]+param_range[1])
calibration.parameters.describe(pest_pname,initial,param_range[0],param_range[1])
print(calibration.ptf_text())
print(calibration.pcf_text())
calibration.options
import pandas as pd
flows = pd.read_csv('SyntheticObservedFlow.csv',parse_dates=True,dayfirst=True,index_col=0)
flows[0::50] # Show every fifty days
flows.plot()
start,end = flows.index[[0,-1]]
start,end
network = v.network()
nodes = network['features'].find_by_feature_type('node')
nodes._all_values('name')
calibration_node = 'G123456A'
calibration.observations.data.read_csv('SyntheticObservedFlow.csv',parse_dates=True,dayfirst=True,index_col=0)
comparison={'NetworkElement':calibration_node,'RecordingVariable':'Downstream Flow Volume'}
flows.columns
from veneer import stats
dir(stats)
help(stats.nse)
calibration.observations.compare('Flow',comparison,stat=stats.nse,aggregation='daily')
for v in vs:
veneer.log('Configuring recording for server on port %d'%v.port)
v.configure_recording(enable=[comparison],disable=[{}])
print(calibration.ptf_text())
print(calibration.pif_text())
print(calibration.pcf_text())
print(calibration.prf_text())
pest_path='C:\\PEST'
import os
os.environ['PATH'] = os.environ['PATH']+';'+pest_path
results = calibration.run()
#results = calibration.get_results()
results['parameters']
kill_all_now(processes)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Also as before, we need a copy of the Veneer client for each copy of the server
Step2: The catchment
Step3: Describing the PEST 'Job'
Step4: At very least, we need to give a Case a name - which is the basis for all the filenames that will be written out.
Step5: PEST has many options - most of which we leave at default. One option that we currently need is to put PEST into single precision mode. This is because PEST, in double precision mode, uses a syntax for floating point literals that is not valid Python
Step6: Configuring the calibration parameters
Step7: We only want to calibrate x1-x4 - (C and k are specific to the eWater version of GR4J - they provide a baseflow filter)
Step8: We need to assign ranges to each of these. The model implementation in Source has metadata about suitable ranges - but at this stage, there isn't an easy way to interrogate that information from veneer-py. You can check in the Source user interface (Edit|Rainfall Runoff Models) to see the ranges.
Step9: Now, we can loop over each parameter and 'teach' PEST about it - ie tell PEST how to modify the parameter and tell PEST what range we want to calibrate over
Step10: Note
Step11: There are still gaps in the PTF - eg the # Compute Stats section - that will come as we describe the outputs and observations.
Step12: Note
Step13: Configuring the outputs and observations
Step14: Note
Step15: This (synthetic) observed flow sequence relates to the (synthetic) gauge towards the bottom of the system. What was it called?
Step16: Aaah, we want 'G123456A'
Step17: Now we can tell PEST about the observations and the comparison we want.
Step18: And we can set up the comparison
Step19: veneer-py configures the observation based on the column name in the observed flow file (so that you can have multiple comparisons from different columns and files)
Step20: We also need to reference a stats function. You can write your own (but you'll need to store it in a .py file) or you can access one from veneer.stats
Step21: We need to do one more thing
Step22: If we look at the content of the PEST config files now, we'll see more details filled in
Step23: Running PEST
|
15,670 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import analytic
import thinkstats2
import thinkplot
thinkplot.PrePlot(3)
for lam in [2.0, 1, 0.5]:
xs, ps = thinkstats2.RenderExpoCdf(lam, 0, 3.0, 50)
label = r'$\lambda=%g$' % lam
thinkplot.Plot(xs, ps, label=label)
thinkplot.Config(title='Exponential CDF', xlabel='x', ylabel='CDF',
loc='lower right')
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Time between births (minutes)', ylabel='CDF')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel='Time between births (minutes)',
ylabel='CCDF', yscale='log', loc='upper right')
thinkplot.PrePlot(3)
mus = [1.0, 2.0, 3.0]
sigmas = [0.5, 0.4, 0.3]
for mu, sigma in zip(mus, sigmas):
xs, ps = thinkstats2.RenderNormalCdf(mu=mu, sigma=sigma,
low=-1.0, high=4.0)
label = r'$\mu=%g$, $\sigma=%g$' % (mu, sigma)
thinkplot.Plot(xs, ps, label=label)
thinkplot.Config(title='Normal CDF', xlabel='x', ylabel='CDF',
loc='upper left')
preg = nsfg.ReadFemPreg()
weights = preg.totalwgt_lb.dropna()
# estimate parameters: trimming outliers yields a better fit
mu, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
print('Mean, Var', mu, var)
# plot the model
sigma = np.sqrt(var)
print('Sigma', sigma)
xs, ps = thinkstats2.RenderNormalCdf(mu, sigma, low=0, high=12.5)
thinkplot.Plot(xs, ps, label='model', color='0.6')
# plot the data
cdf = thinkstats2.Cdf(weights, label='data')
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf)
thinkplot.Config(title='Birth weights',
xlabel='Birth weight (pounds)',
ylabel='CDF')
n = 1000
thinkplot.PrePlot(3)
mus = [0, 1, 5]
sigmas = [1, 1, 2]
for mu, sigma in zip(mus, sigmas):
sample = np.random.normal(mu, sigma, n)
xs, ys = thinkstats2.NormalProbability(sample)
label = '$\mu=%d$, $\sigma=%d$' % (mu, sigma)
thinkplot.Plot(xs, ys, label=label)
thinkplot.Config(title='Normal probability plot',
xlabel='standard normal sample',
ylabel='sample values')
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-4, 4]
fxs, fys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(fxs, fys, linewidth=4, color='0.8')
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label='all live')
thinkplot.Config(title='Normal probability plot',
xlabel='Standard deviations from mean',
ylabel='Birth weight (lbs)')
full_term = preg[preg.prglngth >= 37]
term_weights = full_term.totalwgt_lb.dropna()
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-4, 4]
fxs, fys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(fxs, fys, linewidth=4, color='0.8')
thinkplot.PrePlot(2)
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label='all live')
xs, ys = thinkstats2.NormalProbability(term_weights)
thinkplot.Plot(xs, ys, label='full term')
thinkplot.Config(title='Normal probability plot',
xlabel='Standard deviations from mean',
ylabel='Birth weight (lbs)')
import brfss
df = brfss.ReadBrfss()
weights = df.wtkg2.dropna()
def MakeNormalModel(weights):
Plots a CDF with a Normal model.
weights: sequence
cdf = thinkstats2.Cdf(weights, label='weights')
mean, var = thinkstats2.TrimmedMeanVar(weights)
std = np.sqrt(var)
print('n, mean, std', len(weights), mean, std)
xmin = mean - 4 * std
xmax = mean + 4 * std
xs, ps = thinkstats2.RenderNormalCdf(mean, std, xmin, xmax)
thinkplot.Plot(xs, ps, label='model', linewidth=4, color='0.8')
thinkplot.Cdf(cdf)
MakeNormalModel(weights)
thinkplot.Config(title='Adult weight, linear scale', xlabel='Weight (kg)',
ylabel='CDF', loc='upper right')
log_weights = np.log10(weights)
MakeNormalModel(log_weights)
thinkplot.Config(title='Adult weight, log scale', xlabel='Weight (log10 kg)',
ylabel='CDF', loc='upper right')
def MakeNormalPlot(weights):
Generates a normal probability plot of birth weights.
weights: sequence
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-5, 5]
xs, ys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(xs, ys, color='0.8', label='model')
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label='weights')
MakeNormalPlot(weights)
thinkplot.Config(title='Adult weight, normal plot', xlabel='Weight (kg)',
ylabel='CDF', loc='upper left')
MakeNormalPlot(log_weights)
thinkplot.Config(title='Adult weight, lognormal plot', xlabel='Weight (log10 kg)',
ylabel='CDF', loc='upper left')
xmin = 0.5
thinkplot.PrePlot(3)
for alpha in [2.0, 1.0, 0.5]:
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r'$\alpha=%g$' % alpha)
thinkplot.Config(title='Pareto CDF', xlabel='x',
ylabel='CDF', loc='lower right')
import populations
pops = populations.ReadData()
print('Number of cities/towns', len(pops))
log_pops = np.log10(pops)
cdf = thinkstats2.Cdf(pops, label='data')
cdf_log = thinkstats2.Cdf(log_pops, label='data')
# pareto plot
xs, ys = thinkstats2.RenderParetoCdf(xmin=5000, alpha=1.4, low=0, high=1e7)
thinkplot.Plot(np.log10(xs), 1-ys, label='model', color='0.8')
thinkplot.Cdf(cdf_log, complement=True)
thinkplot.Config(xlabel='log10 population',
ylabel='CCDF',
yscale='log', loc='lower left')
thinkplot.PrePlot(cols=2)
mu, sigma = log_pops.mean(), log_pops.std()
xs, ps = thinkstats2.RenderNormalCdf(mu, sigma, low=0, high=8)
thinkplot.Plot(xs, ps, label='model', color='0.8')
thinkplot.Cdf(cdf_log)
thinkplot.Config(xlabel='log10 population',
ylabel='CDF', loc='lower right')
thinkstats2.NormalProbabilityPlot(log_pops, label='data')
thinkplot.Config(xlabel='Random variate',
ylabel='log10 population',
xlim=[-5, 5])
import random
def expovariate(lam):
p = random.random()
x = -np.log(1-p) / lam
return x
t = [expovariate(lam=2) for _ in range(1000)]
cdf = thinkstats2.Cdf(t)
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel='Exponential variate', ylabel='CCDF', yscale='log')
import scipy.stats
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
dist.mean(), dist.std()
dist.cdf(mu-sigma)
# Solution
low = dist.cdf(177.8) # 5'10"
high = dist.cdf(185.4) # 6'1"
low, high, high-low
alpha = 1.7
xmin = 1 # meter
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
# Solution
dist.mean()
# Solution
dist.cdf(dist.mean())
# Solution
(1 - dist.cdf(1000)) * 7e9, dist.sf(1000) * 7e9
# Solution
# One way to solve this is to search for a height that we
# expect one person out of 7 billion to exceed.
# It comes in at roughly 600 kilometers.
dist.sf(600000) * 7e9
# Solution
# Another way is to use `ppf`, which evaluates the "percent point function", which
# is the inverse CDF. So we can compute the height in meters that corresponds to
# the probability (1 - 1/7e9).
dist.ppf(1 - 1/7e9)
sample = [random.weibullvariate(2, 1) for _ in range(1000)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, transform='weibull')
thinkplot.Config(xlabel='Weibull variate', ylabel='CCDF')
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
n = len(diffs)
lam = 44.0 / 24 / 60
sample = [random.expovariate(lam) for _ in range(n)]
1/lam, np.mean(sample)
# Solution
model = thinkstats2.Cdf(sample, label='model')
thinkplot.PrePlot(2)
thinkplot.Cdfs([cdf, model], complement=True)
thinkplot.Config(xlabel='Time between births (minutes)',
ylabel='CCDF',
yscale='log')
# Solution
# If you plot distributions for a large number of samples, you get a sense
# of how much random variation to expect. In this case, the data fall within
# the range we expect, so there is no compelling reason to think it is
# not exponential.
for i in range(100):
sample = [random.expovariate(lam) for _ in range(n)]
thinkplot.Cdf(thinkstats2.Cdf(sample), complement=True, color='0.9')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel='Time between births (minutes)',
ylabel='CCDF',
yscale='log')
import hinc
df = hinc.ReadData()
df
xs, ps = df.income.values, df.ps.values
cdf = thinkstats2.Cdf(xs, ps, label='data')
cdf_log = thinkstats2.Cdf(np.log10(xs), ps, label='data')
# linear plot
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='household income',
ylabel='CDF')
xs, ys = thinkstats2.RenderParetoCdf(xmin=55000, alpha=2.5,
low=0, high=250000)
thinkplot.Plot(xs, 1-ys, label='model', color='0.8')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel='log10 household income',
ylabel='CCDF',
xscale='log',
yscale='log',
loc='lower left')
median = cdf_log.Percentile(50)
iqr = cdf_log.Percentile(75) - cdf_log.Percentile(25)
std = iqr / 1.349
# choose std to match the upper tail
std = 0.35
print(median, std)
xs, ps = thinkstats2.RenderNormalCdf(median, std, low=3.5, high=5.5)
thinkplot.Plot(xs, ps, label='model', color='0.8')
thinkplot.Cdf(cdf_log)
thinkplot.Config(xlabel='log10 household income',
ylabel='CDF')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exponential distribution
Step2: Here's the distribution of interarrival times from a dataset of birth times.
Step3: Here's what the CCDF looks like on a log-y scale. A straight line is consistent with an exponential distribution.
Step4: Normal distribution
Step5: I'll use a normal model to fit the distribution of birth weights from the NSFG.
Step6: Here's the observed CDF and the model. The model fits the data well except in the left tail.
Step7: A normal probability plot is a visual test for normality. The following example shows that if the data are actually from a normal distribution, the plot is approximately straight.
Step8: Here's the normal probability plot for birth weights, showing that the lightest babies are lighter than we expect from the normal mode, and the heaviest babies are heavier.
Step9: If we suspect that the deviation in the left tail is due to preterm babies, we can check by selecting only full term births.
Step10: Now the deviation in the left tail is almost gone, but the heaviest babies are still heavy.
Step11: Lognormal model
Step13: The following function estimates the parameters of a normal distribution and plots the data and a normal model.
Step14: Here's the distribution of adult weights and a normal model, which is not a very good fit.
Step15: Here's the distribution of adult weight and a lognormal model, plotted on a log-x scale. The model is a better fit for the data, although the heaviest people are heavier than the model expects.
Step17: The following function generates a normal probability plot.
Step18: When we generate a normal probability plot with adult weights, we can see clearly that the data deviate from the model systematically.
Step19: If we make a normal probability plot with log weights, the model fit the data well except in the tails, where the heaviest people exceed expectations.
Step20: Pareto distribution
Step21: The distribution of populations for cities and towns is sometimes said to be Pareto-like.
Step22: Here's the distribution of population for cities and towns in the U.S., along with a Pareto model. The model fits the data well in the tail.
Step23: The lognormal model might be a better fit for this data (as is often the case for things that are supposed to be Pareto).
Step24: Here's a normal probability plot for the log-populations. The model fits the data well except in the right tail, where the biggest cities are bigger than expected.
Step25: Random variates
Step26: We can test it by generating a sample.
Step27: And plotting the CCDF on a log-y scale.
Step28: A straight line is consistent with an exponential distribution.
Step29: For example <tt>scipy.stats.norm</tt> represents a normal distribution.
Step30: A "frozen random variable" can compute its mean and standard deviation.
Step31: It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%
Step32: How many people are between 5'10" and 6'1"?
Step33: Exercise
Step34: What is the mean height in Pareto world?
Step35: What fraction of people are shorter than the mean?
Step36: Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
Step37: How tall do we expect the tallest person to be?
Step38: Exercise
Step39: Exercise
Step40: Worked Example
Step41: Here's what the CDF looks like on a linear scale.
Step42: To check whether a Pareto model describes the data well, I plot the CCDF on a log-log scale.
Step43: For the lognormal model I estimate mu and sigma using percentile-based statistics (median and IQR).
Step44: Here's what the distribution, and fitted model, look like on a log-x scale.
|
15,671 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from sympy import symbols, simplify, sympify, expand
from sympy import init_printing
from sympy import Eq, Function
from clebschVector import ClebschVec
from clebschVector import div, grad, gradPerp, advVec
from common import rho, theta, poisson
from common import displayVec
init_printing()
u_z = symbols('u_z', real = True)
# In reality this is a function, but as it serves only as a dummy it is here defined as a symbol
# This makes it easier to replace
f = symbols('f', real = True)
phi = Function('phi')(rho, theta)
n = Function('n')(rho, theta)
# Symbols for printing
zeta, chi, epsilon = symbols('zeta, chi, epsilon')
nGradPerpPhi = gradPerp(phi)*n
displayVec(nGradPerpPhi)
# The basis-vectors are contravariant => components are covariant
eTheta = ClebschVec(rho=0, theta=1, z=0, covariant=True)
eRho = ClebschVec(rho=1, theta=0, z=0, covariant=True)
B = eTheta^eRho
displayVec(B, 'B')
Blen = B.len()
display(Eq(symbols('B'), Blen))
b = B/(B.len())
displayVec(b, 'b')
gradPerpPhi = gradPerp(phi)
displayVec(gradPerpPhi)
# Normalized B
BTilde = 1
# Defining u_E
ue = - ((gradPerpPhi^b)/BTilde)
displayVec(ue, 'u_E')
ueDotGrad_f = ue*grad(f)
display(ueDotGrad_f)
aRho, aZ, aTheta = symbols('a^rho, a^z, a^theta')
a_Rho, a_Z, a_Theta = symbols('a_rho, a_z, a_theta')
aCov = ClebschVec(rho = a_Rho, z=a_Z, theta = a_Theta, covariant=True)
aCon = ClebschVec(rho = aRho, z=aZ, theta = aTheta, covariant=False)
aCovDotNablaGradPhi = advVec(aCov, nGradPerpPhi)
displayVec(aCovDotNablaGradPhi)
aConDotNablaGradPhi = advVec(aCon, nGradPerpPhi)
displayVec(aConDotNablaGradPhi)
ueDotGradnGradPerpPhi = advVec(ue, nGradPerpPhi)
displayVec(ueDotGradnGradPerpPhi.doitVec())
displayVec(ueDotGradnGradPerpPhi.doitVec().simplifyVec())
div_ueDotGradnGradPerpPhi = div(ueDotGradnGradPerpPhi)
zetaFunc = div_ueDotGradnGradPerpPhi.doit().expand()
display(Eq(zeta, simplify(zetaFunc)))
vortD = div(gradPerp(phi)*n)
display(Eq(symbols('Omega^D'), vortD.doit().expand()))
poissonPhiVortD = Blen*poisson(phi, vortD)
chiFunc = poissonPhiVortD.doit().expand()
display(Eq(chi, chiFunc))
epsilonFunc = (zetaFunc - chiFunc).expand()
display(Eq(epsilon, epsilonFunc))
epsMinusCorrection = epsilonFunc\
-\
(\
(1/rho)*phi.diff(rho)*poisson(phi.diff(rho), n)\
+(1/(rho)**3)*phi.diff(theta)*poisson(phi.diff(theta),n)\
+(1/(rho)**4)*n.diff(theta)*(phi.diff(theta))**2
)
display(epsMinusCorrection.simplify())
xi = (Blen/2)*poisson(ue*ue, n).doit()
epsMinusNewCorr = epsilonFunc - (Blen/2)*poisson(ue*ue, n).doit()
display(epsMinusNewCorr.simplify())
display((ue*ue).doit())
display((zetaFunc - (chiFunc + xi)).simplify())
S = expand(zetaFunc)
strS = str(S)
# phi rho derivatives
strS = strS.replace('Derivative(phi(rho, theta), rho)', 'phi_x')
strS = strS.replace('Derivative(phi(rho, theta), rho, rho)', 'phi_xx')
strS = strS.replace('Derivative(phi(rho, theta), rho, rho, rho)', 'phi_xxx')
# phi theta derivatives
strS = strS.replace('Derivative(phi(rho, theta), theta)', 'phi_z')
strS = strS.replace('Derivative(phi(rho, theta), theta, theta)', 'phi_zz')
strS = strS.replace('Derivative(phi(rho, theta), theta, theta, theta)', 'phi_zzz')
# phi mixed derivatives
strS = strS.replace('Derivative(phi(rho, theta), rho, theta)', 'phi_xz')
strS = strS.replace('Derivative(phi(rho, theta), rho, theta, theta)', 'phi_xzz')
strS = strS.replace('Derivative(phi(rho, theta), rho, rho, theta)', 'phi_xxz')
# Non-derivatives
strS = strS.replace('phi(rho, theta)', 'phi')
# n rho derivatives
strS = strS.replace('Derivative(n(rho, theta), rho)', 'n_x')
strS = strS.replace('Derivative(n(rho, theta), rho, rho)', 'n_xx')
# n theta derivatives
strS = strS.replace('Derivative(n(rho, theta), theta)', 'n_z')
strS = strS.replace('Derivative(n(rho, theta), theta, theta)', 'n_zz')
# n mixed derivatives
strS = strS.replace('Derivative(n(rho, theta), rho, theta)', 'n_xz')
# Non-derivatives
strS = strS.replace('n(rho, theta)', 'n')
newS = sympify(strS)
display(Eq(symbols('S_new'), expand(newS)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calculation of the $E\times B$ advection
Step2: Defining $\mathbf{u}_E$
Step3: NOTE
Step4: Calculation of $\mathbf{u}E\cdot\nabla \left(n\nabla\perp\phi\right)$
Step5: Calculation of $\mathbf{a}\cdot\nabla \left(n\nabla_\perp\phi\right)$
Step6: Using covariant vector
Step7: Using contravariant vector
Step8: Calculation of full $\mathbf{u}E\cdot\nabla \left(n\nabla\perp\phi\right)$
Step9: Calculation of $\nabla\cdot\left(\mathbf{u}E\cdot\nabla\left[n\nabla\perp\phi\right]\right)$
Step10: Comparison with $B{\phi,\Omega^D}$
Step11: We now write $\chi = B{\phi,\Omega^D}$
Step12: The difference $\epsilon$ between $\zeta = \nabla\cdot\left(\mathbf{u}E\cdot\nabla\left[n\nabla\perp\phi\right]\right)$ and $\chi = B{\phi,\Omega^D}$ is given by
Step13: In fact we see that
Step14: What is more interesting is in fact that
Step15: Where
Step16: Note that the last term $\frac{1}{\rho^4}(\partial_\theta n)(\partial_\theta\phi)^2$ does not appear to come from the Poisson bracket. This is however the case, and comes from the part which contains
Step17: Printing for comparison
|
15,672 | <ASSISTANT_TASK:>
Python Code:
from nipype import Node, Workflow
from nipype.interfaces.fsl import SliceTimer, MCFLIRT, Smooth
# Initiate a node to correct for slice wise acquisition
slicetimer = Node(SliceTimer(index_dir=False,
interleaved=True,
time_repetition=2.5),
name="slicetimer")
# Initiate a node to correct for motion
mcflirt = Node(MCFLIRT(mean_vol=True,
save_plots=True),
name="mcflirt")
# Initiate a node to smooth functional images
smooth = Node(Smooth(fwhm=4), name="smooth")
preproc01 = Workflow(name='preproc01', base_dir='.')
preproc01.connect([(slicetimer, mcflirt, [('slice_time_corrected_file', 'in_file')]),
(mcflirt, smooth, [('out_file', 'in_file')])])
preproc01.write_graph(graph2use='orig')
# Visualize graph
from IPython.display import Image
Image(filename="preproc01/graph_detailed.png")
slicetimer.inputs.in_file = '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'
%time preproc01.run('MultiProc', plugin_args={'n_procs': 5})
!tree preproc01 -I '*js|*json|*pklz|_report|*.dot|*html'
smooth.inputs.fwhm = 2
%time preproc01.run('MultiProc', plugin_args={'n_procs': 5})
# First, let's copy/clone 'preproc01'
preproc02 = preproc01.clone('preproc02')
preproc03 = preproc01.clone('preproc03')
preproc04 = preproc01.clone('preproc04')
preproc05 = preproc01.clone('preproc05')
metaflow = Workflow(name='metaflow', base_dir='.')
# Now we can add the five preproc workflows to the bigger metaflow
metaflow.add_nodes([preproc01, preproc02, preproc03,
preproc04, preproc05])
# As before, let's write the graph of the workflow
metaflow.write_graph(graph2use='flat')
# And visualize the graph
from IPython.display import Image
Image(filename="metaflow/graph_detailed.png")
%time metaflow.run('MultiProc', plugin_args={'n_procs': 5})
!tree metaflow -I '*js|*json|*pklz|_report|*.dot|*html'
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, we can import the interfaces that we want to use for the preprocessing.
Step2: Next, we will put the three interfaces into a node and define the specific input parameters.
Step3: After creating the nodes, we can now create the preprocessing workflow.
Step4: Now, we can put all the nodes into this preprocessing workflow. We specify the data flow / execution flow of the workflow by connecting the corresponding nodes to each other.
Step5: To better understand what we did we can write out the workflow graph and visualize it directly in this notebook.
Step6: Run the workflow on one functional image
Step7: To show off Nipype's parallelization power, let's run the workflow in parallel, on 5 processors and let's show the execution time
Step8: Conclusion
Step9: Rerunning of a workflow
Step10: And let's run the workflow again.
Step11: Conclusion
Step12: We now have five different preprocessing workflows. If we want to run them in parallel, we can put them all in another workflow.
Step13: Note
Step14: Ah... so now we can see that the metaflow has potential for parallelization. So let's put it to test
Step15: This time we can see that Nipype uses all available processors.
|
15,673 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.layers import ZeroPadding2D
from keras.layers import Convolution2D
import h5py
from keras import applications
from keras.utils.np_utils import to_categorical
from keras.models import Model
import urllib
import os
import math
df_train = pd.read_excel('Training_Data_Assessment.xlsx')
df_test = pd.read_excel('Data To Classify_Assessment.xlsx')
categories = pd.read_excel('Categories_Assessment.xlsx',header=None,names=['CategoryName'])
df_train.head()
df_train.describe()
df_test.head()
df_test.describe()
categories.head()
unique_categories = categories['CategoryName']
df_train.drop_duplicates('ASIN',inplace=True)
x_df_train, x_df_test, y_df_train, y_df_test = train_test_split(df_train[['Title','ImageUrl']],df_train[['CategoryName']],
train_size=.8, stratify=df_train['CategoryName'],random_state=30)
current_path = os.getcwd()
train_path = os.path.join(current_path,'train')
validation_path = os.path.join(current_path,'validation')
test_path = os.path.join(current_path,'test')
def image_download(x_df,y_df,unique_categories,path,category_column='CategoryName',imageurl_column='ImageUrl'):
for i in x_df.index:
for j in unique_categories:
if y_df[category_column][i] == j:
class_path = os.path.join(path,j)
if not os.path.exists(class_path):
os.makedirs(class_path)
filename = '%s.jpg'%i
fullfilename = os.path.join(class_path,filename)
if not os.path.exists(fullfilename):
try:
urllib.request.urlretrieve(x_df[imageurl_column][i],fullfilename)
except:
print('Did not download:')
print(i)
print(x_df[imageurl_column][i])
pass
return
image_download(x_df=x_df_train,y_df=y_df_train,unique_categories=unique_categories,path=train_path)
image_download(x_df=x_df_test,y_df=y_df_test,unique_categories=unique_categories,path=validation_path)
for i in df_test.index:
class_path = os.path.join(test_path,'New folder')
if not os.path.exists(class_path):
os.makedirs(class_path)
filename = '%s.jpg'%i
fullfilename = os.path.join(class_path,filename)
if not os.path.exists(fullfilename):
try:
urllib.request.urlretrieve(df_test['ImageUrl'][i],fullfilename)
except:
print('Did not download:')
print(i)
print(df_test['ImageUrl'][i])
pass
text_clf_multinomial_nb = Pipeline([('vect', CountVectorizer(stop_words='english',lowercase=True)),('tfidf', TfidfTransformer()),('clf', MultinomialNB())])
text_clf_multinomial_nb.fit(x_df_train['Title'], y_df_train['CategoryName'])
text_clf_multinomial_nb.score(x_df_test['Title'],y_df_test['CategoryName'])
text_clf_sgd = Pipeline([('vect', CountVectorizer(stop_words='english',lowercase=True)),('tfidf', TfidfTransformer()),('clf', SGDClassifier())])
text_clf_sgd.fit(x_df_train['Title'], y_df_train['CategoryName'])
text_clf_sgd.score(x_df_test['Title'],y_df_test['CategoryName'])
text_clf_rf = Pipeline([('vect', CountVectorizer(stop_words='english',lowercase=True)),('tfidf', TfidfTransformer()),('clf', RandomForestClassifier())])
text_clf_rf.fit(x_df_train['Title'], y_df_train['CategoryName'])
text_clf_rf.score(x_df_test['Title'],y_df_test['CategoryName'])
text_clf_logis = Pipeline([('vect', CountVectorizer(stop_words='english',lowercase=True)),('tfidf', TfidfTransformer()),('clf', LogisticRegression())])
text_clf_logis.fit(x_df_train['Title'], y_df_train['CategoryName'])
text_clf_logis.score(x_df_test['Title'],y_df_test['CategoryName'])
# dimensions of our images.
img_width, img_height = 150, 150
top_model_weights_path = 'bottleneck_fc_model1.h5'
train_data_dir = 'train'
validation_data_dir = 'validation'
epochs = 50
batch_size = 32
datagen = ImageDataGenerator(rescale=1. / 255)
model = applications.VGG16(include_top=False, weights='imagenet')
#train generator
train_generator = datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode= 'categorical',
shuffle=False)
#train variables
nb_train_samples = len(train_generator.filenames)
num_classes = len(train_generator.class_indices)
predict_size_train = int(math.ceil(nb_train_samples / batch_size))
#creaing training feature set
bottleneck_features_train = model.predict_generator(
train_generator, predict_size_train)
np.save('bottleneck_features_train',bottleneck_features_train)
#train labels
train_labels = train_generator.classes
train_labels = to_categorical(train_labels,63)
#validation generator
generator = datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode= 'categorical',
shuffle=False)
#test variables
nb_validation_samples = len(generator.filenames)
predict_size_validation = int(math.ceil(nb_validation_samples / batch_size))
#creaing validation feature set
bottleneck_features_validation = model.predict_generator(
generator, predict_size_validation)
np.save('bottleneck_features_validation',bottleneck_features_validation)
#test labels
validation_labels = generator.classes
validation_labels = to_categorical(validation_labels,63)
#loading training and validation feature set generated above
train_data = np.load('bottleneck_features_train.npy')
validation_data = np.load('bottleneck_features_validation.npy')
from keras import optimizers
rmsprop = optimizers.RMSprop(lr=0.001)
top_model = Sequential()
top_model.add(Flatten(input_shape=train_data.shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.6))
top_model.add(Dense(63, activation='softmax'))
top_model.compile(optimizer=rmsprop,loss='categorical_crossentropy', metrics=['accuracy'])
top_model.fit(train_data, train_labels,
epochs=30,
batch_size=batch_size,
validation_data=(validation_data, validation_labels))
top_model.save_weights(top_model_weights_path)
model_vgg = applications.VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
top_model = Sequential()
top_model.add(Flatten(input_shape=model_vgg.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.6))
top_model.add(Dense(63, activation='softmax'))
top_model.load_weights('bottleneck_fc_model1.h5')
model = Model(inputs = model_vgg.input, outputs = top_model(model_vgg.output))
df_test.reset_index(inplace=True)
filename=[]
vgg16_predictions=[]
final_data_dir='test'
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
final_data_dir,
target_size=(img_width, img_height),
class_mode=None,
batch_size=1)
filenames = test_generator.filenames
nb_samples = len(filenames)
predict = model.predict_generator(test_generator,steps = nb_samples)
for f in filenames:
filename.append(f)
for r in predict:
vgg16_predictions.append(np.argmax(r))
ir_res = pd.DataFrame(
{'filename': [f[11:-4] for f in filename],
'category': vgg16_predictions,
})
ir_res.to_csv('ir_res.csv')
len(ir_res)
x=pd.read_csv('ir_res.csv')
index=train_generator.class_indices
y=list(x['category'])
categories_p=[]
for i in y:
for j in index:
if i == index[j]:
categories_p.append(j)
df_test.set_index('ASIN',inplace=True)
a=text_clf_multinomial_nb.predict(df_test['Title'])
b=text_clf_sgd.predict(df_test['Title'])
c=text_clf_rf.predict(df_test['Title'])
d=text_clf_logis.predict(df_test['Title'])
ir_res_new = pd.DataFrame(
{'ASIN': x['filename'],
'CNN Category': categories_p,
})
ir_res_new.set_index('ASIN',inplace=True)
predictions = df_test.join(ir_res_new)
predictions.drop(['Title','ImageUrl','BrandName'],1, inplace=True)
predictions['Multinomial NB Category'] = a
predictions['SVC Category'] = b
predictions['RF Category'] = c
predictions['LogisticR Category'] = d
final_pred =[]
for i in range(len(df_test)):
lst = [a[i],b[i],c[i],d[i],predictions['CNN Category'][i]]
final_pred.append(max(set(lst), key=lst.count))
df_test['Category'] = final_pred
df_test.to_csv('final_prediction.csv')
predictions.to_csv('predictions_allClassifiers.csv')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploratory Data Analysis
Step2: Observations
Step3: Training and Validation Split
Step4: Image Downloading
Step5: Function to download test and validation images
Step6: Download Training Images
Step7: Download Validation Images
Step8: Download Test Images
Step9: Text Classification
Steps in pipeline
Step10: SGD Classifier
Step11: Random Forest Classifier
Step12: Logistic Regression Classifier
Step13: Note
Step14: Building VGG16 network
Step15: Running VGG16 is expensive, as working on CPU. I want to only do it only once. This prevents me from using data augmentation which can improve my model by preventing overfitting.
Step16: Generating Validation Feature Set
Step17: Loading Training and Validation feature set
Step18: Training top classifier model
Step19: Making Final Image Recognition Model
Step20: Predict Category based on Test Images
Step21: Make image classification results into a data frame
Step22: Final Product Classification
Step23: Collecting all predictions from all classifiers
Step24: Take Average of all 5 classifiers
|
15,674 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "tchhabr"
class O:
Basic Class which
- Helps dynamic updates
- Pretty Prints
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
# Few Utility functions
def say(*lst):
Print whithout going to new line
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
Generate a random number between low and high.
decimals incidicate number of decimal places
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
Shuffle a list
random.shuffle(lst)
return lst
class Decision(O):
Class indicating Decision of a problem
def __init__(self, name, low, high):
@param name: Name of the decision
@param low: minimum value
@param high: maximum value
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
Class indicating Objective of a problem
def __init__(self, name, do_minimize=True):
@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized
O.__init__(self, name=name, do_minimize=do_minimize)
class Point(O):
Represents a member of the population
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions)
new.objectives = self.objectives
return new
class Problem(O):
Class representing the cone problem.
def __init__(self):
O.__init__(self)
# TODO 2: Code up decisions and objectives below for the problem
# using the auxilary classes provided above.
self.decisions = None
self.objectives = None
radius = Decision('radius', 0, 10)
height = Decision('height', 0, 20)
self.decisions = [radius, height]
s = Objective('surface')
t = Objective('total area')
self.objectives = [s,t]
@staticmethod
def evaluate(point):
[r, h] = point.decisions
point.objectives = None
# TODO 3: Evaluate the objectives S and T for the point.
l = (r**2 + h**2)**0.5
S = pi * r * l
T = S + pi * r**2
point.objectives = [S, T]
return point.objectives
@staticmethod
def is_valid(point):
[r, h] = point.decisions
# TODO 4: Check if the point has valid decisions
V = pi*(r**2)*h/3
return V > 200
def generate_one(self):
# TODO 5: Generate a valid instance of Point.
while True:
point = Point([random_value(d.low, d.high) for d in self.decisions])
if Problem.is_valid(point):
return point
cone = Problem()
point = cone.generate_one()
cone.evaluate(point)
print(point)
def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
    return [problem.generate_one() for _ in range(size)]  # range (not xrange) keeps this Python 3 compatible
print (populate(cone,5))
def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
pop = populate(cone,5)
crossover(pop[0], pop[1])
def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the problem
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
for i, d in enumerate(problem.decisions):
if random.random() < mutation_rate:
point.decisions[i] = random_value(d.low, d.high)
return point
print (mutate(cone,point,0.1))
obs = populate(cone,5)
print (obs)
def bdom(problem, one, two):
Return if one dominates two
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
if (one == two):
return False
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
first = True
second = False
for i,_ in enumerate(problem.objectives):
if ((first is True) & gt(one.objectives[i], two.objectives[i])):
first = False
elif (not second & (one.objectives[i] is not two.objectives[i])):
second = True
dominates = first & second
return dominates
print (bdom(cone,obs[4],obs[4]))
def fitness(problem, population, point):
dominates = 0
# TODO 10: Evaluate fitness of a point.
# For this workshop define fitness of a point
# as the number of points dominated by it.
# For example point dominates 5 members of population,
# then fitness of point is 5.
for pop in population:
if bdom(problem, point, pop):
dominates += 1
return dominates
def elitism(problem, population, retain_size):
# TODO 11: Sort the population with respect to the fitness
# of the points and return the top 'retain_size' points of the population
    fit_pop = [fitness(problem, population, pop) for pop in population]  # use the problem argument, not the global 'cone'
population = [pop for _,pop in sorted(zip(fit_pop,population), reverse = True)]
return population[:retain_size]
def ga(pop_size = 100, gens = 250):
problem = Problem()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size)
gen += 1
print("")
return initial_population, population
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga()
plot_pareto(initial, final)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Genetic Algorithm Workshop
Step11: The optimization problem
Step12: Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Step13: Crossover
Step14: Mutation
Step16: Fitness Evaluation
Step17: Fitness and Elitism
Step18: Putting it all together and making the GA
Step19: Visualize
|
15,675 | <ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Suite
import pandas as pd
import numpy as np
import thinkplot
class BayesTable(pd.DataFrame):
def __init__(self, hypo, prior=1, **options):
columns = ['prior', 'likelihood', 'unnorm', 'posterior']
super().__init__(index=hypo, columns=columns, **options)
self.prior = prior
def mult(self):
self.unnorm = self.prior * self.likelihood
def norm(self):
nc = np.sum(self.unnorm)
self.posterior = self.unnorm / nc
return nc
def update(self):
self.mult()
return self.norm()
def reset(self):
        return BayesTable(self.index, self.posterior)  # hypotheses are stored in the table index
base_rate = 2.65 / 1000
hypo = ['cancer', 'no cancer']
prior = [base_rate, 1-base_rate]
table = BayesTable(hypo, prior)
sensitivity = 0.734
specificity = 0.877
table.likelihood = [sensitivity, 1-specificity]
table
likelihood_ratio = table.likelihood['cancer'] / table.likelihood['no cancer']
table.update()
table
table.posterior['cancer'] * 100
def compute_ppv(base_rate, sensitivity, specificity):
pmf = Pmf()
pmf['cancer'] = base_rate * sensitivity
pmf['no cancer'] = (1 - base_rate) * (1 - specificity)
pmf.Normalize()
return pmf
pmf = compute_ppv(base_rate, sensitivity, specificity)
ages = [40, 50, 60, 70, 80]
rates = pd.Series([2.65, 4.28, 5.70, 6.76, 8.51], index=ages)
for age, rate in rates.items():
pmf = compute_ppv(rate, sensitivity, specificity)
print(age, pmf['cancer'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interpreting medical tests
Step2: Assumptions and interpretation
Step3: So there is a 1.56% chance that this patient has cancer, given that the initial screening mammogram was positive.
|
15,676 | <ASSISTANT_TASK:>
Python Code:
import json
my_tweets = json.load(open('my_tweets.json'))
for id_, tweet_info in my_tweets.items():
print(id_, tweet_info)
break
def run_vader(nlp,
textual_unit,
lemmatize=False,
parts_of_speech_to_consider=set(),
verbose=0):
Run VADER on a sentence from spacy
:param str textual unit: a textual unit, e.g., sentence, sentences (one string)
(by looping over doc.sents)
:param bool lemmatize: If True, provide lemmas to VADER instead of words
:param set parts_of_speech_to_consider:
-empty set -> all parts of speech are provided
-non-empty set: only these parts of speech are considered
:param int verbose: if set to 1, information is printed
about input and output
:rtype: dict
:return: vader output dict
doc = nlp(textual_unit)
input_to_vader = []
for sent in doc.sents:
for token in sent:
if verbose >= 2:
print(token, token.pos_)
to_add = token.text
if lemmatize:
to_add = token.lemma_
if to_add == '-PRON-':
to_add = token.text
if parts_of_speech_to_consider:
if token.pos_ in parts_of_speech_to_consider:
input_to_vader.append(to_add)
else:
input_to_vader.append(to_add)
scores = vader_model.polarity_scores(' '.join(input_to_vader))
if verbose >= 1:
print()
print('INPUT SENTENCE', sent)
print('INPUT TO VADER', input_to_vader)
print('VADER OUTPUT', scores)
return scores
def vader_output_to_label(vader_output):
map vader output e.g.,
{'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.4215}
to one of the following values:
a) positive float -> 'positive'
b) 0.0 -> 'neutral'
c) negative float -> 'negative'
:param dict vader_output: output dict from vader
:rtype: str
:return: 'negative' | 'neutral' | 'positive'
compound = vader_output['compound']
if compound < 0:
return 'negative'
elif compound == 0.0:
return 'neutral'
elif compound > 0.0:
return 'positive'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.0}) == 'neutral'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.01}) == 'positive'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': -0.01}) == 'negative'
import spacy
! python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
vader_model = SentimentIntensityAnalyzer()
my_annotation = 'positive' # what you annotate yourself
sentence = "I like Python"
vader_output = run_vader(nlp, sentence)
vader_label = vader_output_to_label(vader_output)
accurate = my_annotation == vader_label
print()
print('SENTENCE', sentence) # the sentence
print('VADER OUTPUT', vader_output) # the VADER output
print('VADER LABEL', vader_label) # the VADER output mapped to a label, in this case 'positive'
print('MY ANNOTATION', my_annotation) # my annotation
print('ACCURACY', accurate) # did VADER predict the same label as my manual annotation?
import json
my_tweets = json.load(open('my_tweets.json'))
tweets = []
all_vader_output = []
manual_annotation = []
for id_, tweet_info in my_tweets.items():
the_tweet = tweet_info['text_of_tweet']
    vader_output = run_vader(nlp, the_tweet)              # run vader on the tweet text
    vader_label = vader_output_to_label(vader_output)     # convert vader output to a category label
tweets.append(the_tweet)
all_vader_output.append(vader_label)
manual_annotation.append(tweet_info['sentiment_label'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Exercise 3
Step4: Exercise 3a
|
15,677 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import linalg
import mne
from mne.datasets import sample
from mne.viz import plot_sparse_source_estimates
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
condition = 'Left Auditory'
# Read noise covariance matrix
noise_cov = mne.read_cov(cov_fname)
# Handling average file
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0.04, tmax=0.18)
evoked = evoked.pick_types(eeg=False, meg=True)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):
Call a custom solver on evoked data.
This function does all the necessary computation:
- to select the channels in the forward given the available ones in
the data
- to take into account the noise covariance and do the spatial whitening
- to apply loose orientation constraint as MNE solvers
- to apply a weigthing of the columns of the forward operator as in the
weighted Minimum Norm formulation in order to limit the problem
of depth bias.
Parameters
----------
solver : callable
The solver takes 3 parameters: data M, gain matrix G, number of
dipoles orientations per location (1 or 3). A solver shall return
2 variables: X which contains the time series of the active dipoles
and an active set which is a boolean mask to specify what dipoles are
present in X.
evoked : instance of mne.Evoked
The evoked data
forward : instance of Forward
The forward solution.
noise_cov : instance of Covariance
The noise covariance.
loose : float in [0, 1] | 'auto'
Value that weights the source variances of the dipole components
that are parallel (tangential) to the cortical surface. If loose
is 0 then the solution is computed with fixed orientation.
If loose is 1, it corresponds to free orientations.
The default value ('auto') is set to 0.2 for surface-oriented source
space and set to 1.0 for volumic or discrete source space.
depth : None | float in [0, 1]
Depth weighting coefficients. If None, no depth weighting is performed.
Returns
-------
stc : instance of SourceEstimate
The source estimates.
# Import the necessary private functions
from mne.inverse_sparse.mxne_inverse import \
(_prepare_gain, _check_loose_forward, is_fixed_orient,
_reapply_source_weighting, _make_sparse_stc)
all_ch_names = evoked.ch_names
loose, forward = _check_loose_forward(loose, forward)
# Handle depth weighting and whitening (here is no weights)
gain, gain_info, whitener, source_weighting, mask = _prepare_gain(
forward, evoked.info, noise_cov, pca=False, depth=depth,
loose=loose, weights=None, weights_min=None)
# Select channels of interest
sel = [all_ch_names.index(name) for name in gain_info['ch_names']]
M = evoked.data[sel]
# Whiten data
M = np.dot(whitener, M)
n_orient = 1 if is_fixed_orient(forward) else 3
X, active_set = solver(M, gain, n_orient)
X = _reapply_source_weighting(X, source_weighting, active_set)
stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0],
tstep=1. / evoked.info['sfreq'])
return stc
def solver(M, G, n_orient):
Run L2 penalized regression and keep 10 strongest locations.
Parameters
----------
M : array, shape (n_channels, n_times)
The whitened data.
G : array, shape (n_channels, n_dipoles)
The gain matrix a.k.a. the forward operator. The number of locations
is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation
constraint or 3 when using a free orientation model.
n_orient : int
Can be 1 or 3 depending if one works with fixed or free orientations.
If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that
are normal to the cortex.
Returns
-------
X : array, (n_active_dipoles, n_times)
The time series of the dipoles in the active set.
active_set : array (n_dipoles)
Array of bool. Entry j is True if dipole j is in the active set.
We have ``X_full[active_set] == X`` where X_full is the full X matrix
such that ``M = G X_full``.
K = linalg.solve(np.dot(G, G.T) + 1e15 * np.eye(G.shape[0]), G).T
K /= np.linalg.norm(K, axis=1)[:, None]
X = np.dot(K, M)
indices = np.argsort(np.sum(X ** 2, axis=1))[-10:]
active_set = np.zeros(G.shape[1], dtype=bool)
for idx in indices:
idx -= idx % n_orient
active_set[idx:idx + n_orient] = True
X = X[active_set]
return X, active_set
# loose, depth = 0.2, 0.8 # corresponds to loose orientation
loose, depth = 1., 0. # corresponds to free orientation
stc = apply_solver(solver, evoked, forward, noise_cov, loose, depth)
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Auxiliary function to run the solver
Step4: Define your solver
Step5: Apply your custom solver
Step6: View in 2D and 3D ("glass" brain like 3D plot)
|
15,678 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
def LSCE(x, y):
beta_1 = np.sum((x - np.mean(x))*(y-np.mean(y))) / np.sum((x-np.mean(x))*(x-np.mean(x)))
beta_0 = np.mean(y) - beta_1 * np.mean(x)
return beta_0, beta_1
advertising = pd.read_csv('Advertising.csv',index_col=0)
tv = advertising['TV']
sales = advertising['Sales']
beta_0, beta_1 = LSCE(tv,sales)
x = np.linspace(-10,310,1000)
y = beta_1 * x + beta_0
plt.scatter(tv, sales, marker='+')
plt.plot(x, y,c='k')
plt.xlim(-10,310)
plt.show()
beta_1 = 3
beta_0 = 2
random = np.random.normal(size=100, loc=0, scale=1)
X = np.linspace(-2,2,500)
X = np.random.choice(X, size=100, replace=False)
Y = X*beta_1+beta_0 +random
y_true = X*beta_1+beta_0
beta_0_, beta_1_ = LSCE(X, Y)
y_predict = X *beta_1_ + beta_0_
plt.scatter(X,Y)
plt.plot(X,y_true, c='g')
plt.plot(X, y_predict, c='r')
plt.show()
# calculate the parameter
from numpy.linalg import inv
X = advertising[['TV','Radio','Newspaper']].values
Y = advertising['Sales'].values
X = np.hstack((X, np.full((len(Y),1), 1.0)))
beta = inv(X.T.dot(X)).dot(X.T).dot(Y)
print ('the parameters are: ',beta[0], beta[1], beta[2], beta[-1])
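# Cross-check (my addition, not part of the original analysis): the closed-form
# normal-equation estimate above should agree with numpy's pseudo-inverse solution.
beta_check = np.linalg.pinv(X).dot(Y)
print ('pinv parameters are: ', beta_check)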
# calculate the correlation
# X = advertising[['TV', 'Radio','Newspaper','Sales']].values
# X_mean = np.mean(X,axis=0)
# X -= X_mean
# numerator =X.T.dot(X)
# XX = X*X
# XX = np.sum(XX, axis=0)
# denumorator = np.sqrt((XX.T.dot(XX)))
# numerator/denumorator
advertising.corr()
auto = pd.read_table('Auto',sep='\s+')
rows=np.sum(auto.values=='?',axis=1)
delete_rows = []
for idx,_ in enumerate(rows):
if _!=0:
delete_rows.append(idx)
auto=auto.drop(auto.index[delete_rows])
data = auto[['mpg','horsepower']]
horsepower= data['horsepower'].values.astype('float')
data['horsepower_2'] = horsepower * horsepower
data['beta_0'] = np.full(horsepower.shape,1.0)
# plot the scatter
plt.scatter(horsepower, auto['mpg'])
# calculate linear fit
X = data[['horsepower','beta_0']].values.astype('float')
y = data['mpg'].values.astype('float').reshape(X.shape[0],1)
beta_linear =inv(X.T.dot(X)).dot(X.T).dot(y)
X = data[['horsepower','horsepower_2','beta_0']].values.astype('float')
beta_linear2 = inv(X.T.dot(X)).dot(X.T).dot(y)
x = np.linspace(40,230, 500)
y_linear = x*beta_linear[0] + beta_linear[1]
y_linear2 = x*beta_linear2[0] + x*x*beta_linear2[1] +beta_linear2[2]
plt.plot(x, y_linear, c='b')
plt.plot(x, y_linear2, c='g')
plt.show()
horsepower = data['horsepower'].values.astype('float')
mpg = data['mpg'].values.astype('float')
residual_linear = mpg - (horsepower*beta_linear[0]+beta_linear[1])
plt.scatter(mpg, residual_linear)
plt.show()
residual_quadratic = mpg - (horsepower*beta_linear2[0]+horsepower*horsepower*beta_linear2[1]+beta_linear2[-1])
plt.scatter(mpg, residual_quadratic)
plt.show()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import inv
auto = pd.read_table('Auto',sep='\s+')
rows=np.sum(auto.values=='?',axis=1)
delete_rows = []
for idx,_ in enumerate(rows):
if _!=0:
delete_rows.append(idx)
auto=auto.drop(auto.index[delete_rows])
horsepower= auto['horsepower'].values.astype('float')
auto['ones'] = np.full(horsepower.shape, 1.0)
X = auto[['horsepower','ones']].values.astype('float')
y = auto['mpg'].values.astype('float').reshape(X.shape[0],1)
beta_linear =inv(X.T.dot(X)).dot(X.T).dot(y)
print('β0 :',beta_linear[-1][0])
print('β1 :', beta_linear[0][0])
sample_num = len(y)
residual = np.power(X.dot(beta_linear)-y,2).sum()
sigma = np.sqrt(residual/(sample_num-2))
horsepower_98 = np.array([[98.0,1.0]])
mpg_98 = horsepower_98.dot(beta_linear)[0,0]
mpg_98_uppper_bound = mpg_98+2*sigma
mpg_98_lower_bound = mpg_98-2*sigma
print('predict value is %f when horsepower is 98'%mpg_98)
print('The range is [%f,%f]' %(mpg_98_lower_bound,mpg_98_uppper_bound))
mpg = auto['mpg'].values.astype('float')
plt.scatter(horsepower, mpg)
plt.show()
auto_cor=auto[['mpg','displacement','horsepower','weight','acceleration']]
auto_cor.corr()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The difference between the population regression line and the least squares line may seem quite confusing. The answer is that we are using a sample to estimate the characteristics of a large population.
Step2: 2.1 Important Questions
Step3: 3.6 Potential Problems
Step4: 3.6.2 Correlation of Error Terms
|
15,679 | <ASSISTANT_TASK:>
Python Code:
import math
def isPower(x , y ) :
    # y is a power of x exactly when log(y)/log(x) is an integer,
    # i.e. when the floor division below equals the true division
    res1 = math . log(y ) // math . log(x )
    res2 = math . log(y ) / math . log(x )
    return(res1 == res2 )
def check(n ) :
x =(n + 7 ) // 8
if(( n + 7 ) % 8 == 0 and isPower(10 , x ) ) :
return True
else :
return False
n = 73
if(check(n ) != 0 ) :
print("Yes ")
else :
print("No ")
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,680 | <ASSISTANT_TASK:>
Python Code:
# make our x array
x = np.linspace(-4, 4, 801)
# f(x) = x^2
def f(x):
return x**2
# derivative of x^2 is 2x
def f_prime(x):
return 2*x
# take a look at the curve
plt.plot(x, f(x), c='black')
sns.despine();
# starting position on the curve
x_start = -4.0
# looking at the values of the derivative, for each value of x.
# we see the greatest change at the tops of the curve, namely 4 and -4
plt.plot(x, f_prime(x), c='black');
# learning rate
alpha = 0.2
# let's take our first step!
step1 = alpha*f_prime(x_start)
# our new value of x is just the previous value, minus the step
next_x = x_start - step1
# take a look at the step we took, with respect to the curve
plt.plot(x, f(x), c='black')
plt.scatter([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.plot([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.xlim((-4, 4))
plt.ylim((0, 16))
sns.despine();
another_x = next_x - alpha*f_prime(next_x)
# take a look at the combination of the two steps we've taken
plt.plot(x, f(x), c='black')
plt.plot([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.scatter([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.plot([x_start, next_x, another_x], [f(x_start), f(next_x), f(another_x)], c='red')
plt.scatter([x_start, next_x, another_x], [f(x_start), f(next_x), f(another_x)], c='red')
plt.xlim((-4, 4))
plt.ylim((0, 16))
sns.despine();
# how many steps we're going to take in our Descent
num_steps = 101
# hold our steps, including our initial starting position
x_steps = [x_start]
# do num_steps iterations
for i in xrange(num_steps):
prev_x = x_steps[i]
new_x = prev_x - alpha*f_prime(prev_x)
x_steps.append(new_x)
# plot the gradient descent as we go down the curve
plt.plot(x, f(x), c='black')
plt.plot(x_steps, [f(xi) for xi in x_steps], c='red')
plt.scatter(x_steps, [f(xi) for xi in x_steps], c='red')
plt.xlim((-4, 4))
plt.ylim((0, 16))
sns.despine();
# check the size of the derivative when we finished the iteration
print 'gradient at the end of our interations:', x_steps[-1]
# it's zero, for all intents and purposes
print 'Is the derivative effectively equal to zero at the bottom?', np.isclose(x_steps[-1], 0.0)
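# What if alpha were too large? A hedged illustration (alpha_big is a made-up
# value, not from the analysis above): with f'(x) = 2x, any alpha > 1 makes each
# step overshoot the minimum, so the iterates grow in magnitude instead of shrinking.
alpha_big = 1.1
x_big = [x_start]
for i in xrange(10):
    x_big.append(x_big[-1] - alpha_big*f_prime(x_big[-1]))
x_big[:5]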
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's assume we start at the top of the curve, at x = -4, and want to get down to x=0.
Step2: In this algorithm, alpha is known as the "learning rate", and all it does is keep us from taking steps that are too aggressive, where we could shoot past the minimum.
Step3: Here we can see that we took a pretty big step towards the minimum, just as we'd like.
Step4: At this point, we've taken two steps in our Gradient Descent, and we've gone from x=-4 all the way to x=-1.44. Every additional step we take is going to give us smaller and smaller returns, so instead of writing out each additional step, let's do this programmatically.
|
15,681 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import seaborn as sns
import pandas as pd
from matplotlib import pyplot as plt, animation
%matplotlib notebook
#%matplotlib inline
sns.set_context("paper")
# interactive imports
import plotly
import cufflinks as cf
cf.go_offline(connected=True)
plotly.offline.init_notebook_mode(connected=True)
class RandomVar:
def __init__(self, probs):
self.values = np.arange(len(probs))
self.probs = probs
def pick(self, n=1):
return np.random.choice(self.values, p=self.probs)
coin = RandomVar([0.5, 0.5])
coin.pick()
biased_coin = RandomVar([0.1, 0.9])
biased_coin.pick()
die = RandomVar([1/6]*6)
die.pick()
# information content for a target probability
def info_content(p_x):
return -np.log2(p_x)
# entropy of a random variable probability distribution
def entropy(p_x):
return -sum(p_x*np.log2(p_x))
entropy([1/8]*8)
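# A uniform distribution maximises entropy for a discrete random variable; here is
# a quick numerical check with made-up probabilities (values chosen for illustration only).
entropy([0.25]*4), entropy([0.7, 0.1, 0.1, 0.1])  # the uniform case should be larger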
# log function
x = np.linspace(0.00001, 2, 100)
plt.plot(x, np.log(x), label='Log')
plt.legend()
plt.show()
#log of product equals sum of logs
n = 10
a = np.random.random_sample(n)  # uncommented: a and b are needed by the plots below
b = np.random.random_sample(n)
plt.plot(a, label='a')
plt.plot(b, label='b')
plt.plot(np.log(a), label='log(a)')
plt.plot(np.log(b), label='log(b)')
plt.plot(np.log(a)+np.log(b), label='log(a)+log(b)')
plt.plot(np.log(a*b), label='log(a*b)')
plt.legend()
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Information Theory
Step2: Maximum entropy for a discrete random variable is obtained with a uniform distribution. For a continuous random variable we have an equivalent increase in entropy for an increase in the variance.
|
15,682 | <ASSISTANT_TASK:>
Python Code:
from pseudo_spectral_projection import gauss_quads
gauss_nodes = [nodes for nodes, _ in gauss_quads]
from monte_carlo_integration import sobol_samples
sobol_nodes = [sobol_samples[:, :nodes.shape[1]] for nodes in gauss_nodes]
from matplotlib import pyplot
pyplot.rc("figure", figsize=[12, 4])
pyplot.subplot(121)
pyplot.scatter(*gauss_nodes[4])
pyplot.title("Gauss quadrature nodes")
pyplot.subplot(122)
pyplot.scatter(*sobol_nodes[4])
pyplot.title("Sobol nodes")
pyplot.show()
import numpy
from problem_formulation import model_solver
gauss_evals = [
numpy.array([model_solver(node) for node in nodes.T])
for nodes in gauss_nodes
]
sobol_evals = [
numpy.array([model_solver(node) for node in nodes.T])
for nodes in sobol_nodes
]
from problem_formulation import coordinates
pyplot.subplot(121)
pyplot.plot(coordinates, gauss_evals[4].T, alpha=0.3)
pyplot.title("Gauss evaluations")
pyplot.subplot(122)
pyplot.plot(coordinates, sobol_evals[4].T, alpha=0.3)
pyplot.title("Sobol evaluations")
pyplot.show()
import chaospy
from problem_formulation import joint
expansions = [chaospy.generate_expansion(order, joint)
for order in range(1, 10)]
expansions[0].round(10)
gauss_model_approx = [
chaospy.fit_regression(expansion, samples, evals)
for expansion, samples, evals in zip(expansions, gauss_nodes, gauss_evals)
]
sobol_model_approx = [
chaospy.fit_regression(expansion, samples, evals)
for expansion, samples, evals in zip(expansions, sobol_nodes, sobol_evals)
]
pyplot.subplot(121)
model_approx = gauss_model_approx[4]
evals = model_approx(*gauss_nodes[1])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Gaussian approximation")
pyplot.subplot(122)
model_approx = sobol_model_approx[1]
evals = model_approx(*sobol_nodes[1])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Sobol approximation")
pyplot.show()
expected = chaospy.E(gauss_model_approx[-2], joint)
std = chaospy.Std(gauss_model_approx[-2], joint)
expected[:4].round(4), std[:4].round(4)
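# Optional extra (my addition, not in the original tutorial): chaospy also exposes
# variance-based sensitivity measures. Assumption: the installed chaospy provides
# chaospy.Sens_m for first-order Sobol indices; skip this cell if your version does not.
first_order_sobol = chaospy.Sens_m(gauss_model_approx[-2], joint)
first_order_sobol.shape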
pyplot.rc("figure", figsize=[6, 4])
pyplot.xlabel("coordinates")
pyplot.ylabel("model approximation")
pyplot.fill_between(
coordinates, expected-2*std, expected+2*std, alpha=0.3)
pyplot.plot(coordinates, expected)
pyplot.show()
from problem_formulation import error_in_mean, error_in_variance
error_in_mean(expected), error_in_variance(std**2)
sizes = [nodes.shape[1] for nodes in gauss_nodes]
eps_gauss_mean = [
error_in_mean(chaospy.E(model, joint))
for model in gauss_model_approx
]
eps_gauss_var = [
error_in_variance(chaospy.Var(model, joint))
for model in gauss_model_approx
]
eps_sobol_mean = [
error_in_mean(chaospy.E(model, joint))
for model in sobol_model_approx
]
eps_sobol_var = [
error_in_variance(chaospy.Var(model, joint))
for model in sobol_model_approx
]
pyplot.rc("figure", figsize=[12, 4])
pyplot.subplot(121)
pyplot.title("Error in mean")
pyplot.loglog(sizes, eps_gauss_mean, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_mean, "--", label="Sobol")
pyplot.legend()
pyplot.subplot(122)
pyplot.title("Error in variance")
pyplot.loglog(sizes, eps_gauss_var, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_var, "--", label="Sobol")
pyplot.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The number of Sobol samples to use at each order is arbitrary, but for
Step2: Evaluating model solver
Step3: Select polynomial expansion
Step4: Solve the linear regression problem
Step5: Descriptive statistics
Step6: Error analysis
Step7: The analysis can be performed as follows
|
15,683 | <ASSISTANT_TASK:>
Python Code:
import larch, pandas, os, gzip
larch.__version__
from larch.data_warehouse import example_file
with gzip.open(example_file("arc"), 'rt') as previewfile:
print(*(next(previewfile) for x in range(70)))
itin = pandas.read_csv(example_file("arc"), index_col=['id_case','id_alt'])
itin.info()
itin.head()
d = larch.DataFrames(itin, ch='choice', crack=True, autoscale_weights=True)
d.info(1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The example itinerary choice described here is based on data derived from a ticketing database
Step2: The first line of the file contains column headers. After that, each line represents
|
15,684 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
%precision 6
import warnings
warnings.filterwarnings('ignore')
from thinkbayes2 import Pmf, Cdf
import thinkplot
import numpy as np
from numpy.fft import fft, ifft
from inspect import getsourcelines
def show_code(func):
lines, _ = getsourcelines(func)
for line in lines:
print(line, end='')
d6 = Pmf()
for x in range(1, 7):
d6[x] = 1
d6.Print()
show_code(Pmf.Normalize)
d6.Normalize()
d6.Print()
d6[3]
d6[7]
# Solution
die = Pmf(dict(red=2, blue=4))
die.Normalize()
die.Print()
show_code(Pmf.__getitem__)
show_code(Pmf.Mean)
show_code(Pmf.Var)
d6.Mean(), d6.Var()
show_code(Pmf.Expect)
mu = d6.Mean()
d6.Expect(lambda x: (x-mu)**3)
show_code(Pmf.AddPmf)
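# A hand-rolled sketch of the same idea (assumption: this mirrors what Pmf.AddPmf
# does internally, as described in the accompanying notes): convolve two Pmfs by
# looping over all outcome pairs and accumulating the probability of each sum.
def add_pmfs_sketch(pmf1, pmf2):
    total = Pmf()
    for v1, p1 in pmf1.Items():
        for v2, p2 in pmf2.Items():
            total[v1 + v2] += p1 * p2
    return total

add_pmfs_sketch(d6, d6).Print()  # should match (d6 + d6).Print()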
thinkplot.Pdf(d6)
twice = d6 + d6
thinkplot.Pdf(twice, color='green')
thrice = twice + d6
thinkplot.Pdf(d6)
thinkplot.Pdf(twice, color='green')
thinkplot.Pdf(thrice, color='red')
# Solution
dice = die + die
dice.Print()
show_code(Cdf.__init__)
cdf = Cdf(thrice)
cdf.Print()
thinkplot.Cdf(cdf);
show_code(Cdf.Probs)
cdf.Probs((2, 10, 18))
cdf[5]
# Solution
1 - cdf[14]
show_code(Cdf.Values)
cdf.Values((0.1, 0.5, 0.9))
show_code(Cdf.Sample)
cdf.Sample(1)
cdf.Sample(6)
cdf.Sample((2, 2))
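# The idea behind Sample, sketched by hand (assumption: the Cdf stores its sorted
# values in .xs and cumulative probabilities in .ps, as the exercise solutions below
# do): draw a uniform probability, then binary-search for the matching value.
ps_random = np.random.random(5)
[cdf.xs[np.searchsorted(cdf.ps, p)] for p in ps_random]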
# Solution
def iqr(cdf):
values = cdf.Values((0.25, 0.75))
return np.diff(values)[0]
iqr(cdf)
show_code(Cdf.Max)
best = cdf.Max(6)
thinkplot.Cdf(best);
best[10]
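# Equivalent direct construction (reusing the same .xs/.ps attributes as the
# exercise solution below): raising the cumulative probabilities to the 6th power
# should reproduce cdf.Max(6).
best_direct = Cdf(cdf.xs, cdf.ps**6)
best_direct[10], best[10]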
# Solution
def Min(cdf, k):
return Cdf(cdf.xs, 1 - (1-cdf.ps)**k)
worst = Min(cdf, 6)
thinkplot.Cdf(worst);
import matplotlib.pyplot as plt
class CharFunc:
def __init__(self, hs):
        """Initializes the CF.

        hs: NumPy array of complex
        """
self.hs = hs
def __mul__(self, other):
        """Computes the elementwise product of two CFs."""
return CharFunc(self.hs * other.hs)
def make_pmf(self, thresh=1e-11):
        """Converts a CF to a PMF.

        Values with probabilities below `thresh` are dropped.
        """
ps = ifft(self.hs)
d = dict((i, p) for i, p in enumerate(ps.real) if p > thresh)
return Pmf(d)
def plot_cf(self, **options):
        """Plots the real and imaginary parts of the CF."""
n = len(self.hs)
xs = np.arange(-n//2, n//2)
hs = np.roll(self.hs, len(self.hs) // 2)
plt.plot(xs, hs.real, label='real', **options)
plt.plot(xs, hs.imag, label='imag', **options)
plt.legend()
def compute_fft(d, n=256):
    """Computes the FFT of a PMF of integers.

    Values must be integers less than `n`.
    """
xs, freqs = zip(*d.items())
ps = np.zeros(256)
ps[xs,] = freqs
hs = fft(ps)
return hs
hs = compute_fft(thrice.d)
cf = CharFunc(hs)
cf.plot_cf()
show_code(CharFunc.make_pmf)
thinkplot.Pdf(cf.make_pmf())
show_code(CharFunc.__mul__)
sixth = (cf * cf).make_pmf()
thinkplot.Pdf(sixth)
sixth.Print()
sixth.Mean(), sixth.Var()
#Solution
n = len(cf.hs)
mags = np.abs(cf.hs)
plt.plot(np.roll(mags, n//2))
None
# The result approximates a Gaussian curve because
# the PMF is approximately Gaussian and the FT of a
# Gaussian is also Gaussian
class Dist(Pmf, Cdf, CharFunc):
def __init__(self, d):
        """Initializes the Dist.

        Calls all three __init__ methods.
        """
Pmf.__init__(self, d)
Cdf.__init__(self, d)
CharFunc.__init__(self, compute_fft(d))
def __add__(self, other):
        """Computes the distribution of the sum using Pmf.__add__."""
pmf = Pmf.__add__(self, other)
return Dist(pmf.d)
def __mul__(self, other):
        """Computes the distribution of the sum using CharFunc.__mul__."""
pmf = CharFunc.__mul__(self, other).make_pmf()
return Dist(pmf.d)
dist = Dist(sixth.d)
thinkplot.Pdf(dist)
dist[21]
dist.Mean(), dist.Var()
dist.ValueArray((0.25, 0.5, 0.75))
dist.Probs((18, 21, 24))
dist.Sample(10)
thinkplot.Cdf(dist.Max(6));
twelfth = dist + dist
thinkplot.Pdf(twelfth)
twelfth.Mean()
twelfth_fft = dist * dist
thinkplot.Pdf(twelfth_fft)
twelfth_fft.Mean()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Playing dice with the universe
Step2: Initially the "probabilities" are all 1, so the total probability in the Pmf is 6, which doesn't make a lot of sense. In a proper, meaningful, PMF, the probabilities add up to 1, which implies that one outcome, and only one outcome, will occur (for any given roll of the die).
Step3: Normalize adds up the probabilities in the PMF and divides through by the total. The result is a Pmf with probabilities that add to 1.
Step4: The fundamental operation provided by a Pmf is a "lookup"; that is, we can look up an outcome and get the corresponding probability. Pmf provides __getitem__, so we can use bracket notation to look up an outcome
Step5: And if you look up a value that's not in the Pmf, the probability is 0.
Step6: Exercise
Step7: Is that all there is?
Step8: Python dictionaries are implemented using hash tables, so we expect __getitem__ to be fast. In terms of algorithmic complexity, it is constant time, or $O(1)$.
Step9: This implementation is efficient, in the sense that it is $O(n)$, and because it uses a comprehension to traverse the outcomes, the overhead is low.
Step10: And here's how they are used
Step11: The structure of Mean and Var is the same
Step12: As an example, we can use Expect to compute the third central moment of the distribution
Step13: Because the distribution is symmetric, the third central moment is 0.
Step14: The outer loop traverses the outcomes and probabilities of the first Pmf; the inner loop traverses the second Pmf. Each time through the loop, we compute the sum of the outcome pair, v1 and v2, and the probability that the pair occurs.
Step15: When we use the + operator, Python invokes __add__, which invokes AddPmf, which returns a new Pmf object. Here's the Pmf that represents the sum of two dice
Step16: And here's the Pmf that represents the sum of three dice.
Step17: As we add up more dice, the result converges to the bell shape of the Gaussian distribution.
Step18: Cumulative probabilities
Step19: xs is the sorted list of values, and freqs are their frequencies or probabilities.
Step20: Because we have to sort the values, the time to compute a Cdf is $O(n \log n)$.
Step21: The range of the CDF is always from 0 to 1.
Step22: The details here are a little tricky because we have to deal with some "off by one" problems, and if any of the values are less than the smallest value in the Cdf, we have to handle that as a special case. But the basic idea is simple, and the implementation is efficient.
Step23: Cdf also provides __getitem__, so we can use brackets to look up a single value
Step24: Exercise
Step25: Reverse lookup
Step26: And here's an example that finds the 10th, 50th, and 90th percentiles
Step27: The Cdf representation is also good at generating random samples, by choosing a probability uniformly from 0 to 1 and finding the corresponding value. Here's the method Cdf provides
Step28: The result is a NumPy array with the given shape. The time to generate each random choice is $O(\log n)$
Step29: Exercise
Step30: Max and min
Step31: To get the distribution of the maximum, we make a new Cdf with the same values as the original, and with the ps raised to the kth power. Simple, right?
Step32: So the chance of generating a character whose best property is 10 is less than 2%.
Step37: Characteristic function
Step39: The attribute, hs, is the Fourier transform of the Pmf, represented as a NumPy array of complex numbers.
Step40: fft computes the Fast Fourier Transform (FFT), which is called "fast" because the run time is $O(n \log n)$.
Step41: The characteristic function contains all of the information from the Pmf, but it is encoded in a form that is hard to interpret. However, if we are given a characteristic function, we can find the corresponding Pmf.
Step42: And here's an example
Step43: Now we can use the characteristic function to compute a convolution. CharFunc provides __mul__, which multiplies the hs elementwise and returns a new CharFunc object
Step44: And here's how we can use it to compute the distribution of the sum of 6 dice.
Step45: Here are the probabilities, mean, and variance.
Step46: This might seem like a roundabout way to compute a convolution, but it is efficient. The time to Compute the CharFunc objects is $O(n \log n)$. Multiplying them together is $O(n)$. And converting back to a Pmf is $O(n \log n)$.
Step50: Distributions
Step51: When you create a Dist, you provide a dictionary of values and probabilities.
Step52: We inherit __getitem__ from Pmf, so we can look up the probability of a value.
Step53: We also get mean and variance from Pmf
Step54: But we can also use methods from Cdf, like ValueArray
Step55: And Probs
Step56: And Sample
Step57: Dist.__add__ uses Pmf.__add__, which performs convolution the slow way
Step58: Dist.__mul__ uses CharFunc.__mul__, which performs convolution the fast way.
|
15,685 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
from pyseidon import *
Station?
station=Station('http://ecoii.acadiau.ca/thredds/dodsC/ecoii/test/Station3D_dngrid_BF_20130730_20130809.nc')
print station.Grid.name
flowDir, velNorm = station.Util2D.flow_dir('GP_120726_BPa')
flowDir, velNorm = station.Util2D.flow_dir('GP_120726_BPa', exceedance=True)
fI, eI, pa, pav = station.Util2D.ebb_flood_split('GP_120726_BPa')
print "Flood mean flow speed: " + str(velNorm[fI].mean()) + " m/s"
print "Ebb mean flow speed: " + str(velNorm[eI].mean()) + " m/s"
station.Util2D.speed_histogram('Westport')
station.Util2D.speed_histogram('DG_1a')
station.Util2D.speed_histogram('PP-120917-BPa')
harmo = station.Util2D.Harmonic_analysis_at_point('GP_120726_BPa', velocity=True, elevation=False)
print harmo
velos = station.Util2D.Harmonic_reconstruction(harmo)
vs = station.Util3D.verti_shear('GP_120726_BPa', time_ind=eI)
norm = station.Util3D.velo_norm('GP_120726_BPa', time_ind=fI)
fd = station.Util3D.flow_dir('GP_120726_BPa', time_ind=fI, vertical=True)
depths = station.Util3D.depth('GP_120726_BPa')
meandepths = np.mean(depths[fI], axis=0)
meanfd = np.mean(fd,axis=0)
station.Plots.plot_xy(meanfd, meandepths, title='Flood flow direction at GP_120726_BPa', xlabel='direction', ylabel='depth')
station.dump_profile_data(meanfd, meandepths, title='Flood flow direction at GP_120726_BPa', xlabel='direction', ylabel='depth')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. PySeidon - Station object initialisation
Step2: The star here means "all". Usually this form of statement would import the entire library. In the case of PySeidon, this statement will import the following object classes
Step3: According to the documentation, in order to define a Station object, the only required input is a filename. This string input represents the path to a file or to a folder containing files (e.g. testStation=Station('./path_to_Station_output_file/filename') or testStation=Station('./path_to_Station_output_file/folder/') ), and the file can be a pickle file (i.e. *.p) or a netcdf file (i.e. *.nc).
Step4: 1.3. Object attributes, functions, methods & special methods
Step5: Util3D
Step6: Save functions
|
15,686 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
def sigmoid(x):
return 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation here
self.activation_function = sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2) #.T
targets = np.array(targets_list, ndmin=2) #.T
#print(inputs.shape, targets.shape)
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = inputs @ self.weights_input_to_hidden # signals into hidden layer
hidden_outputs = self.activation_function(x=hidden_inputs) #self.activation_function(x=hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = hidden_outputs @ self.weights_hidden_to_output #np.dot(self.weights_hidden_to_output, hidden_outputs) #hidden_outputs * self.weigths_hidden_to_output # Aras: no biases I am assuming # signals into final output layer
final_outputs = final_inputs * 1 #Aras: f(x)=x for in/out layer #self.activation_function(x=final_inputs) # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
output_errors = (final_outputs - targets) # Aras: derivative of f(x)=x # Output layer error is the difference between desired target and actual output.
output_grad = output_errors * 1 # activation functon for output layer is f(x)=x and derivative is 1 (slope)
# TODO: Backpropagated error - Replace these values with your calculations.
hidden_errors = output_grad @ self.weights_hidden_to_output.T #np.dot(self.weights_hidden_to_output.T, output_grad) #* (1-hidden_outputs) * hidden_outputs #(targets_list-final_outputs) * -1 * -(1 - self.activation_function(x)) * self.activation_function(x) # errors propagated to the hidden layer
#hidden_grad = np.dot(hidden_errors, self.activation_derivative(x=hidden_inputs)) #hidden_errors * hidden_outputs * (1-hidden_outputs) #np.dot(inputs, hidden_errors) #hidden_error * self.weights_hidden_to_input # hidden layer gradients
hidden_grad = hidden_errors * hidden_outputs * (1-hidden_outputs) # activation_function 1st derivative
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output -= self.lr * hidden_outputs.T @ output_grad #np.dot(hidden_outputs, output_grad).T #output_errors * self.weights_hidden_to_output * hidden_outputs # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden -= self.lr * inputs.T @ hidden_grad #hidden_grad * inputs_list # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2) #.T -> Aras: I don't understand why? As I mentioned also above.
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = inputs @ self.weights_input_to_hidden #np.dot(self.weights_input_to_hidden, inputs) #inputs * self.weight_input_to_hidden # signals into hidden layer
hidden_outputs = self.activation_function(x=hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = hidden_outputs @ self.weights_hidden_to_output #np.dot(self.weights_hidden_to_output, hidden_outputs) #hidden_outputs * self.weights_hidden_to_output # signals into final output layer
final_outputs = final_inputs * 1 #np.zeros((1, len(inputs_list))) # signals from final output layer
return final_outputs
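# Side note (a numerical check I am adding, not part of the project rubric): the
# factor hidden_outputs * (1 - hidden_outputs) used in train() is the sigmoid
# derivative, since d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)).
_s = 1/(1+np.exp(-0.3))
_fd = (1/(1+np.exp(-(0.3+1e-6))) - _s) / 1e-6  # finite-difference slope at x=0.3
_s*(1-_s), _fd  # the two numbers should nearly agree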
def MSE(y_ref, Y_res): # Aras: ref==reference && res==result # Aras changed this part by adding ref and res.
return np.mean((y_ref-Y_res)**2)
import sys
### Set the hyperparameters here ###
epochs = 100
learning_rate = 0.001
hidden_nodes = 28 #56/2 - half of the input features/dimensions
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values, # Aras: features/ Dimensions/ Variables_txn?
train_targets.loc[batch]['cnt']): # Aras: Labels/ Indeces/ Classes_txn?
network.train(record, target) # Aras: record==sample && target==label
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values) ## Aras: Can we use SSE instead?
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=2.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions, label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]]).T
test_w_h_o = np.array([[0.3, -0.1]]).T
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
#print(network.weights_hidden_to_output)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]]).T))
#print(network.weights_input_to_hidden)
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]]).T))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: OPTIONAL
|
15,687 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd # Start by importing the tweets data
X = pd.read_csv('../datasets/tweets.csv')
X.shape
X.columns
X.info()
X.head(5)
min(X.Avg)
max(X.Avg)
X.Avg.hist();
corpusTweets = X.Tweet.tolist() # get a list of all tweets, then is easier to apply preprocessign to each item
# Convert to lower-case
corpusLowered = [s.lower() for s in corpusTweets]
corpusLowered[0:5]
# Remove punctuation
import re
corpusNoPunct = [re.sub(r'([^\s\w_]|_)+', ' ', s.strip()) for s in corpusLowered]
corpusNoPunct[0:5]
import os
def readStopwords():
'''
returns stopwords as strings
Assume that a file called "stopwords.txt"
exists in the folder
'''
filename = "stopwords.txt"
path = os.path.join("", filename)
file = open(path, 'r')
return file.read().splitlines() # splitlines is used to remove newlines
stopWords = set(readStopwords())
"the" in stopWords # quick test
stopWords.add("apple")
stopWords.add("appl")
stopWords.add("iphone")
stopWords.add("ipad")
stopWords.add("ipod")
stopWords.add("itunes")
stopWords.add("ios")
stopWords.add("http")
print ("apple" in stopWords)
print ("google" in stopWords)
# tokenise
corpusTokens = [s.split() for s in corpusNoPunct]
corpusTokens[0:3]
# Stem document
from nltk import PorterStemmer
porter = PorterStemmer()
corpus = []
for tweet in corpusTokens:
cleanTokens = [token for token in tweet if token not in stopWords] # a list of tokens
stemmedTokens = [porter.stem(token) for token in cleanTokens]
cleanTweet = ' '.join(stemmedTokens)
corpus.append(cleanTweet)
corpus[0:5]
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(lowercase=False, max_features=500)
cv.fit(corpus)
'apple' in cv.vocabulary_ # a quick test
cv.get_feature_names()[0:20] # in alphabetical order
bagOfWords = cv.transform(corpus)
bagOfWords
sum_words = bagOfWords.toarray().sum(axis=0)
words_freq = [(word, sum_words[idx]) for word, idx in cv.vocabulary_.items()]
words_freq =sorted(words_freq, key = lambda x: x[1], reverse=True)
words_freq[:10]
df = pd.DataFrame(bagOfWords.toarray())
df.shape
df.info()
df.head(1)
import numpy.random
numpy.random.seed(100) # just for reproducibility
from sklearn.model_selection import train_test_split
X.Avg = [int(round(a)) for a in X.Avg] # cluster target into 5 classes
X_train, X_test, y_train, y_test = train_test_split(df, X.Avg, test_size=0.25)
X_test.shape
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(X_train, y_train)
predictions = classifier.predict(X_test)
predictions[0:100]
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy: {:.2}".format(metrics.accuracy_score(y_test, predictions)))
mat = metrics.confusion_matrix(y_test, predictions)
mat
import matplotlib.pyplot as plt
labels = ['strongly neg.', 'negative', 'neutral', 'positive', 'strongly pos.']
fig = plt.figure()
ax = fig.add_subplot(111)
cm = ax.matshow(mat)
# plot the title, use y to leave some space before the labels
plt.title("Confusion matrix - Tweets arranged by sentiment", y=1.2)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.setp(ax.get_xticklabels(), rotation=-30, ha="right",
rotation_mode="anchor")
plt.xlabel("Predicted")
plt.ylabel("Actual")
# Loop over data dimensions and create text annotations.
for i in range(len(mat)):
for j in range(len(mat)):
text = ax.text(j, i, mat[i, j],
ha="center", va="center", color="w")
# Create colorbar
fig.colorbar(cm);
correctPredictions = sum(mat[i][i] for i in range(len(mat)))
correctPredictions
print("Accuracy: {:.2}".format(correctPredictions / len(y_test)))
neutralTweets = sum(1 for sentiment in y_test if sentiment == 0) # neutral tweets in Test dataset
neutralTweets
len(y_test) - neutralTweets
print("Accuracy baseline: {:.2}".format(neutralTweets / len(y_test)))
# for simplicity, it re-uses the vectorizer and the classifier without passing them
# as arguments. Industrialising it would mean to create a pipeline with
# vectoriser > classifier > label string
def predictSentiment(t):
bow = cv.transform([t])
prediction = classifier.predict(bow)
if prediction == 0:
return "Neutral"
elif prediction > 0:
return "Positive"
else:
return "Negative"
predictSentiment("I don't know what to think about apple!")
predictSentiment("I love apple, its products are always the best, really!")
predictSentiment("Apple lost its mojo, I will never buy again an iphone better an Android")
X.loc[X.Avg < 0, 'Avg'] = -1 # negative sentiment (reclassify only the Avg column)
X.loc[X.Avg >= 0, 'Avg'] = 1 # NON-negative sentiment
X_train, X_test, y_train, y_test = train_test_split(df, X.Avg, test_size=0.25)
classifier = MultinomialNB() # 0.77
classifier.fit(X_train, y_train)
predictionsTwo = classifier.predict(X_test)
predictionsTwo[0:100]
# Model Accuracy, how often is the classifier correct?
print("Accuracy: {:.2}".format(metrics.accuracy_score(y_test, predictionsTwo)))
matBinary = metrics.confusion_matrix(y_test, predictionsTwo)
matBinary
labels = ['negative', 'NOT negative']
fig = plt.figure()
ax = fig.add_subplot(111)
cm = ax.matshow(matBinary)
# plot the title, use y to leave some space before the labels
plt.title("Confusion matrix - Tweets arranged by sentiment", y=1.2)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.setp(ax.get_xticklabels(), rotation=-30, ha="right",
rotation_mode="anchor")
plt.xlabel("Predicted")
plt.ylabel("Actual")
# Loop over data dimensions and create text annotations.
for i in range(len(matBinary)):
for j in range(len(matBinary)):
text = ax.text(j, i, matBinary[i, j],
ha="center", va="center", color="w")
# Create colorbar
fig.colorbar(cm);
tn, fp, fn, tp = matBinary.ravel()
print("True Negatives: ",tn)
print("False Positives: ",fp)
print("False Negatives: ",fn)
print("True Positives: ",tp)
Accuracy = (tn+tp)/(tp+tn+fp+fn)
print("Accuracy: {:.2f}".format(Accuracy))
Sensitivity = tp/(tp+fn)
print("Sensitivity {:0.2f}".format(Sensitivity))
#Specificity
Specificity = tn/(tn+fp)
print("Specificity {:0.2f}".format(Specificity))
probs = classifier.predict_proba(X_test) # get the probabilities
preds = probs[:,1] ## keep probabilities for the positive outcome only
fpr, tpr, threshold = metrics.roc_curve(y_test, preds) # calculate roc
roc_auc = metrics.auc(fpr, tpr) # calculate AUC
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.plot([0, 1], [0, 1],'r--') # plot random guessing
plt.legend(loc = 'lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
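# One common use of the ROC curve (my addition, reusing the fpr/tpr/threshold arrays
# computed above): choose the probability cut-off that maximises Youden's J statistic,
# i.e. sensitivity + specificity - 1.
import numpy as np
youden_j = tpr - fpr
best_cutoff = threshold[np.argmax(youden_j)]
print("Cut-off maximising sensitivity + specificity - 1: {:0.2f}".format(best_cutoff))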
# Precision
Precision = tp/(tp+fp)
print("Precision or Positive Predictive Power: {:0.2f}".format(Precision))
# Negative Predictive Value
print("Negative predictive Power: {:0.2f}".format(tn / (tn+fn)))
# F1 Score
f1 = (2 * Precision * Sensitivity) / (Precision + Sensitivity)
print("F1 Score {:0.2f}".format(f1))
classifierTuned = MultinomialNB(class_prior=[.4, 0.6]) # try to max specificity
classifierTuned.fit(X_train, y_train)
predictionsTuned = classifierTuned.predict(X_test)
matTuned = metrics.confusion_matrix(y_test, predictionsTuned)
matTuned
tn, fp, fn, tp = matTuned.ravel()
Accuracy = (tn+tp)/(tp+tn+fp+fn)
print("Accuracy: {:.2f}".format(Accuracy)) # it was 0.79
Sensitivity = tp/(tp+fn)
print("Sensitivity {:0.2f}".format(Sensitivity)) #it was 0.9
Specificity = tn/(tn+fp)
print("Specificity {:0.2f}".format(Specificity)) # it was 0.53
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It contains 1181 tweets (as text) and one manually labeled sentiment.
Step2: 2 means very positive, 0 is neutral and -2 is very negative
Step3: Process the data
Step4: Now we remove the stopwords. First we define which are the common words (stopwords) to be removed
Step5: To remove a word from the corpus if that word is contained in our stopwords set, we need first to tokenise the corpus (i.e., split it into words or tokens)
Step6: Create a Document-Term matrix
Step7: Now we use the voctoriser to transform the corpus into a sparse matrix where each tweet has 1 if the feature is present in it or 0 if not.
Step8: We put it into a data frame to use it in the classifier
Step9: We start by splitting the tweets into training and test sets, as usual
Step10: Train and test the classifier
Step11: Metrics
Step12: The classifier was correct 64% of times (not only if a tweet was negative but also if it was strongly negative or moderately negative).
Step13: It's more clear if we visualise it as a heat map
Step14: The numbers in the diagonal are all the times when the predicted sentiment for a tweet was the same as the actual sentiment.
Step15: Which is the same value as above.
Step16: This tells us that in our test dataset we have 178 observation with neutral sentiment and 118 with positive or negative tweets.
Step17: So our Naive Bayesian model does better than the simple baseline.
Step18: Ok. We try with two new tweets and see what we get, one positive and one negative
Step19: Binary Classification
Step20: We need to re-apply the classifier
Step21: As you can see, there is no more classes 2, 0 or -2 now
Step22: Of course is better, we have less classes to predict, less errors to make.
Step23: In a two-class problem, we are often looking to discriminate between observations with a specific outcome, from normal observations. Such as a disease state or no disease state or spam versus no-spam.
Step24: Accuracy can be re-formulated as the ratio between the true events (positive and negative) and the total events
Step25: Accuracy is not a reliable metric for the real performance of a classifier, because it will yield misleading results if the data set is unbalanced (that is, when the numbers of observations in different classes vary greatly).
Step26: Sensitivity is a real number between 0 and 1. A sensitivity of 1 means that ALL the Negative cases have been correctly classified.
Step27: ROC (Receiver Operating Characteristic) curve
Step28: The ROC curve is a useful tool for a few reasons
Step29: Similarly, you can calculate the Negative Predictive Power
Step30: The F1 score is the harmonic mean of the Precision & Sensitivity, and is used to indicate a balance between them. It ranges from 0 to 1; F1 Score reaches its best value at 1 (perfect precision & sensitivity) and worst at 0.
Step31: What do we use the ROC for?
|
15,688 | <ASSISTANT_TASK:>
Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_AEP",
# Define devlib modules to load
"modules" : ["cpufreq"], # Required by rt-app calibration
"exclude_modules" : [ 'hwmon' ],
# Energy Meters Configuration for ARM Energy Probe
"emeter" : {
"instrument" : "aep",
"conf" : {
# Value of the shunt resistor in Ohm
'resistor_values' : [0.099],
# Device entry assigned to the probe on the host
'device_entry' : '/dev/ttyACM0',
},
'channel_map' : {
'BAT' : 'BAT'
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
# "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Generated energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
logging.info("Samples collected for the BAT channel (only first 10)")
samples_file = os.path.join(te.res_dir, 'samples.csv')
!head $samples_file
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import required modules
Step2: Target Configuration
Step3: Workload Execution and Power Consumptions Samping
Step4: Power Measurements Data
|
15,689 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
ef = pd.read_excel('EIA CO2 factors.xlsx', header=1, skip_footer=1,
index_col='EIA Fuel Code')
ef.columns = [name.strip() for name in ef.columns]
ef['Link'] = 'https://www.eia.gov/electricity/annual/html/epa_a_03.html'
ef.rename_axis({'Factor (Kilograms of CO2 Per Million Btu)**':'Fossil Factor'}, axis=1, inplace=True)
ef.loc['BFG',:] = ['Blast Furnace Gas', 274.32, None , 'https://www.epa.gov/sites/production/files/2015-07/documents/emission-factors_2014.pdf']
ef.loc['MSN',:] = ['Non-biomass Municipal Solid Waste', 90.7, 'Assume the same as MSW', 'https://www.epa.gov/sites/production/files/2015-07/documents/emission-factors_2014.pdf']
# Use the bituminous coal factor for synthetic coal
ef.loc['SC',:] = ['Synthetic Coal', 93.3, 'Use same factor as BIT and RC', None]
# Use fuel gas for other gases
ef.loc['OG', :] = ['Other gases', 59, 'Assume fuel gas', 'https://www.epa.gov/sites/production/files/2015-07/documents/emission-factors_2014.pdf']
# Use BIT factor for synthetic gas from coal (SGC) and DFO for SGP
# The actual factors won't matter as much because we're going to use the EPA
# emission values.
ef.loc['SGC', ['Fossil Factor', 'Notes']] = [93.3, 'Using BIT value. Likely not correct']
ef.loc['SGP', ['Fossil Factor', 'Notes']] = [73.16, 'Using DFO value. Likely not correct']
non_fossil_fuels = ['AB', 'BLQ', 'LFG', 'MSB', 'NUC', 'OBG', 'OBL', 'OBS',
'OTH', 'PUR', 'SLW', 'SUN', 'WAT', 'WDL', 'WDS', 'WH', 'WND']
for fuel in non_fossil_fuels:
ef.loc[fuel, ['Fossil Factor', 'Notes']] = [0, 'non-fossil fuel']
epa_factor_link = 'https://www.epa.gov/sites/production/files/2015-07/documents/emission-factors_2014.pdf'
ipcc_factor_link = 'http://www.ipcc-nggip.iges.or.jp/public/2006gl/pdf/2_Volume2/V2_2_Ch2_Stationary_Combustion.pdf'
ef['Total Factor'] = ef['Fossil Factor']
ef.loc['BLQ', ['Total Factor', 'Notes', 'Link']] = [94.4, 'Table 6-6, North American Softwood', 'https://www.epa.gov/sites/production/files/2015-03/documents/subpartaa-tsd-pulp_and_paper.pdf']
ef.loc['AB', ['Total Factor', 'Link']] = [118.17, epa_factor_link]
ef.loc['LFG', ['Total Factor', 'Link']] = [52.17, epa_factor_link]
ef.loc['MSB', ['Total Factor', 'Notes', 'Link']] = [90.7, 'Biomass portion of MSW', epa_factor_link]
ef.loc['OBG', ['Total Factor', 'Link']] = [52.17, epa_factor_link]
ef.loc['OBL', ['Total Factor', 'Link']] = [83.98, ipcc_factor_link]
ef.loc['OBS', ['Total Factor', 'Link']] = [105.51, epa_factor_link]
ef.loc[['SLW', 'WDL'], ['Total Factor', 'Notes', 'Link']] = [83.98, 'Assume same as OBL', ipcc_factor_link]
# ef.loc['WDL', ['Total Factor', 'Notes', 'Link']] = [83.98, 'Assume same as OBL', ipcc_factor_link]
ef.loc['WDS', ['Total Factor', 'Link']] = [93.8, epa_factor_link]
import os  # needed for os.path.join below
path = os.path.join('Clean data', 'Final emission factors.csv')
ef.to_csv(path)
ef
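# Quick sanity check (my addition, not in the original workflow): after the edits
# above, every fuel code should have a numeric Total Factor before the table is used.
ef[ef['Total Factor'].isnull()]  # expected to be empty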
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add EPA emission factors for fossil fuels not included in the EIA file
Step2: Add non-fossil emission factors for a total emission factor column
Step3: Export
|
15,690 | <ASSISTANT_TASK:>
Python Code:
import numpy as np #For numerical programming and multi-dimensional arrays
from pandas import date_range #For date-rate generation
from bqplot import LinearScale, Lines, Axis, Figure, DateScale, ColorScale
security_1 = np.cumsum(np.random.randn(150)) + 100.
security_2 = np.cumsum(np.random.randn(150)) + 100.
sc_x = LinearScale()
sc_y = LinearScale()
line = Lines(x=np.arange(len(security_1)), y=security_1,
scales={'x': sc_x, 'y': sc_y})
ax_x = Axis(scale=sc_x, label='Index')
ax_y = Axis(scale=sc_y, orientation='vertical', label='y-values of Security 1')
Figure(marks=[line], axes=[ax_x, ax_y], title='Security 1')
line.colors = ['DarkOrange']
# The opacity allows us to display the Line while featuring other Marks that may be on the Figure
line.opacities = [.5]
line.stroke_width = 2.5
line.fill = 'bottom'
line.fill_opacities = [0.2]
line.line_style = 'dashed'
line.interpolation = 'basis'
line.marker = 'triangle-down'
# Here we define the dates we would like to use
dates = date_range(start='01-01-2007', periods=150)
dt_x = DateScale()
sc_y = LinearScale()
time_series = Lines(x=dates, y=security_1, scales={'x': dt_x, 'y': sc_y})
ax_x = Axis(scale=dt_x, label='Date')
ax_y = Axis(scale=sc_y, orientation='vertical', label='Security 1')
Figure(marks=[time_series], axes=[ax_x, ax_y], title='A Time Series Plot')
x_dt = DateScale()
y_sc = LinearScale()
dates_new = date_range(start='06-01-2007', periods=150)
securities = np.cumsum(np.random.randn(150, 10), axis=0)
positions = np.random.randint(0, 2, size=10)
# We pass the color scale and the color data to the lines
line = Lines(x=dates, y=[security_1, security_2],
scales={'x': x_dt, 'y': y_sc},
labels=['Security 1', 'Security 2'])
ax_x = Axis(scale=x_dt, label='Date')
ax_y = Axis(scale=y_sc, orientation='vertical', label='Security 1')
Figure(marks=[line], axes=[ax_x, ax_y], legend_location='top-left')
line.x, line.y = [dates, dates_new], [security_1, security_2]
x_dt = DateScale()
y_sc = LinearScale()
col_sc = ColorScale(colors=['Red', 'Green'])
dates_color = date_range(start='06-01-2007', periods=150)
securities = 100. + np.cumsum(np.random.randn(150, 10), axis=0)
positions = np.random.randint(0, 2, size=10)
# Here we generate 10 random price series and 10 random positions
# We pass the color scale and the color data to the lines
line = Lines(x=dates_color, y=securities.T,
scales={'x': x_dt, 'y': y_sc, 'color': col_sc}, color=positions,
labels=['Security 1', 'Security 2'])
ax_x = Axis(scale=x_dt, label='Date')
ax_y = Axis(scale=y_sc, orientation='vertical', label='Security 1')
Figure(marks=[line], axes=[ax_x, ax_y], legend_location='top-left')
line.color = None
sc_x = LinearScale()
sc_y = LinearScale()
patch = Lines(x=[[0, 2, 1.2, np.nan, np.nan, np.nan, np.nan], [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan], [4,5,6, 6, 5, 4, 3]],
y=[[0, 0, 1 , np.nan, np.nan, np.nan, np.nan], [0.5, 0.5, -0.5, np.nan, np.nan, np.nan, np.nan], [1, 1.1, 1.2, 2.3, 2.2, 2.7, 1.0]],
fill_colors=['orange', 'blue', 'red'],
fill='inside',
stroke_width=10,
close_path=True,
scales={'x': sc_x, 'y': sc_y},
display_legend=True)
Figure(marks=[patch], animation_duration=1000)
patch.opacities = [0.1, 0.2]
patch.x = [[2, 3, 3.2, np.nan, np.nan, np.nan, np.nan], [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan], [4,5,6, 6, 5, 4, 3]]
patch.close_path = False
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Random Data Generation
Step2: Basic Line Chart
Step3: The x attribute refers to the data represented horizontally, while the y attribute refers the data represented vertically.
Step4: In a similar way, we can also change any attribute after the plot has been displayed to change the plot. Run each of the cells below, and try changing the attributes to explore the different features and how they affect the plot.
Step5: To switch to an area chart, set the fill attribute, and control the look with fill_opacities and fill_colors.
Step6: While a Lines plot allows the user to extract the general shape of the data being plotted, there may be a need to visualize discrete data points along with this shape. This is where the markers attribute comes in.
Step7: The marker attributes accepts the values square, circle, cross, diamond, square, triangle-down, triangle-up, arrow, rectangle, ellipse. Try changing the string above and re-running the cell to see how each marker type looks.
Step8: Plotting Multiples Sets of Data with Lines
Step9: We pass each data set as an element of a list. The colors attribute allows us to pass a specific color for each line.
Step10: Similarly, we can also pass multiple x-values for multiple sets of y-values
Step11: Coloring Lines according to data
Step12: We can also reset the colors of the Line to their defaults by setting the color attribute to None.
Step13: Patches
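A minimal sketch (assuming the line mark built in the code above) of switching the marker type described in Step8:
line.marker = 'circle'   # any of the marker strings listed in Step8
line.marker_size = 64    # optionally adjust the marker size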
|
15,691 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sbn
import pandas as pd
from uuid import uuid4
from lpde.geometry import WidthOf, Window, PointAt, BoundingBox, Mapper, Grid
from lpde.estimators import ParallelEstimator
from lpde.estimators.datatypes import Event, Degree, Action, Scalings, Signal
from lpde.producers import MockParams, MockProducer
from lpde.producers.distributions import gaussian
%matplotlib notebook
legendre_width = WidthOf(1.8)
center = PointAt(0, 0)
window = Window(1.8, 1.8)
bounds = BoundingBox(center, window)
mapper = Mapper(bounds, legendre_width)
degree = Degree(20, 20)
params = MockParams(20, 100, 1, gaussian)
density = ParallelEstimator(degree, mapper, params)
action = Action.ADD
point = PointAt(0.5, 0.5)
event = Event(uuid4(), action, point)
grid = Grid(100, 100)
def gaussian():
x, y = np.random.multivariate_normal((0,0), ((0.1,0), (0,0.1)))
if (-0.9 <= x <= 0.9) and (-0.9 <= y <= 0.9):
return x, y
else:
return gaussian()
def uniform():
return np.random.uniform(low=-0.9, high=0.9, size=2)
locations = []
def new_event(dist):
location = dist()
locations.append(location)
point = PointAt(*location)
return Event(uuid4(), Action(1), point)
def random_event(dist):
event_type = np.random.randint(low=-1, high=2)
if event_type == 1:
location = dist()
point = PointAt(*location)
return Event(uuid4(), Action(1), point)
elif event_type == 0:
location = dist()
point = PointAt(*location)
column = density._phi.sample(1, axis=1).columns.values[0]
return Event(column, Action(0), point)
column = density._phi.sample(1, axis=1).columns.values[0]
return Event(column, Action(-1))
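# Note: random_event mixes three event kinds -- Action(1) adds a new point,
# Action(0) (presumably) re-locates an existing point picked by its column id,
# and Action(-1) drops an existing point.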
density.controller.start(1, 1.0)
density.producer.start()
density.at(point)
density.controller.stop()
#%%time
for i in range(100):
density.update_with(new_event(gaussian))
#while not density.controller.phi_queue.empty():
# pass
%%time
for i in range(1000):
density.update_with(random_event(gaussian))
while not density._phi_queue_empty:
pass
%%time
for i in range(1000):
density.at(new_event(uniform).location)
while not density._phi_queue_empty:
pass
n_hat = density.on(grid)
fig, ax = plt.subplots()
ax.set(xlabel=r'$x$', ylabel=r'$y$')
#ax.scatter(*zip(*locations), s=5, c='k')
contour = ax.imshow(n_hat,
cmap='viridis',
alpha=0.9,
extent=bounds.x_range+bounds.y_range,
origin='lower',
interpolation='bicubic')
cbar = plt.colorbar(contour, ax=ax, label=r'$n(x)$')
fig.tight_layout()
from bokeh.io import output_file, show
from bokeh.plotting import figure
from bokeh.tile_providers import STAMEN_TONER, STAMEN_TERRAIN
bound = 20000000 # meters
fig = figure(tools='pan, wheel_zoom', x_range=(-bound, bound), y_range=(-bound, bound))
fig.axis.visible = False
fig.add_tile(STAMEN_TERRAIN)
output_file("stamen_toner_plot.html")
show(fig)
from bokeh.plotting import figure, show, output_file
p = figure(x_range=bounds.x_range, y_range=bounds.y_range)
# must give a vector of image data for image parameter
p.image(image=[n_hat],
x=bounds.x_range[0],
y=bounds.y_range[0],
dw=bounds.window[0],
dh=bounds.window[1],
palette="Spectral11",
alpha=0.8)
output_file("image.html", title="image.py example")
show(p) # open a browser
def f():
pass
type(f)
callable(f)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notebook settings
Step2: Density Estimation
Step3: Create mock data streams
Step4: Timings of density estimation
Step5: Timings
|
15,692 | <ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, shape=[None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable("W1", [4,4,3,8], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable("W2", [2,2,8,16], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X,W1, strides = [1,1,1,1], padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize = [1,8,8,1], strides = [1,8,8,1], padding = 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1,W2, strides = [1,1,1,1], padding = 'SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize = [1,4,4,1], strides = [1,4,4,1], padding = 'SAME')
# FLATTEN
F = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (do not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(F, num_outputs=6,activation_fn=None,weights_regularizer = tf.contrib.layers.l2_regularizer(0.01))
### END CODE HERE ###
return Z3
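# Shape walk-through for one 64x64x3 input (a sketch based on the windows/strides above):
# CONV2D 4x4x3->8, stride 1, SAME -> (64, 64, 8)
# MAXPOOL 8x8, stride 8, SAME -> (8, 8, 8)
# CONV2D 2x2x8->16, stride 1, SAME -> (8, 8, 16)
# MAXPOOL 4x4, stride 4, SAME -> (2, 2, 16)
# FLATTEN -> 64 features -> fully connected -> 6 class scores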
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y)) + tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer,cost],feed_dict={X:minibatch_X, Y:minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
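# A minimal follow-up sketch (hypothetical, mirroring the preprocessing of the training set):
# scale the pixels to [0, 1] and add a batch dimension before any forward pass.
my_image_input = my_image.reshape(1, 64, 64, 3) / 255.
print(my_image_input.shape)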
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
Step5: 1.1 - Create placeholders
Step7: Expected Output
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output
|
15,693 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import scanpy.api as sc
from anndata import AnnData
from numpy.random import negative_binomial, binomial, seed
seed(1234)
# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes
n_simulated_cells=1000
n_simulated_genes=100
n_cluster=100
n_marker_genes=10
# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r
# Differential gene expression is simulated using reference parameters for all cells/genes
# except for marker genes in the distinct cells.
reference_zero_inflation=0.15
reference_p=0.25
reference_n=2
cluster_zero_inflation=0.9
cluster_p=0.5
cluster_n=1
adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),
negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))
# adapt marker_genes for cluster
adata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),
negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))
import pandas as pd
smp='true_groups'
true_groups_int=np.ones((n_simulated_cells,))
true_groups_int[0:n_cluster]=0
true_groups=list()
for i,j in enumerate(true_groups_int):
true_groups.append(str(j))
adata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')
adata.uns[smp + '_order']=np.asarray(['0','1'])
sc.tl.rank_genes_groups(adata, 'true_groups')
sc.pl.rank_genes_groups(adata, n_genes=20)
sc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20)
# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes
n_simulated_cells=1000
n_simulated_genes=100
n_cluster=100
n_marker_genes=10
# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r
# Differential gene expression is simulated using reference parameters for all cells/genes
# except for marker genes in the distinct cells.
reference_zero_inflation=0.15
reference_p=0.5
reference_n=6
cluster_zero_inflation=0.9
cluster_p=0.5
cluster_n=1
adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),
negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))
# adapt marker_genes for cluster
adata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),
negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))
import pandas as pd
smp='true_groups'
true_groups_int=np.ones((n_simulated_cells,))
true_groups_int[0:n_cluster]=0
true_groups=list()
for i,j in enumerate(true_groups_int):
true_groups.append(str(j))
adata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')
adata.uns[smp + '_order']=np.asarray(['0','1'])
sc.tl.rank_genes_groups(adata, 'true_groups')
sc.pl.rank_genes_groups(adata, n_genes=20)
sc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20)
# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes
n_simulated_cells=1000
n_simulated_genes=100
n_cluster=100
n_marker_genes=10
# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r
# Differential gene expression is simulated using reference parameters for all cells/genes
# except for marker genes in the distinct cells.
reference_zero_inflation=0.15
reference_p=0.5
reference_n=6
cluster_zero_inflation=0.9
cluster_p=0.55
cluster_n=2
adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),
negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))
# adapt marker_genes for cluster
adata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),
negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))
smp='true_groups'
true_groups_int=np.ones((n_simulated_cells,))
true_groups_int[0:n_cluster]=0
true_groups=list()
for i,j in enumerate(true_groups_int):
true_groups.append(str(j))
adata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')
adata.uns[smp + '_order']=np.asarray(['0','1'])
sc.tl.rank_genes_groups(adata, 'true_groups')
sc.pl.rank_genes_groups(adata, n_genes=20)
sc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, data following a (zero-inflated) negative binomial (ZINB) distribution is created for testing purposes. Test size and distribution parameters can be specified.
Step2: Create data.
Step3: Cluster according to true grouping
Step4: Testing
Step5: As can be seen above, not only does the Wilcoxon rank-sum test detect all marker genes, but there is also a clear difference from all other genes in the ranking.
Step6: This parameter initialization leads to the following expectations/ variances
Step7: With smaller difference in variance, still all marker genes are detected, but less clearly.
Step8: This parameter initialization leads to the following expectations/variances (a short sketch of the moment formulas follows this list)
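A short sketch of the moments behind these settings (assuming numpy's parameterisation, where negative_binomial(n, p) has mean n*(1-p)/p and variance n*(1-p)/p**2, and the Bernoulli factor keeps a draw with probability zero_inflation):
def zinb_mean(keep_prob, n, p):
    return keep_prob * n * (1 - p) / p
# e.g. first setup -- reference: zinb_mean(0.15, 2, 0.25) == 0.9, cluster: zinb_mean(0.9, 1, 0.5) == 0.9,
# i.e. equal means but very different variances (24 vs 2 for the underlying negative binomial).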
|
15,694 | <ASSISTANT_TASK:>
Python Code:
# pip install cartoframes
import pandas as pd
stores_df = pd.read_csv('http://libs.cartocdn.com/cartoframes/files/starbucks_brooklyn.csv')
stores_df.head()
from cartoframes.auth import set_default_credentials
set_default_credentials('creds.json')
from cartoframes.data.services import Geocoding
stores_gdf, _ = Geocoding().geocode(stores_df, street='address')
stores_gdf.head()
from cartoframes.viz import Map, Layer
Map(Layer(stores_gdf))
from cartoframes.viz import Map, Layer, size_continuous_style
Map(Layer(stores_gdf, size_continuous_style('revenue', size_range=[10,40]), title='Annual Revenue ($)'))
from cartoframes.data.services import Isolines
isochrones_gdf, _ = Isolines().isochrones(stores_gdf, [15*60], mode='walk')
isochrones_gdf.head()
stores_map = Map([
Layer(isochrones_gdf),
Layer(stores_gdf, size_continuous_style('revenue', size_range=[10,40]), title='Annual Revenue ($)')
])
stores_map
from cartoframes.data.observatory import Catalog
datasets_df = Catalog().country('usa').category('demographics').datasets.to_dataframe()
datasets_df[datasets_df['is_public_data'] == True]
from cartoframes.data.observatory import Dataset
dataset = Dataset.get('acs_sociodemogr_b758e778')
variables_df = dataset.variables.to_dataframe()
variables_df[variables_df['description'].str.contains('total population', case=False, na=False)]
from cartoframes.data.observatory import Variable
from cartoframes.data.observatory import Enrichment
variable = Variable.get('total_pop_3cf008b3')
isochrones_gdf = Enrichment().enrich_polygons(isochrones_gdf, [variable])
isochrones_gdf.head()
from cartoframes.viz import color_continuous_style
Map([
Layer(isochrones_gdf, color_continuous_style('total_pop'), title='Total Population'),
Layer(stores_gdf, size_continuous_style('revenue', size_range=[10,40]), title='Annual Revenue ($)')
])
result_map = Map([
Layer(
isochrones_gdf,
color_continuous_style('total_pop', stroke_width=0, opacity=0.7),
title='Total Population',
default_widget=True
),
Layer(
stores_gdf,
size_continuous_style('revenue', size_range=[10,40], stroke_color='white'),
title='Annual Revenue ($)',
default_widget=True
)
])
result_map
result_map.publish('starbucks_analysis', password=None, if_exists='replace')
from cartoframes import to_carto
to_carto(stores_gdf, 'starbucks_stores', if_exists='replace')
to_carto(isochrones_gdf, 'starbucks_isochrones', if_exists='replace')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For other ways to install CARTOframes, check out the Installation guide.
Step2: To display your stores as points on a map, you first have to convert the address column into geometries. This process is called geocoding and CARTO provides a straightforward way to do it (you can learn more about it in the Data Services guide).
Step3: Now that your credentials are set, we are ready to geocode the dataframe. The resulting data will be a GeoDataFrame.
Step4: Done! Now that the stores are geocoded, you will notice a new column named geometry has been added. This column stores the geographic location of each store and it's used to plot each location on the map.
Step5: Great! You have a map!
Step6: Good job! By using the size continuous visualization style you can see right away where the stores with higher revenue are. By default, visualization styles also provide a popup with the mapped value and an appropriate legend.
Step7: There they are! To learn more about creating isochrones and isodistances check out the Data Services guide.
Step8: Nice! Let's take the first one (acs_sociodemogr_b758e778) that has aggregated data from 2013 to 2018 and check which of its variables have data about the total population.
Step9: We can see the variable that contains the total population is the one with the slug total_pop_3cf008b3. Now we are ready to enrich our areas of influence with that variable.
Step10: Great! Let's see the result on a map
Step11: At this stage, we could say that the store on the right performs better than others because its area of influence is the one with the lowest population but the store is not the one with the lowest revenue. This insight will help us to focus on them in further analyses.
Step12: Cool! Now that you have a small dashboard to play with, let's publish it on CARTO so you are able to share it with anyone. To do this, you just need to call the publish method from the Map class
Step13: In order to improve the performance and reduce the size of your map, we recommend uploading the data to CARTO and using the table names in the layers instead. To upload your data, you just need to call to_carto with your GeoDataFrame (a short sketch of rebuilding the layers from those tables follows)
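A minimal sketch (reusing the table names from the upload above) of pointing the layers at the uploaded tables instead of the in-memory GeoDataFrames:
Map([
    Layer('starbucks_isochrones', color_continuous_style('total_pop'), title='Total Population'),
    Layer('starbucks_stores', size_continuous_style('revenue', size_range=[10, 40]), title='Annual Revenue ($)')
])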
|
15,695 | <ASSISTANT_TASK:>
Python Code:
def parse_barcodes(bcfile, bc_id='BC'):
res = {}
with open(bcfile, 'r') as fi:
for line in fi:
fields = line.strip().split(',')
if fields[0].startswith(bc_id):
res[fields[0]] = fields[1]
return res
def parse_exp_config(expfile, bc_dict):
res = []
fieldnames = ['id', 'sample', 'cond', 'barcode', 'size', 'region', 'Qbit', 'conc', 'dilution']
with open(expfile) as fi:
reader = csv.DictReader(fi, fieldnames=fieldnames)
for rec in reader:
if rec['id']:
res.append({
'sample': rec['sample'],
'bc_id': rec['barcode'],
'bc_seq': bc_dict[rec['barcode']],
'temp': int(rec['cond'][:2]),
'bcm': '+' in rec['cond'],
})
return pd.DataFrame.from_records(res)
bc_dict = parse_barcodes('../../data/Lexogen_Sense_RNA-Seq.csv')
exp_df = parse_exp_config('../../data/2017-03-09_NextSeq.csv', bc_dict)
agg_utr = pd.read_csv('../../data/utr.counts.csv')
agg_utr
def normalize(df, edf, columns=None):
'''
Prepares the UTR dataframe (`df`) for log transformation.
Adds experiment metadata from `edf`.
Adds pseudocounts to `utr_counts` and `UTR_length`.
Normalizes counts to UTR length.
'''
def pseudo_counts(x):
return x + 1 if x == 0 else x
df = df.merge(edf, how='left', on='sample')
if columns is not None:
df = df[columns]
# Add pseudocounts to allow log transform later
df['utr_counts'] = df['utr_counts'].apply(pseudo_counts)
df['UTR_length'] = df['UTR_length'].apply(pseudo_counts)
df['utr_norm'] = df['utr_counts'] / df['UTR_length']
return df
columns = ['gene', 'TSS', 'start', 'end', 'UTR_length',
'utr_counts', 'sample', 'bcm', 'temp']
utr = normalize(agg_utr, exp_df, columns)
utr
# build expression matrix
X = pd.DataFrame()
samples = []
for sample in set(utr['sample']):
mask = (utr['sample']==sample) & (utr['bcm']==False)
if not utr[mask].empty:
X[sample] = utr[mask]['utr_norm'].values
samples.append(sample)
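# X now holds one column per sample (rows are UTR regions); StandardScaler expects
# samples as rows and features as columns, hence the transpose of X.values below.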
# Same as .fit() and then .transform()
X_std = StandardScaler().fit_transform(X.values.T)
X_std
sklearn_pca = sklearnPCA(n_components=10)
Y = sklearn_pca.fit_transform(X_std)
print(Y)
print(sklearn_pca.explained_variance_)
print(sklearn_pca.explained_variance_ratio_)
vdf = pd.DataFrame()
vdf['PC'] = [(i+1) for i,x in enumerate(sklearn_pca.explained_variance_ratio_)]
vdf['var'] = sklearn_pca.explained_variance_ratio_
(ggplot(vdf, aes(x='PC', y='var'))
+ geom_point(size=5, alpha=0.3)
+ ylab('Explained variance')
+ ggtitle('Unfiltered -BCM')
)
pca_df = pd.DataFrame()
pca_df['cond'] = ['%doC' % exp_df[exp_df['sample']==sample]['temp'].iloc[0] for sample in samples]
pca_df['PC1'] = Y[:,0]
pca_df['PC2'] = Y[:,1]
pca_df
(ggplot(pca_df, aes(x='PC1', y='PC2', color='cond'))
+ geom_point(size=5, alpha=0.3)
+ ggtitle('Unfiltered -BCM')
)
# build expression matrix
X = pd.DataFrame()
samples = []
for sample in set(utr['sample']):
mask = (utr['sample']==sample) & (utr['bcm']==True)
if not utr[mask].empty:
X[sample] = utr[mask]['utr_norm'].values
samples.append(sample)
X_std = StandardScaler().fit_transform(X.values.T)
X_std
sklearn_pca = sklearnPCA(n_components=10)
Y = sklearn_pca.fit_transform(X_std)
print(Y)
print(sklearn_pca.explained_variance_)
print(sklearn_pca.explained_variance_ratio_)
vdf = pd.DataFrame()
vdf['PC'] = [(i+1) for i,x in enumerate(sklearn_pca.explained_variance_ratio_)]
vdf['var'] = sklearn_pca.explained_variance_ratio_
(ggplot(vdf, aes(x='PC', y='var'))
+ geom_point(size=5, alpha=0.3)
+ ylab('Explained variance')
+ ggtitle('Unfiltered +BCM')
)
pca_df = pd.DataFrame()
pca_df['cond'] = ['%doC' % exp_df[exp_df['sample']==sample]['temp'].iloc[0] for sample in samples]
pca_df['PC1'] = Y[:,0]
pca_df['PC2'] = Y[:,1]
pca_df
(ggplot(pca_df, aes(x='PC1', y='PC2', color='cond'))
+ geom_point(size=5, alpha=0.3)
+ ggtitle('Unfiltered +BCM')
)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the counts table
Step2: Normalize to UTR length
Step3: Notation
Step4: Principal Component Analysis (PCA)
Step5: Aside
Step6: +BCM samples
|
15,696 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import nsfg
df = nsfg.ReadFemPreg()
df
df.birthord.value_counts().sort_index()
df.prglngth.value_counts().sort_index()
df.agepreg.hist(by=df.birthord)
df.totalwgt_lb.mean()
df['totalwgt_kg'] = df.totalwgt_lb / 2.20462
df.totalwgt_kg.mean()
df.finalwgt.value_counts().sort_index()
df.outcome == 1
live = df[df.outcome == 1]
len(live)
len(live[(live.birthwgt_lb >= 0) & (live.birthwgt_lb <= 5)])
len(live[(live.birthwgt_lb >= 9) & (live.birthwgt_lb <= 95)])
firsts = df[df.birthord==1]
others = df[df.birthord>1]
len(firsts), len(others)
firsts.totalwgt_lb.mean()
others.totalwgt_lb.mean()
fln = firsts.prglngth.mean()
oln = others.prglngth.mean()
hdiff = (fln - oln) * (7*24)
print "First baby's average pregnancy weeks: %.2f" % fln
print "Not first baby's average pregnancy weeks: %.2f" % oln
print "Average hours of difference: %.2f" % hdiff
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Print value counts for <tt>birthord</tt> and compare to results published in the codebook
Step2: Print value counts for <tt>prglngth</tt> and compare to results published in the codebook
Step3: Print value counts for <tt>agepreg</tt> and compare to results published in the codebook.
Step4: Compute the mean birthweight.
Step5: Create a new column named <tt>totalwgt_kg</tt> that contains birth weight in kilograms. Compute its mean. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.
Step6: Look through the codebook and find a variable, other than the ones mentioned in the book, that you find interesting. Compute values counts, means, or other statistics.
Step7: Create a boolean Series.
Step8: Use a boolean Series to select the records for the pregnancies that ended in live birth.
Step9: Count the number of live births with <tt>birthwgt_lb</tt> between 0 and 5 pounds (including both). The result should be 1125.
Step10: Count the number of live births with <tt>birthwgt_lb</tt> between 9 and 95 pounds (including both). The result should be 798
Step11: Use <tt>birthord</tt> to select the records for first babies and others. How many are there of each?
Step12: Compute the mean weight for first babies and others.
Step13: Compute the mean <tt>prglngth</tt> for first babies and others. Compute the difference in means, expressed in hours.
|
15,697 | <ASSISTANT_TASK:>
Python Code:
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
chars.insert(0, "\0")
''.join(chars[1:-6])
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
cs=3
c1_dat = [idx[i] for i in xrange(0, len(idx)-1-cs, cs)]
c2_dat = [idx[i+1] for i in xrange(0, len(idx)-1-cs, cs)]
c3_dat = [idx[i+2] for i in xrange(0, len(idx)-1-cs, cs)]
c4_dat = [idx[i+3] for i in xrange(0, len(idx)-1-cs, cs)]
x1 = np.stack(c1_dat[:-2])
x2 = np.stack(c2_dat[:-2])
x3 = np.stack(c3_dat[:-2])
y = np.stack(c4_dat[:-2])
x1[:4], x2[:4], x3[:4]
y[:4]
x1.shape, y.shape
n_fac = 42
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name)
emb = Embedding(n_in, n_out, input_length=1)(inp)
return inp, Flatten()(emb)
c1_in, c1 = embedding_input('c1', vocab_size, n_fac)
c2_in, c2 = embedding_input('c2', vocab_size, n_fac)
c3_in, c3 = embedding_input('c3', vocab_size, n_fac)
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
c1_hidden = dense_in(c1)
dense_hidden = Dense(n_hidden, activation='tanh')
c2_dense = dense_in(c2)
hidden_2 = dense_hidden(c1_hidden)
c2_hidden = merge([c2_dense, hidden_2])
c3_dense = dense_in(c3)
hidden_3 = dense_hidden(c2_hidden)
c3_hidden = merge([c3_dense, hidden_3])
dense_out = Dense(vocab_size, activation='softmax')
c4_out = dense_out(c3_hidden)
model = Model([c1_in, c2_in, c3_in], c4_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.optimizer.lr=0.000001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr=0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr.set_value(0.000001)
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr.set_value(0.01)
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
def get_next(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict(arrs)
i = np.argmax(p)
return chars[i]
get_next('phi')
get_next(' th')
get_next(' an')
cs=8
c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
for n in range(cs)]
c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs, cs)]
xs = [np.stack(c[:-2]) for c in c_in_dat]
len(xs), xs[0].shape
y = np.stack(c_out_dat[:-2])
[xs[n][:cs] for n in range(cs)]
y[:cs]
n_fac = 42
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name+'_in')
emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)
return inp, Flatten()(emb)
c_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax')
hidden = dense_in(c_ins[0][1])
for i in range(1,cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden])
c_out = dense_out(hidden)
model = Model([c[0] for c in c_ins], c_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(xs, y, batch_size=64, nb_epoch=12)
def get_next(inp):
idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]
p = model.predict(idxs)
return chars[np.argmax(p)]
get_next('for thos')
get_next('part of ')
get_next('queens a')
n_hidden, n_fac, cs, vocab_size = (256, 42, 8, 86)
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, activation='relu', inner_init='identity'),
Dense(vocab_size, activation='softmax')
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(np.concatenate(xs,axis=1), y, batch_size=64, nb_epoch=8)
def get_next_keras(inp):
idxs = [char_indices[c] for c in inp]
arrs = np.array(idxs)[np.newaxis,:]
p = model.predict(arrs)[0]
return chars[np.argmax(p)]
get_next_keras('this is ')
get_next_keras('part of ')
get_next_keras('queens a')
#c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
# for n in range(cs)]
c_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)]
for n in range(cs)]
ys = [np.stack(c[:-2]) for c in c_out_dat]
[xs[n][:cs] for n in range(cs)]
[ys[n][:cs] for n in range(cs)]
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax', name='output')
inp1 = Input(shape=(n_fac,), name='zeros')
hidden = dense_in(inp1)
outs = []
for i in range(cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden], mode='sum')
# every layer now has an output
outs.append(dense_out(hidden))
model = Model([inp1] + [c[0] for c in c_ins], outs)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
model.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12)
def get_nexts(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts(' this is')
get_nexts(' part of')
n_hidden, n_fac, cs, vocab_size
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, return_sequences=True, activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
xs[0].shape
x_rnn=np.stack(np.squeeze(xs), axis=1)
y_rnn=np.atleast_3d(np.stack(ys, axis=1))
x_rnn.shape, y_rnn.shape
model.fit(x_rnn, y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_keras(inp):
idxs = [char_indices[c] for c in inp]
arr = np.array(idxs)[np.newaxis,:]
p = model.predict(arr)[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_keras(' this is')
model=Sequential([
SimpleRNN(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
oh_ys = [to_categorical(o, vocab_size) for o in ys]
oh_y_rnn=np.stack(oh_ys, axis=1)
oh_xs = [to_categorical(o, vocab_size) for o in xs]
oh_x_rnn=np.stack(oh_xs, axis=1)
oh_x_rnn.shape, oh_y_rnn.shape
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_oh(inp):
idxs = np.array([char_indices[c] for c in inp])
arr = to_categorical(idxs, vocab_size)
p = model.predict(arr[np.newaxis,:])[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_oh(' this is')
bs=64
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,8)),
BatchNormalization(),
LSTM(n_hidden, return_sequences=True, stateful=True),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
mx = len(x_rnn)//bs*bs
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.optimizer.lr=1e-4
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
n_input = vocab_size
n_output = vocab_size
def init_wgts(rows, cols):
scale = math.sqrt(2/rows)
return shared(normal(scale=scale, size=(rows, cols)).astype(np.float32))
def init_bias(rows):
return shared(np.zeros(rows, dtype=np.float32))
def wgts_and_bias(n_in, n_out):
return init_wgts(n_in, n_out), init_bias(n_out)
def id_and_bias(n):
return shared(np.eye(n, dtype=np.float32)), init_bias(n)
t_inp = T.matrix('inp')
t_outp = T.matrix('outp')
t_h0 = T.vector('h0')
lr = T.scalar('lr')
all_args = [t_h0, t_inp, t_outp, lr]
W_h = id_and_bias(n_hidden)
W_x = wgts_and_bias(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W_h, W_x, W_y]))
def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):
# Calculate the hidden activations
h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)
# Calculate the output activations
y = nnet.softmax(T.dot(h, W_y) + b_y)
# Return both (the 'Flatten()' is to work around a theano bug)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
X = oh_x_rnn
Y = oh_y_rnn
X.shape, Y.shape
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.3f}".format(err/1000))
err=0.0
f_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)
pred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)
act = np.argmax(X[6], axis=1)
[indices_char[o] for o in act]
[indices_char[o] for o in pred]
def sigmoid(x): return 1/(1+np.exp(-x))
def sigmoid_d(x):
output = sigmoid(x)
return output*(1-output)
def relu(x): return np.maximum(0., x)
def relu_d(x): return (x > 0.)*1.
relu(np.array([3.,-3.])), relu_d(np.array([3.,-3.]))
def dist(a,b): return pow(a-b,2)
def dist_d(a,b): return 2*(a-b)
import pdb
eps = 1e-7
def x_entropy(pred, actual):
return -np.sum(actual * np.log(np.clip(pred, eps, 1-eps)))
def x_entropy_d(pred, actual): return -actual/pred
def softmax(x): return np.exp(x)/np.exp(x).sum()
def softmax_d(x):
sm = softmax(x)
res = np.expand_dims(-sm,-1)*sm
res[np.diag_indices_from(res)] = sm*(1-sm)
return res
test_preds = np.array([0.2,0.7,0.1])
test_actuals = np.array([0.,1.,0.])
nnet.categorical_crossentropy(test_preds, test_actuals).eval()
x_entropy(test_preds, test_actuals)
test_inp = T.dvector()
test_out = nnet.categorical_crossentropy(test_inp, test_actuals)
test_grad = theano.function([test_inp], T.grad(test_out, test_inp))
test_grad(test_preds)
x_entropy_d(test_preds, test_actuals)
pre_pred = random(oh_x_rnn[0][0].shape)
preds = softmax(pre_pred)
actual = oh_x_rnn[0][0]
np.allclose(softmax_d(pre_pred).dot(x_entropy_d(preds,actual)), preds-actual)
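# The check above verifies the standard identity: for softmax followed by cross-entropy,
# the combined gradient with respect to the pre-softmax activations is simply preds - actual.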
softmax(test_preds)
nnet.softmax(test_preds).eval()
test_out = T.flatten(nnet.softmax(test_inp))
test_grad = theano.function([test_inp], theano.gradient.jacobian(test_out, test_inp))
test_grad(test_preds)
softmax_d(test_preds)
act=relu
act_d = relu_d
loss=x_entropy
loss_d=x_entropy_d
def scan(fn, start, seq):
res = []
prev = start
for s in seq:
app = fn(prev, s)
res.append(app)
prev = app
return res
scan(lambda prev,curr: prev+curr, 0, range(5))
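# e.g. the call above returns [0, 1, 3, 6, 10] -- the running (cumulative) sum of range(5)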
inp = oh_x_rnn
outp = oh_y_rnn
n_input = vocab_size
n_output = vocab_size
inp.shape, outp.shape
def one_char(prev, item):
# Previous state
tot_loss, pre_hidden, pre_pred, hidden, ypred = prev
# Current inputs and output
x, y = item
pre_hidden = np.dot(x,w_x) + np.dot(hidden,w_h)
hidden = act(pre_hidden)
pre_pred = np.dot(hidden,w_y)
ypred = softmax(pre_pred)
return (
# Keep track of loss so we can report it
tot_loss+loss(ypred, y),
# Used in backprop
pre_hidden, pre_pred,
# Used in next iteration
hidden,
# To provide predictions
ypred)
def get_chars(n): return zip(inp[n], outp[n])
def one_fwd(n): return scan(one_char, (0,0,0,np.zeros(n_hidden),0), get_chars(n))
# "Columnify" a vector
def col(x): return x[:,newaxis]
def one_bkwd(args, n):
global w_x,w_y,w_h
i=inp[n] # 8x86
o=outp[n] # 8x86
d_pre_hidden = np.zeros(n_hidden) # 256
for p in reversed(range(len(i))):
totloss, pre_hidden, pre_pred, hidden, ypred = args[p]
x=i[p] # 86
y=o[p] # 86
d_pre_pred = softmax_d(pre_pred).dot(loss_d(ypred,y)) # 86
d_pre_hidden = (np.dot(d_pre_hidden, w_h.T)
+ np.dot(d_pre_pred,w_y.T)) * act_d(pre_hidden) # 256
# d(loss)/d(w_y) = d(loss)/d(pre_pred) * d(pre_pred)/d(w_y)
w_y -= col(hidden) * d_pre_pred * alpha
# d(loss)/d(w_h) = d(loss)/d(pre_hidden[p-1]) * d(pre_hidden[p-1])/d(w_h)
if (p>0): w_h -= col(args[p-1][3]) * d_pre_hidden * alpha
w_x -= col(x)*d_pre_hidden * alpha
return d_pre_hidden
scale=math.sqrt(2./n_input)
w_x = normal(scale=scale, size=(n_input,n_hidden))
w_y = normal(scale=scale, size=(n_hidden, n_output))
w_h = np.eye(n_hidden, dtype=np.float32)
overallError=0
alpha=0.0001
for n in range(10000):
res = one_fwd(n)
overallError+=res[-1][0]
deriv = one_bkwd(res, n)
if(n % 1000 == 999):
print ("Error:{:.4f}; Gradient:{:.5f}".format(
overallError/1000, np.linalg.norm(deriv)))
overallError=0
model=Sequential([
GRU(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)
get_nexts_oh(' this is')
W_h = id_and_bias(n_hidden)
W_x = init_wgts(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
rW_h = init_wgts(n_hidden, n_hidden)
rW_x = wgts_and_bias(n_input, n_hidden)
uW_h = init_wgts(n_hidden, n_hidden)
uW_x = wgts_and_bias(n_input, n_hidden)
w_all = list(chain.from_iterable([W_h, W_y, uW_x, rW_x]))
w_all.extend([W_x, uW_h, rW_h])
def gate(x, h, W_h, W_x, b_x):
return nnet.sigmoid(T.dot(x, W_x) + b_x + T.dot(h, W_h))
def step(x, h, W_h, b_h, W_y, b_y, uW_x, ub_x, rW_x, rb_x, W_x, uW_h, rW_h):
reset = gate(x, h, rW_h, rW_x, rb_x)
update = gate(x, h, uW_h, uW_x, ub_x)
h_new = gate(x, h * reset, W_h, W_x, b_h)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.1
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
l_rate *= 0.95
print ("Error:{:.2f}".format(err/1000))
err=0.0
W = (shared(np.concatenate([np.eye(n_hidden), normal(size=(n_input, n_hidden))])
.astype(np.float32)), init_bias(n_hidden))
rW = wgts_and_bias(n_input+n_hidden, n_hidden)
uW = wgts_and_bias(n_input+n_hidden, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W, W_y, uW, rW]))
def gate(m, W, b): return nnet.sigmoid(T.dot(m, W) + b)
def step(x, h, W, b, W_y, b_y, uW, ub, rW, rb):
m = T.concatenate([h, x])
reset = gate(m, rW, rb)
update = gate(m, uW, ub)
m = T.concatenate([h*reset, x])
h_new = gate(m, W, b)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.2f}".format(err/1000))
err=0.0
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
Step2: Map from chars to indices and back again
Step3: idx will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
Step4: 3 char model
Step5: Our inputs
Step6: Our output
Step7: The first 4 inputs and outputs
Step8: The number of latent factors to create (i.e. the size of the embedding matrix)
Step9: Create inputs and embedding outputs for each of our 3 character inputs
Step10: Create and train model
Step11: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
Step12: Our first hidden activation is simply this function applied to the result of the embedding of the first character.
Step13: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
Step14: Our second and third hidden activations sum up the previous hidden state (after applying dense_hidden) to the new input state.
Step15: This is the 'blue arrow' from our diagram - the layer operation from hidden to output.
Step16: The third hidden state is the input to our output layer.
Step17: Test model
Step18: Our first RNN!
Step19: For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
Step20: Then create a list of the next character in each of these series. These will be the labels for our model.
Step21: So each column below is one series of 8 characters from the text.
Step22: ...and this is the next character after each sequence.
Step23: Create and train model
Step24: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Step25: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
Step26: Putting the final hidden state through dense_out() gives us our output.
Step27: So now we can create our model.
Step28: Test model
Step29: Our first RNN with keras!
Step30: This is nearly exactly equivalent to the RNN we built ourselves in the previous section.
Step31: Returning sequences
Step32: Reading down each column shows one set of inputs and outputs.
Step33: Create and train model
Step34: We're going to pass a vector of all zeros as our starting point - here's our input layers for that
Step35: Test model
Step36: Sequence model with keras
Step37: To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.
Step38: One-hot sequence model with keras
Step39: Stateful model with keras
Step40: A stateful model is easy to create (just add "stateful=True") but harder to train. We had to add batchnorm and use LSTM to get reasonable results.
Step41: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are an even multiple of the batch size.
Step42: Theano RNN
Step43: Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using glorot initialization).
Step44: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton.)
Step45: Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation
Step46: Now we're ready to create our initial weight matrices.
Step47: Theano handles looping by using the GPU scan operation. We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character
Step48: Now we can provide everything necessary for the scan operation, so we can set that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
Step49: We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!
Step50: We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which applies the standard SGD update rule to every weight.
Step51: We're finally ready to compile the function!
Step52: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
Step53: Pure python RNN!
Step54: We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement
Step55: ...for instance, scan on + is the cumulative sum.
Step56: Set up training
Step57: Here's the function to do a single forward pass of an RNN, for a single character.
Step58: We use scan to apply the above to a whole sequence of characters.
Step59: Now we can define the backward step. We use a loop to go through every element of the sequence. The derivatives are applying the chain rule to each step, and accumulating the gradients across the sequence.
Step60: Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.
Step61: Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.
Step62: Keras GRU
Step63: Theano GRU
Step64: Here's the definition of a gate - it's just a sigmoid applied to the addition of the dot products of the input vectors.
Step65: Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate (the gate equations are sketched at the end of this list).
Step66: Everything from here on is identical to our simple RNN in theano.
Step67: Combined weights
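A short sketch of the GRU update referenced in Step65, written to match the theano step above (which uses a sigmoid for the candidate state as well):
    reset  = sigmoid(dot(x, rW_x) + dot(h, rW_h) + rb)
    update = sigmoid(dot(x, uW_x) + dot(h, uW_h) + ub)
    h_new  = sigmoid(dot(x, W_x) + dot(reset*h, W_h) + b_h)
    h      = update*h + (1 - update)*h_new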
|
15,698 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
before = np.loadtxt('Data/questionnaire_before.txt',delimiter=',')
after = np.loadtxt('Data/questionnaire_after.txt',delimiter=',')
form = np.loadtxt('Data/questionnaire_form.txt',delimiter=',')
questions = np.array(range(1,15))
form_quest = np.array([1,5,10,9,13,6,3,8,4,7,2,12,11,14])
before_m = before.mean(axis=0)
after_m = after.mean(axis=0)
form_m = form.mean(axis=0)
before
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111)
ax.grid(color='#E6E6E6',linewidth=2,linestyle='-',)
ax.plot(before_m,questions,'o-',color='DeepSkyBlue',label='Before',lw=4,ms=10,mew=0)
ax.plot(after_m,questions-0.2,'o-',color='DarkMagenta',label='After',lw=4,ms=10,mew=0)
[line.set_zorder(3) for line in ax.lines]
ax.tick_params(axis='both',color='#E6E6E6')
ax.spines['top'].set_color('#E6E6E6')
ax.spines['bottom'].set_color('#E6E6E6')
ax.spines['left'].set_color('#E6E6E6')
ax.spines['right'].set_color('#E6E6E6')
ax.set_xlim(0,6)
ax.set_ylim(15,0)
ax.set_ylabel('Statement Number')
ax.legend(loc=2,frameon=False)
ax.set_xticklabels(['','Completely agree','','Neither','','Completely Disagree',''])
before_responses = np.zeros((5,14))
after_responses = np.zeros((5,14))
form_responses = np.zeros((5,14))
for ii in range(14):
for jj in range(5):
before_responses[jj,ii] = sum(before[:,ii] == jj+1)
after_responses[jj,ii] = sum(after[:,ii] == jj+1)
form_responses[jj,ii] = sum(form[:,ii] == jj+1)
fig, ax = plt.subplots(1, 1, figsize=(10,5))
#Add nice light grey boundaries and grid:
ax.tick_params(axis='both', color='#E6E6E6')
ax.spines['top'].set_color('#E6E6E6')
ax.spines['bottom'].set_color('#E6E6E6')
ax.spines['left'].set_color('#E6E6E6')
ax.spines['right'].set_color('#E6E6E6')
ax.vlines(np.arange(0.5,15,1), 0.5,5.5, lw=2, colors='#E6E6E6')
ax.hlines(np.arange(0.5,6,1), 0.5,14.5, lw=2, colors='#E6E6E6')
#Plot the heatmap data
p = ax.imshow(before_responses, origin='lower', cmap='Blues',
extent=[0.5, 14.5, 0.5, 5.5], interpolation='nearest')
cb = plt.colorbar(p, aspect=6.5, fraction=0.05)
cb.set_label('Response Counts')
ax.set_yticklabels(['', 'Completely agree', '', 'Neither', '',
'Completely Disagree'])
ax.set_xlabel('Statement')
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.ticker import MultipleLocator
fig, axs = plt.subplots(3, 1, figsize=(10,15))
#Add nice light grey boundaries and grid:
for ax in axs.reshape(-1):
ax.tick_params(axis='both', color='#E6E6E6')
ax.spines['top'].set_color('#E6E6E6')
ax.spines['bottom'].set_color('#E6E6E6')
ax.spines['left'].set_color('#E6E6E6')
ax.spines['right'].set_color('#E6E6E6')
ax.vlines(np.arange(0.5, 15, 1), 0.5, 5.5, lw=2, colors='#E6E6E6')
ax.hlines(np.arange(0.5, 6, 1), 0.5, 14.5, lw=2, colors='#E6E6E6')
#Plot the heatmap data
p = axs[0].imshow(before_responses, origin='lower', cmap='Blues',
extent=[0.5, 14.5, 0.5, 5.5], interpolation='nearest')
div1 = make_axes_locatable(axs[0])
cbloc = div1.append_axes("right", size="5%", pad=0.05)
cb1 = plt.colorbar(p, cax=cbloc, ticks=MultipleLocator(1.0))
axs[0].set_title('Before')
p = axs[1].imshow(after_responses, origin='lower', cmap='RdPu',
extent=[0.5, 14.5, 0.5, 5.5], interpolation='nearest')
div2 = make_axes_locatable(axs[1])
cbloc = div2.append_axes("right", size="5%", pad=0.05)
cb2 = plt.colorbar(p, cax=cbloc, ticks=MultipleLocator(1.0))
axs[1].set_title('After')
p = axs[2].imshow(form_responses, origin='lower', cmap='Greens',
extent=[0.5, 14.5, 0.5, 5.5], interpolation='nearest')
div3 = make_axes_locatable(axs[2])
cbloc = div3.append_axes("right", size="5%", pad=0.05)
cb3 = plt.colorbar(p, cax=cbloc, ticks=MultipleLocator(1.0))
axs[2].set_title('Form')
for ax in axs:
ax.set_yticklabels(['', 'Completely agree', '', 'Neither', '',
'Completely Disagree'])
ax.set_xlabel('Statement')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see above, the data as provided were arranged with rows for each question, and the responses of each individual as the columns.
Step2: There are several issues with the above plot. Firstly, as discussed above, we have used the mean of the responses. While this <i>does</i> give a rough feeling of the overall tone, it loses information. We could add error bars to the points to indicate the spread in the responses (a short sketch follows this list).
Step3: Which, I hope you agree, is a very attractive and informative plot.
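Referring back to the error-bar suggestion in Step2, a minimal sketch (assuming the before_m/after_m means and questions array from the code above, and using the standard deviation across respondents as the spread):
before_s = before.std(axis=0)
after_s = after.std(axis=0)
fig, ax = plt.subplots(figsize=(8, 6))
ax.errorbar(before_m, questions, xerr=before_s, fmt='o', color='DeepSkyBlue', label='Before')
ax.errorbar(after_m, questions - 0.2, xerr=after_s, fmt='o', color='DarkMagenta', label='After')
ax.set_ylim(15, 0)
ax.legend(loc=2, frameon=False)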
|
15,699 | <ASSISTANT_TASK:>
Python Code:
success = True # We'll use this to keep track of the various tests
failures = []
try:
import numpy as np
import scipy
print "numpy and scipy imported -- success!"
except:
success = False
msg = "* There was a problem importing numpy or scipy. You will definitely need these!"
print msg
failures.append(msg)
try:
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
except:
success = False
msg = "* There was a problem importing matplotlib. You will definitely need this"
failures.append(msg)
plt.plot([1, 2, 3], [1, 4, 9], "ro--")
try:
import pandas, PyQt4, enaml
print "pandas, PyQt4, and enaml imported -- success!"
except:
success = False
msg = "* There was a problem importing pandas, pyqt, or enaml. You will need these for Days 2 and 3."
print msg
failures.append(msg)
try:
import h5py
from mpl_toolkits.basemap import Basemap
print "h5py and Basemap imported -- success!"
except:
success = False
msg = "* There was a problem with h5py and/or Basemap. You will need these for Day 2."
failures.append(msg)
# Basemap Test
try:
f = plt.figure(1, figsize=(14.0, 10.0))
f.suptitle("Basemap - First Map")
f.text(0.05, 0.95, "Mollewide")
f.subplots_adjust(left=0.05, right=0.95, top=0.80, bottom=0.05, wspace=0.2, hspace=0.4)
f.add_subplot(1, 1, 1)
b = Basemap(projection="moll", lon_0=0, resolution='c')
b.drawcoastlines()
b.drawparallels(np.arange( -90.0, 90.0, 20.0))
b.drawmeridians(np.arange(-180.0, 181.0, 20.0))
except:
success = False
msg = "* There was a problem creating a Basemap plot. You will need this for Day 2."
failures.append(msg)
if success:
print Congratulations! Your python environment seems to be working properly.
We look forward to seeing you at the Boot Camp!
elif failures:
print The following problems occurred:
%s.
Please contact us and we will try to help you fix things. % ("\n".join(failures))
else:
print There was a problem with your python environment -- please contact us
and we will try to help you figure out what happened.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You should see a simple plot below the next cell.
Step5: There should be a Basemap plot displayed below this cell.
|