Unnamed: 0 (int64, 0-15.9k) | cleaned_code (stringlengths 67-124k, ⌀ = null) | cleaned_prompt (stringlengths 168-30.3k, ⌀ = null) |
---|---|---|
15,400 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
# mnist.train.images[0]
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_shape = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='inputs')
targets_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_,encoding_dim,activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded,image_shape) # linear activation
# Sigmoid output from logits
decoded = tf.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
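# For logits x and labels z this computes, element-wise,
# max(x, 0) - x*z + log(1 + exp(-|x|)),
# a numerically stable form of -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x))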
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
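# `reconstructed` holds the 784-dimensional sigmoid outputs; `compressed` holds the
# 32-dimensional codes produced by the hidden (encoding) layer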
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
|
15,401 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
from os import path
from sklearn.ensemble import RandomForestClassifier
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
import sklearn
# Edit path if need be (shouldn't need to b/c we all have the same folder structure)
CSV_PATH_1 = '../Videos/all_data'
CSV_PATH_2 = '../Videos2/all_data2'
FILE_EXTENSION = '_all.csv'
GENRES = ['country', 'edm', 'pop', 'rap', 'rock']
# Containers for the data frames
genre_dfs = {}
all_genres = None
# Read in the 5 genre's of CV's
for genre in GENRES:
genre_csv_path_1 = path.join(CSV_PATH_1, genre) + FILE_EXTENSION
genre_csv_path_2 = path.join(CSV_PATH_2, genre) + FILE_EXTENSION
df_1 = pd.read_csv(genre_csv_path_1)
df_2 = pd.read_csv(genre_csv_path_2)
df_1 = df_1.drop('Unnamed: 0',1)
df_2 = df_2.drop('Unnamed: 0',1)
df_combined = pd.concat([df_1,df_2],ignore_index=True)
genre_dfs[genre] = df_combined
all_genres = pd.concat(genre_dfs.values())
all_genres.head()
# genre_dfs is now a dictionary that contains the 5 different data frames
# all_genres is a dataframe that contains all of the data
def genre_to_ordinal(genre_in):
if(genre_in == "country"):
return 0
elif(genre_in == "pop"):
return 1
elif(genre_in == "rock"):
return 2
elif(genre_in == "edm"):
return 3
elif(genre_in == "rap"):
return 4
else:
return genre_in
all_genres['genre_ordinal'] = all_genres.genre.apply(genre_to_ordinal)
# Adding is_country flag
def is_country(genre_in):
if(genre_in == "country"):
return 1
else:
return 0
all_genres['is_country'] = all_genres.genre.apply(is_country)
# Adding is_country flag
def is_rock(genre_in):
if(genre_in == "rock"):
return 1
else:
return 0
all_genres['is_rock'] = all_genres.genre.apply(is_rock)
# Adding is_edm flag
def is_edm(genre_in):
if(genre_in == "edm"):
return 1
else:
return 0
all_genres['is_edm'] = all_genres.genre.apply(is_edm)
# Adding is_rap flag
def is_rap(genre_in):
if(genre_in == "rap"):
return 1
else:
return 0
all_genres['is_rap'] = all_genres.genre.apply(is_rap)
# Adding is_country flag
def is_pop(genre_in):
if(genre_in == "pop"):
return 1
else:
return 0
all_genres['is_pop'] = all_genres.genre.apply(is_pop)
# Subset all_genres to group by individual genres
country_records = all_genres[all_genres["genre"] == "country"]
rock_records = all_genres[all_genres["genre"] == "rock"]
pop_records = all_genres[all_genres["genre"] == "pop"]
edm_records = all_genres[all_genres["genre"] == "edm"]
rap_records = all_genres[all_genres["genre"] == "rap"]
# From the subsets above, create train and test sets from each
country_train = country_records.head(len(country_records) / 2)
country_test = country_records.tail(len(country_records) / 2)
rock_train = rock_records.head(len(rock_records) / 2)
rock_test = rock_records.tail(len(rock_records) / 2)
pop_train = pop_records.head(len(pop_records) / 2)
pop_test = pop_records.tail(len(pop_records) / 2)
edm_train = edm_records.head(len(edm_records) / 2)
edm_test = edm_records.tail(len(edm_records) / 2)
rap_train = rap_records.head(len(rap_records) / 2)
rap_test = rap_records.tail(len(rap_records) / 2)
# Create big training and big test set for analysis
training_set = pd.concat([country_train,rock_train,pop_train,edm_train,rap_train])
test_set = pd.concat([country_test,rock_test,pop_test,edm_test,rap_test])
training_set = training_set.fillna(0)
test_set = test_set.fillna(0)
print "Training Records:\t" , len(training_set)
print "Test Records:\t\t" , len(test_set)
# training_set.head()
# Predicting based solely on non-color features, using RF
clf = RandomForestClassifier(n_estimators=11)
meta_data_features = ['rating', 'likes','dislikes','length','viewcount']
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[meta_data_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[meta_data_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[meta_data_features]),rownames=["Actual"], colnames=["Predicted"])
def gen_new_headers(old_headers):
headers = ['colors_' + str(x+1) + '_' for x in range(10)]
h = []
for x in headers:
h.append(x + 'red')
h.append(x + 'blue')
h.append(x + 'green')
return old_headers + h + ['genre']
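# e.g. gen_new_headers([]) -> ['colors_1_red', 'colors_1_blue', 'colors_1_green', ..., 'colors_10_green', 'genre'],
# so gen_new_headers([])[:-1] used below is just the 30 colour-channel columns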
clf = RandomForestClassifier(n_estimators=11)
color_features = gen_new_headers([])[:-1]
# Predicting based solely on colors
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[color_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[color_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[color_features]),rownames=["Actual"], colnames=["Predicted"])
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
print all_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['is_pop'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['is_pop'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.is_pop, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['is_rap'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['is_rap'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.is_rap, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
def multi_RF_averages(is_genre,num_iterations):
clf = RandomForestClassifier(n_estimators=11)
loop_indices = range(0,num_iterations)
cumsum = 0
for i in loop_indices:
y, _ = pd.factorize(training_set[is_genre])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set[is_genre])
cumsum = cumsum + clf.score(test_set[all_features],z)
print "Average Score for",len(loop_indices),is_genre,"iterations:", cumsum/len(loop_indices)
return clf
pop_class = multi_RF_averages("is_pop",50)
rap_class = multi_RF_averages("is_rap",50)
rock_class = multi_RF_averages("is_rock",50)
edm_class = multi_RF_averages("is_edm",50)
country_class = multi_RF_averages("is_country",50)
from sklearn.externals import joblib
# only use these to generate pickle files for website
# joblib.dump(pop_class, 'classifiers/pop_class.pkl')
# joblib.dump(rap_class, 'classifiers/rap_class.pkl')
# joblib.dump(rock_class, 'classifiers/rock_class.pkl')
# joblib.dump(edm_class, 'classifiers/edm_class.pkl')
# joblib.dump(country_class, 'classifiers/country_class.pkl')
# Removing EDM for better analysis - makes is_pop and is_rap much more accurate
training_set = pd.concat([country_train,rock_train,pop_train,rap_train])
test_set = pd.concat([country_test,rock_test,pop_test,rap_test])
multi_RF_averages("is_pop",50)
multi_RF_averages("is_rap",50)
multi_RF_averages("is_rock",50)
multi_RF_averages("is_edm",50)
multi_RF_averages("is_country",50)
training_set = pd.concat([country_train,rock_train,edm_train,rap_train,pop_train])
test_set = pd.concat([rock_test])
multi_RF_averages("is_rock",50)
test_set = pd.concat([rap_test])
multi_RF_averages("is_rap",50)
test_set = pd.concat([country_test])
multi_RF_averages("is_country",50)
test_set = pd.concat([pop_test])
multi_RF_averages("is_pop",50)
test_set = pd.concat([edm_test])
multi_RF_averages("is_edm",50)
test_set = pd.concat([edm_test,rock_test])
multi_RF_averages("is_edm",50)
multi_RF_averages("is_rock",50)
model = ExtraTreesClassifier()
training_set = pd.concat([country_train,pop_train,rap_train,rock_train,edm_train])
y, _ = pd.factorize(training_set['is_rock'])
model.fit(training_set[all_features], y)
# display the relative importance of each attribute
print model.feature_importances_
df = pd.DataFrame()
df['index'] = all_features
y, _ = pd.factorize(training_set['is_rap'])
model.fit(training_set[all_features], y)
df['rap'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_rock'])
model.fit(training_set[all_features], y)
df['rock'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_country'])
model.fit(training_set[all_features], y)
df['country'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_edm'])
model.fit(training_set[all_features], y)
df['edm'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_pop'])
model.fit(training_set[all_features], y)
df['pop'] = model.feature_importances_
df = df.set_index('index')
df = df.transpose()
df.head()
lol = df.values.tolist()
cols = []
for x in df.columns:
cols.append(x)
import plotly.offline as py # a little wordplay
import plotly.graph_objs as go
py.init_notebook_mode()
title = 'Feature Importance By Genre'
labels = [ ]
mode_size = [8, 8, 12, 8]
line_size = [2, 2, 4, 2]
x_data = cols
y_data = df.values.tolist()
traces = []
for i in range(0, 4):
traces.append(go.Scatter(
x=x_data,
y=y_data[i],
mode='lines',
connectgaps=True,
))
layout = go.Layout(
yaxis=dict(
showgrid=False,
zeroline=False,
showline=False,
showticklabels=False,
),
autosize=False,
margin=dict(
autoexpand=True,
l=100,
r=20,
t=110,
),
showlegend=False,
)
annotations = []
# Adding labels
for y_trace, label in zip(y_data, labels):
# labeling the left_side of the plot
annotations.append(dict(xref='paper', x=0.05, y=y_trace[0],
xanchor='right', yanchor='middle',
text=label + ' {}%'.format(y_trace[0]),
font=dict(family='Arial',
size=16,
),
showarrow=False))
# labeling the right_side of the plot
annotations.append(dict(xref='paper', x=0.95, y=y_trace[11],
xanchor='left', yanchor='middle',
text='{}%'.format(y_trace[11]),
font=dict(family='Arial',
size=16,
),
showarrow=False))
# Title
annotations.append(dict(xref='paper', yref='paper', x=0.0, y=1.05,
xanchor='left', yanchor='bottom',
text='Feature Importance By Genre',
font=dict(family='Arial',
size=30,
),
showarrow=False))
# Source
# annotations.append(dict(xref='paper', yref='paper', x=0.5, y=-0.1,
# xanchor='center', yanchor='top',
# text='Source: PewResearch Center & ' +
# 'Storytelling with data',
# font=dict(family='Arial',
# size=12,
# ),
# showarrow=False))
layout['annotations'] = annotations
fig = go.Figure(data=traces, layout=layout)
py.iplot(fig, filename='news-source')
import seaborn as sns
sns.set_style("whitegrid")
ax = sns.pointplot(x="likes", y="rating",data=df)
sns.plt.show()
import seaborn as sns
sns.set_style("whitegrid")
tips = sns.load_dataset("tips")
print tips
ax = sns.pointplot(x="time", y="total_bill", data=tips)
sns.plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ordinal Genres
Step2: We add in some boolean genre classifiers to make our analysis more fine-grained. Rather than saying "we predict this video is country with 50% confidence", we could say "we predict this video is not edm with 90% confidence" and so on.
Step3: Test and Train Sets
Step4: Generating Random Forest - Viewer Statistics
Step5: As shown above, this method yields relatively poor results. This is because there are no distinct clusters being created by our random forest, and simple viewer statistics tell us nothing about what kind of video we're watching. However, we see that country, rap and pop are initially somewhat distinct (the diagonal is the highest value), and rock and edm are getting mistaken for one another. Let's see if we can't make something of this.
Step6: This actually yields worse results than just the viewer statistics, because the color of a video by itself does not determine the genre. If rappers only had red in their videos and rockers only had black this might be somewhat accurate, but that's just not the case. But, what if we pair these findings with our initial viewer statistics?
Step7: Singling Out Pop and Rap
Step8: What we're seeing above is a confusion matrix that, based on our training data, predicts whether or not a video in the test set is a pop video. In the "Predicted" columns, 0 means the model predicts it's not a pop video and 1 means it predicts it is. Likewise for the "Actual" rows: 0 shows that the video actually wasn't a pop video, and 1 shows that it was.
Step9: The following creates several files that describe our classifiers. Our website will later load these pickled classifiers.
Step10: We ran the above test with all genres, and as shown in the above analysis, our country and edm classifiers typically have very low accuracy. We've seen above that edm and rock videos are getting mixed up with one another, so we assume that something is characteristic of these 2 genres that isn't shared by the others. We take the edm values out of our training and test datasets, hoping to improve accuracy.
Step11: So, what does this tell us? Based on our training data, we have the best chance of accurately classifying something as pop or not pop (under these conditions).
Step12: Rock and EDM have surprisingly distinct classifiers. We should dive into the videos and see what this means.
Step13: Selecting Most Valuable Features per Genre - Rock
|
15,402 | <ASSISTANT_TASK:>
Python Code:
features.applymap(np.isreal).apply(pd.value_counts)
features.apply(lambda x: stats.shapiro(x))
numeric_feats = features.dtypes[features.dtypes != "object"].index
skewness = features[numeric_feats].apply(lambda x: skew(x.dropna())) #compute skewness
print skewness
def show_qqplot(x, data):
fig = plt.figure()
ax = fig.add_subplot(111)
stats.probplot(data[x], dist="norm", plot=pylab)
ax.set_title(x)
pylab.show()
for name in features.columns:
show_qqplot(name, features)
skewed_feats = skewness[skewness < -0.75]
skewed_feats = skewed_feats.index
features[skewed_feats] = np.exp2(features[skewed_feats])
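# np.exp2 (i.e. 2**x) is the inverse of a log2 transform: exponentiating stretches the compressed
# upper tail of the left-skewed (negative-skew) features selected above, pushing their skewness toward 0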
features.head()
numeric_feats = features.dtypes[features.dtypes != "object"].index
print features[numeric_feats].apply(lambda x: skew(x.dropna()))
show_qqplot("lp2", features)
y = features.cra
X_train = features.drop(["cra"], axis=1)
from sklearn.linear_model import Ridge, RidgeCV, ElasticNet, Lasso, LassoCV, LassoLarsCV
from sklearn.model_selection import cross_val_score
def vector_norm(w):
return np.sqrt(np.sum(w**2))
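# Euclidean (L2) norm of the coefficient vector; equivalent to np.linalg.norm(w)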
def rmse_cv(model):
rmse= np.sqrt(-cross_val_score(model, X_train, y, scoring="neg_mean_squared_error", cv = 10))
return(rmse)
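# "neg_mean_squared_error" returns the negated MSE (sklearn scorers treat higher as better),
# hence the leading minus sign before taking the square root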
def coefficients_graphic(model, title):
coef = pd.Series(model.coef_, index = X_train.columns)
matplotlib.rcParams['figure.figsize'] = (8.0, 10.0)
coef.plot(kind = "barh")
plt.title(title)
def residuals_graph(model):
matplotlib.rcParams['figure.figsize'] = (6.0, 6.0)
preds = pd.DataFrame({"preds":model.predict(X_train), "true":y})
preds["residuals"] = preds["true"] - preds["preds"]
preds.plot(x = "preds", y = "residuals",kind = "scatter")
def cv_rmse_graph(cv_rmse, alpha_levels):
cv_rmse = pd.Series(cv_rmse, index = alpha_levels)
cv_rmse.plot(title = "Validation - Just Do It")
plt.xlabel("alpha")
plt.ylabel("rmse")
clf = Ridge(alpha=0)
clf.fit(X_train, y)
vector_norm(clf.coef_)
coefficients_graphic(clf, "Coefficients in the Ridge Model Not Regularized")
residuals_graph(clf)
alphas = [0, 0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 40, 50, 75]
cv_rmse = [rmse_cv(Ridge(alpha = level)).mean()
for level in alphas]
cv_rmse_graph(cv_rmse, alphas)
clf = Ridge(alpha=40)
clf.fit(X_train, y)
vector_norm(clf.coef_)
coefficients_graphic(clf, "Coefficients in Regularized Ridge Model")
residuals_graph(clf)
model_lasso = Lasso(alpha = 0).fit(X_train, y)
coef_lasso = pd.Series(model_lasso.coef_, index = X_train.columns)
print("Lasso picked " + str(sum(coef_lasso != 0)) + " variables and eliminated the other " + str(sum(coef_lasso == 0)) + " variables")
coefficients_graphic(model_lasso, "Coefficients in the Lasso Model Not Regularized")
residuals_graph(model_lasso)
alphas = [0, 0.001, 0.01, 0.02, 0.03, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1]
cv_rmse = [rmse_cv(Lasso(alpha = level)).mean()
for level in alphas]
cv_rmse_graph(cv_rmse, alphas)
model_lasso = Lasso(alpha = 0.06).fit(X_train, y)
coef_lasso = pd.Series(model_lasso.coef_, index = X_train.columns)
print("Lasso picked " + str(sum(coef_lasso != 0)) + " variables and eliminated the other " + str(sum(coef_lasso == 0)) + " variables")
print coef_lasso
coefficients_graphic(model_lasso, "Coefficients in Regularized Lasso Model")
residuals_graph(model_lasso)
from sklearn.neighbors import KNeighborsRegressor
max_n_neighbors = int(y.shape[0] - 0.1*y.shape[0])
neighbors = range(1,max_n_neighbors)
cv_rmse = [rmse_cv(KNeighborsRegressor(n_neighbors = level)).mean()
for level in neighbors]
cv_rmse_graph(cv_rmse, neighbors)
best_knn_fit = KNeighborsRegressor(n_neighbors = 22)
residuals_graph(model_lasso)
teste = pd.read_csv("graduados_teste.csv")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Checking for skew in data
Step2: Skewness
Step3: Q-Q Plot
Step4: Through our analysis, we conclude that the only feature that needs to be transformed is LP2. We conclude this from its skewness value and its Q-Q plot.
Step5: 2. Ridge and LASSO Regression
Step6: Ridge Regression
Step7: Using Regularization
Step8: LASSO Regression
Step9: Using regularization
Step10: 3. KNN
Step11: Analysis
|
15,403 | <ASSISTANT_TASK:>
Python Code:
# import python packages here...
import mpmath
mpmath.plot([mpmath.cos, mpmath.sin], [-4, 4])
mpmath.plot(lambda x: mpmath.exp(x) * mpmath.li(x), [1, 4])
mpmath.cplot(lambda z: z, [-10, 10], [-10, 10])
mpmath.cplot(lambda z: z, [-10, 10], [-10, 10], points=100000)
mpmath.cplot(mpmath.gamma, [-10, 10], [-10, 10], points=100000)
f = lambda z: (z * 2 - 1)*(z + 2 - mpmath.j)**2 / (z * 2 + 2 - 2 * mpmath.j)
mpmath.cplot(f, [-5, 5], [-3, 3], points=100000)
f = lambda x, y: mpmath.sin(x+y) * mpmath.cos(y)
mpmath.splot(f) #, [-mpmath.pi, mpmath.pi], [-mpmath.pi, mpmath.pi])
r, R = 1, 2.5
f = lambda u, v: [r*mpmath.cos(u), (R+r*mpmath.sin(u))*mpmath.cos(v), (R+r*mpmath.sin(u))*mpmath.sin(v)]
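# Parametric torus: tube radius r = 1, distance R = 2.5 from the centre of the tube to the centre of the torus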
mpmath.splot(f) #, [0, 2*mpmath.pi], [0, 2*mpmath.pi])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Arbitrary-precision floating-point arithmetic
Step2: Complex function plots
Step3: Use the points argument to increase the resolution.
Step4: 3D surface plots
|
15,404 | <ASSISTANT_TASK:>
Python Code:
# Set up paths/ os
import os
import sys
this_path=os.getcwd()
os.chdir("../data")
sys.path.insert(0, this_path)
# Load datasets
import pandas as pd
df = pd.read_csv("MedHelp-posts.csv",index_col=0)
df.head(2)
df_users = pd.read_csv("MedHelp-users.csv",index_col=0)
df_users.head(2)
# 1 classify users as professionals and general public:
df_users['is expert']=0
for user_id in df_users.index:
user_description=df_users.loc[user_id,['user description']].values
if ( "," in user_description[0]):
print(user_description[0])
df_users.loc[user_id,['is expert']]=1
# Save database:
df_users.to_csv("MedHelp-users-class.csv")
is_expert=df_users['is expert'] == 1
is_expert.value_counts()
# Select user_id from DB where is_professional = 1
experts_ids = df_users[df_users['is expert'] == 1 ].index.values
experts_ids
non_experts_ids = df_users[df_users['is expert'] == 0 ].index.values
# Select * where user_id in experts_ids
#df_users.loc[df_users.index.isin(experts_ids)]
df_experts=df.loc[df['user id'].isin(experts_ids)]
print('Total of posts from expert users {}'.format(len(df_experts)))
print('Total of posts {}'.format(len(df)))
print('Ratio {}'.format(len(df_experts)/len(df)))
del df_experts
# Tokenize data
import nltk
tokenizer = nltk.RegexpTokenizer(r'\w+')
# Get the length of tokens into a columns
df_text = df['text'].str.lower()
df_token = df_text.apply(tokenizer.tokenize)
df['token length'] = df_token.apply(len)
# Get list of tokens from text in first article:
#for text in df_text.values:
# ttext = tokenizer.tokenize(text.lower())
# lenght_text=len(ttext)
# break
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.mlab as mlab
from matplotlib import gridspec
from scipy.stats import norm
import numpy as np
from scipy.optimize import curve_fit
from lognormal import lognormal, lognormal_stats,truncated_normal
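# `lognormal` here is a local helper module shipped with the notebook (not a PyPI package).
# Judging from how the functions are used with curve_fit below, they presumably implement
# (an assumption, not verified against the module):
# lognormal(x, mu, sigma): pdf 1/(x*sigma*sqrt(2*pi)) * exp(-(ln(x)-mu)**2 / (2*sigma**2))
# lognormal_stats(mu, sigma): (mean, variance) = (exp(mu + sigma**2/2), (exp(sigma**2)-1)*exp(2*mu + sigma**2))
# truncated_normal(x, ...): a normal pdf renormalised to the observed (non-negative) range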
from scipy.stats import truncnorm
plt.rcParams['text.usetex'] = True
plt.rcParams['text.latex.unicode'] = True
plt.rcParams.update({'font.size': 24})
nbins=100
fig = plt.figure()
#fig=plt.figure(figsize=(2,1))
#fig.set_size_inches(6.6,3.3)
gs = gridspec.GridSpec(2, 1)
#plt.subplots_adjust(left=0.1,right=1.0,bottom=0.17,top=0.9)
#plt.suptitle('Text length (words count)')
fig.text(0.04,0.5,'Distribution',va='center',rotation='vertical')
#X ticks
xmax=1000
x=np.arange(0,xmax,100) #xtics
xx=np.arange(1,xmax,1)
# Panel 1
ax1=plt.subplot(gs[0])
ax1.set_xlim([0, xmax])
ax1.set_xticks(x)
ax1.tick_params(labelbottom='off')
#plt.ylabel('')
#Class 0
X=df.loc[df['user id'].isin(non_experts_ids)]['token length'].values
n,bins,patches=plt.hist(X,nbins,normed=1,facecolor='cyan',align='mid')
popt,pcov = curve_fit(truncated_normal,bins[:nbins],n)
c0,=plt.plot(xx,truncated_normal(xx,*popt),color='blue',label='non expert')
plt.legend(handles=[c0],bbox_to_anchor=(0.45, 0.95), loc=2, borderaxespad=0.)
print(popt)
mu=X.mean()
var=X.var()
print("Class 0: Mean,variance: ({},{})".format(mu,var))
# Panel 2
ax2=plt.subplot(gs[1])
ax2.set_xlim([0, xmax])
ax2.set_xticks(x)
#ax2.set_yticks(np.arange(0,8,2))
#plt.ylabel('Normal distribution')
#Class 1
X=df.loc[df['user id'].isin(experts_ids)]['token length'].values
#(mu,sigma) = norm.fit(X)
n,bins,patches=plt.hist(X,nbins,normed=1,facecolor='orange',align='mid')
popt,pcov = curve_fit(lognormal,bins[:nbins],n)
#c1,=plt.plot(xx,mlab.normpdf(xx, mu, sigma),color='darkorange',label='layered')
c1,=plt.plot(xx,lognormal(xx,*popt),color='red',label='expert')
plt.legend(handles=[c1],bbox_to_anchor=(0.45, 0.95), loc=2, borderaxespad=0.)
print("Class 1: Mean,variance:",lognormal_stats(*popt))
#plt.xlabel('Volume ratio (theor./expt.)')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Only 10 out of 505 users are experts!
Step2: Length of text
|
15,405 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
file = "tsp.txt"
# file = "test2.txt"
data = open(file, 'r').readlines()
n = int(data[0])
graph = {}
for i,v in enumerate(data[1:]):
graph[i] = tuple(map(float, v.strip().split(" ")))
dist_val = np.zeros([n,n])
for i in range(n):
for k in range(n):
dist_val[i,k] = dist_val[k,i] = np.sqrt((graph[k][0]-graph[i][0])**2 + (graph[k][1]-graph[i][1])**2)
print (graph)
%matplotlib inline
import matplotlib.pyplot as plt
values = list(graph.values())
y = [values[i][0] for i in range(len(values))]
x = [values[i][1] for i in range(len(values))]
plt.scatter(y,x)
plt.show()
import collections
def to_key(a):
my_str = ""
for i in a:
my_str += str(int(i))
return my_str
def to_subset(v, n):
a = np.zeros(n)
a[v] = 1
return a
def create_all_subset(n):
A = collections.defaultdict(dict)
for m in range(1,n):
for a in (itertools.combinations(range(n), m)):
key = a + tuple([0 for i in range(n-m)])
print (a, tuple([0 for i in range(n-m)]), key, m, n)
for j in range(n):
A[to_key(key)][j] = np.inf
A[to_key(to_subset(0,n))][0] = 0
return A
# res= to_subset([2,3],5)
# print (res)
# print (to_key(res))
# A = create_all_subset(3)
# print (A)
# print (index_to_set(10,'25'))
# print(set_to_index([1,3]))
import itertools
def powerset(iterable):
"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
s = list(iterable)
return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(1,len(s)+1))
def index_to_set(index, n='8'):
fmt = '{0:0'+n+'b}'
res = fmt.format(index)
mylist = list(res)
mylist.reverse()
print (res)
mylist = np.asarray(mylist, dtype=int)
ret = np.where(mylist==1)
# ret = []
# for i, j in enumerate(mylist):
# if j=="1":
# ret.append(i)
return list(ret[0])
def set_to_index(my_set):
# i = [1, 5, 7]
ret = 0
for i in my_set:
ret += 2**i
return ret
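# Bitmask encoding of a subset of cities: city i corresponds to bit i.
# e.g. set_to_index([1, 3]) == 2**1 + 2**3 == 10, and index_to_set(10, '5') recovers [1, 3].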
print ("~~ Test")
# print (set_to_index([1]))
# print (index_to_set(set_to_index([1])))
ex_all_sets = powerset(range(5))
for s in ex_all_sets:
print ("~~ Original set:", s)
print ("index:", set_to_index(s))
print ("recovered set:", index_to_set(set_to_index(s),'5'))
A = np.full([2**n, n], np.inf)
A[set_to_index([0]),0]=0
for i in range(0, n):
A[set_to_index([i]),i] = dist_val[i,0]
print (set_to_index([i]), dist_val[i,0])
from tqdm import tqdm
def _dist(k, j):
return np.sqrt((graph[k][0]-graph[j][0])**2 + (graph[k][1]-graph[j][1])**2)
FULL = range(n)
for m in range(1,n):
# all_sets = powerset(range(1,m))
all_sets = itertools.combinations(FULL, m+1)
print ("Subset Size:",m)
for _set in all_sets:
if not _set:
continue
_set = list(_set)
# print ("Len Set", len(_set))
set2_idx = set_to_index(_set)
for j in _set:
_set2 = _set.copy()
_set2.remove(j)
if j==0 or not _set2:
continue
# print ("_set2", _set2)
_set2_idx = set_to_index(_set2)
# print ("handle Set", _set2, "idx",_set2_idx, "j:", j)
minval = np.inf
for k in _set2:
# print ("idxSet:", _set2_idx, "k:", k, "dist", A[_set2_idx,k])
val = A[_set2_idx,k] + dist_val[k,j]
if val < minval:
minval = val
# print ("minval",minval)
A[set2_idx,j] = minval
# print (A)
my_set = [i for i in range(n)]
print ("Full Set", my_set, set_to_index(my_set))
minval = np.inf
for j in range(1,n):
val = A[set_to_index(my_set),j] + dist_val[j,0]
if val < minval:
minval = val
print ("minval", minval)
# print (A[set_to_index(my_set),:])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Draw points
Step2: Initialize the 2-D Array
Step3: Run the Dynamic Programming algorithm
|
15,406 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from IPython.display import Image
import matplotlib.pyplot as plt
# Import the random forest package
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
filename ="CrowdstormingDataJuly1st.csv"
Data = pd.read_csv(filename)
Data.ix[:10,:13]
Data.ix[:10,13:28]
# Remove the players without rater 1 / 2 (ie: without photo) because we won't be
# able to train or test the values (this can be done as bonus later)
Data_hasImage = Data[pd.notnull(Data['photoID'])]
# Group by player and do the sum of every column, except for mean_rater (skin color) that we need to move away during the calculation (we don't want to sum skin color values !)
Data_aggregated = Data_hasImage.drop(['refNum', 'refCountry'], 1)
Data_aggregated = Data_aggregated.groupby(['playerShort', 'position'])['games','yellowCards', 'yellowReds', 'redCards'].sum()
Data_aggregated = Data_aggregated.reset_index()
# Take information of skin color for each player
Data_nbGames_skinColor = Data_hasImage
Data_nbGames_skinColor = Data_nbGames_skinColor.drop_duplicates('playerShort')
Data_nbGames_skinColor['skinColor']=(Data_nbGames_skinColor['rater1']+Data_hasImage['rater2'])/2
Data_nbGames_skinColor = pd.DataFrame(Data_nbGames_skinColor[['playerShort','skinColor']])
Data_aggregated = pd.merge(left=Data_aggregated,right=Data_nbGames_skinColor, how='left', left_on='playerShort', right_on='playerShort')
Data_aggregated = Data_aggregated.drop_duplicates('playerShort')
Data_aggregated = Data_aggregated.reset_index(drop=True)
Data_aggregated
# Input
x = Data_aggregated
x = x.drop(['playerShort'], 1)
# We have to convert every columns to floats, to be able to train our model
mapping = {'Center Back': 1, 'Attacking Midfielder': 2, 'Right Midfielder': 3, 'Center Midfielder': 4, 'Defensive Midfielder': 5, 'Goalkeeper':6, 'Left Fullback':7, 'Left Midfielder':8, 'Right Fullback':9, 'Center Forward':10, 'Left Winger':11, 'Right Winger':12}
x = x.replace({'position': mapping})
x
# Output with the same length as the input, that will contains the associated cluster
y = pd.DataFrame(index=x.index, columns=['targetCluster'])
y.head()
# K Means Cluster
model = KMeans(n_clusters=2)
model = model.fit(x)
model
# We got a model with two clusters
model.labels_
# View the results
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap for the two clusters
colormap = np.array(['blue', 'lime'])
# Plot the Model Classification PARTIALLY
plt.scatter((0.5*x.yellowCards + x.yellowReds + x.redCards)/x.games, x.skinColor, c=colormap[model.labels_], s=40)
plt.xlabel('Red cards per game (yellow = half a red card)')
plt.ylabel('Skin color')
plt.title('K Mean Classification')
plt.show()
cluster = pd.DataFrame(pd.Series(model.labels_, name='cluster'))
Data_Clustered = Data_aggregated
Data_Clustered['cluster'] = cluster
Data_Clustered
score = silhouette_score(x, model.labels_)
score
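# silhouette_score ranges from -1 to +1: values near +1 mean samples sit well inside their own
# cluster, values near 0 mean the clusters overlap, and negative values suggest misassigned samples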
x_noSkinColor = x.drop(['skinColor'], 1)
model = KMeans(n_clusters=2)
model = model.fit(x_noSkinColor)
score_noSkinColor = silhouette_score(x_noSkinColor, model.labels_)
score_noSkinColor
score_noSkinColor / score
x_noPosition = x.drop(['position'], 1)
model = KMeans(n_clusters=2)
model = model.fit(x_noPosition)
score_noPosition= silhouette_score(x_noPosition, model.labels_)
score_noPosition
score_noPosition / score
x_noGameNumber = x.drop(['games'], 1)
model = KMeans(n_clusters=2)
model = model.fit(x_noGameNumber)
score_noGameNumber = silhouette_score(x_noGameNumber, model.labels_)
score_noGameNumber
score_noGameNumber / score
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) Peeking into the Data
Step2: II. Preparing data
Step3: 2) Getting rid of referees and grouping data by soccer player
Step4: III. Unsupervised machine learning
Step5: (We show only skin color and number of "red cards" because it's a 2D plot, but we actually used 5 parameters
Step6: So, do we have any new information? What can we conclude from this?
Step7: We got a silhouette score of 58%, which is honestly not enough to predict precisely the skin color of new players. A value closer to +1 would have indicated with higher confidence a difference between the clusters. 60% is enough to distinguish the two clusters but, still, we cannot rely on this model.
Step8: Seems like removing skin color from the input didn't change anything for the clustering performance!
Step9: Player position doesn't have much impact either. We can try to remove the number of games, but it won't make sense
|
15,407 | <ASSISTANT_TASK:>
Python Code:
# Import directives
#%pylab notebook
%pylab inline
pylab.rcParams['figure.figsize'] = (6, 6)
#import warnings
#warnings.filterwarnings('ignore')
import math
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from ipywidgets import interact
def plot2d(x, fmt="ok"):
plt.axis('equal')
plt.axis([-5, 5, -5, 5])
plt.xticks(np.arange(-5, 5, 1))
plt.yticks(np.arange(-5, 5, 1))
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(x[:,0], x[:,1], fmt)
plt.grid()
# Define initial points
A = np.array([[0., 0.],
[1., 0.],
[1., 1.],
[0., 0.]])
# Define the rotation angle
theta = np.radians(30)
# Define the rotation matrix
R = np.array([[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)]])
# Rotate points
Aprime = np.dot(R, A.T).T
# Print and plot
print(A)
print(Aprime)
plot2d(A, fmt="-ok")
plot2d(Aprime, fmt="-or")
def plot3d(x, axis=None, fmt="ok"):
if axis is None:
fig = plt.figure()
axis = axes3d.Axes3D(fig)
axis.scatter(x[:,0], x[:,1], x[:,2], fmt)
axis.plot(x[:,0], x[:,1], x[:,2], fmt)
# Define initial points
A = np.array([[0., 0., 0.],
[1., 0., 0.],
[0., 0., 0.],
[0., 1., 0.],
[0., 0., 0.],
[0., 0., 1.]])
# Define the rotation angle
theta = np.radians(90)
# Define the rotation matrices
Rx = np.array([[1., 0., 0.],
[0., np.cos(theta), -np.sin(theta)],
[0., np.sin(theta), np.cos(theta)]])
Ry = np.array([[np.cos(theta), 0., np.sin(theta)],
[0., 1., 0. ],
[-np.sin(theta), 0., np.cos(theta)]])
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
[np.sin(theta), np.cos(theta), 0.],
[0., 0., 1.]])
# Rotate points
Ax = np.dot(Rx, A.T).T
Ay = np.dot(Ry, A.T).T
Az = np.dot(Rz, A.T).T
# Plot
fig = plt.figure()
ax = axes3d.Axes3D(fig)
plot3d(A, axis=ax, fmt="-ok")
plot3d(Ax, axis=ax, fmt=":or")
plot3d(Ay, axis=ax, fmt=":og")
plot3d(Az, axis=ax, fmt=":ob")
ax.text(1, 0, 0, "x", color="r")
ax.text(0, 1, 0, "y", color="g")
ax.text(0, 0, 1, "z", color="b")
# Define initial points
A = np.array([[0., 0., 0.],
[1., 0., 0.],
[0., 0., 0.],
[0., 1., 0.],
[0., 0., 0.],
[0., 0., 1.]])
# Define the rotation angle
theta = np.radians(10)
u = np.array([1., 1., 0.])
ux, uy, uz = u[0], u[1], u[2]
c = np.cos(theta)
s = np.sin(theta)
# Define the rotation matrices
R = np.array([[ux**2 * (1-c) + c, ux*uy * (1-c) - uz*s, ux*uz * (1-c) + uy*s],
[ux*uy * (1-c) + uz*s, ux**2 * (1-c) + c, uy*uz * (1-c) - ux*s],
[ux*uz * (1-c) - uy*s, uy*uz * (1-c) + ux*s, uz**2 * (1-c) + c]])
# Rotate points
Ar = np.dot(R, A.T).T
# Plot
fig = plt.figure()
ax = axes3d.Axes3D(fig)
plot3d(A, axis=ax, fmt="-ok")
plot3d(np.array([np.zeros(3), u]), axis=ax, fmt="--ok")
plot3d(Ar, axis=ax, fmt=":or")
ax.text(1, 0, 0, "x", color="k")
ax.text(0, 1, 0, "y", color="k")
ax.text(0, 0, 1, "z", color="k")
@interact(a=(-5., 5., 0.1), b=(-5., 5., 0.1), c=(-5., 5., 0.1))
def plot(a, b, c):
plt.axis('equal')
plt.axis([-5, 5, -5, 5])
plt.xticks(np.arange(-5,5,1))
plt.yticks(np.arange(-5,5,1))
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
x = np.array([-10., 10.])
f = lambda x: a/(-b) * x + c/(-b)
try:
plt.plot(x, f(x))
except ZeroDivisionError:
print("b should not be equal to 0")
plt.grid()
# Setup the plot
def plot(a, b, c, p, p2):
plt.axis('equal')
plt.axis([-5, 5, -5, 5])
plt.xticks(np.arange(-5,5,1))
plt.yticks(np.arange(-5,5,1))
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
x = np.array([-10., 10.])
f = lambda x: a/(-b) * x + c/(-b)
plt.plot(x, f(x))
plt.scatter(*p)
# Plot the projection point
plt.scatter(*p2)
plt.plot((p2[0], p[0]), (p2[1], p[1]))
#plt.arrow(*p2, *p) # TODO: doesn't work...
plt.grid()
# Define the distance and projection functions
def distance(a, b, c, p):
d1 = (a*p[0] + b*p[1] + c)
d2 = math.sqrt(math.pow(a, 2.) + math.pow(b, 2.))
d = abs(d1)/d2
return d
def projection(a, b, c, p):
p2 = ((b*(b*p[0] - a*p[1]) - a*c)/(math.pow(a,2.)+math.pow(b,2.)),
(a*(-b*p[0] + a*p[1]) - b*c)/(math.pow(a,2.)+math.pow(b,2.)))
return p2
# Define the line and the point
a = 2.
b = 1.
c = -2.
p = (-4., 2.)
# Compute the distance and the projection point on the line
d = distance(a, b, c, p)
p2 = projection(a, b, c, p)
print("Distance:", d)
print("Projection point:", p2)
# Plot the line and the point
plot(a, b, c, p, p2)
# TODO...
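# A minimal sketch for the section left as TODO above ("line defined by two points");
# this is my assumption of the intended helper, not the author's implementation.
# The line a*x + b*y + c = 0 through p1 = (x1, y1) and p2 = (x2, y2) has
# a = y2 - y1, b = x1 - x2, c = x2*y1 - x1*y2.
def line_from_two_points(p1, p2):
    a = p2[1] - p1[1]
    b = p1[0] - p2[0]
    c = p2[0] * p1[1] - p1[0] * p2[1]
    return a, b, c

# Example (hypothetical points), reusing the distance/projection/plot helpers defined above
a2, b2, c2 = line_from_two_points((0., -2.), (1., 0.))
plot(a2, b2, c2, p, projection(a2, b2, c2, p))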
def angle_point_to_equation(angle_degree, p):
angle_radian = math.radians(angle_degree)
a = math.tan(angle_radian)
b = -1
c = -math.tan(angle_radian) * p[0] + p[1]
return a, b, c
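# Derivation check: a line through p with slope tan(theta) is y - p[1] = tan(theta) * (x - p[0]);
# rearranged to a*x + b*y + c = 0 this gives a = tan(theta), b = -1, c = p[1] - tan(theta)*p[0],
# which matches the values returned above.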
angle_degree = 30
p0 = (3, 2)
a, b, c = angle_point_to_equation(angle_degree, p0)
p = (-4., 2.)
# Compute the distance and the projection point on the line
d = distance(a, b, c, p)
p2 = projection(a, b, c, p)
print("Distance:", d)
print("Projection point:", p2)
# Plot the line and the point
plot(a, b, c, p, p2)
plt.scatter(*p0)
# Define initial points to project
a = np.array([0., 1., 2.])
# Define camera's position
c = np.array([0., 0., 0.])
# Define viewer's position
e = np.array([0., 0., -1.])
# Define the orientation of the camera
theta = np.array([np.radians(0),
np.radians(0),
np.radians(0)])
theta_x, theta_y, theta_z = theta[0], theta[1], theta[2]
# Define the rotation matrices
Rx = np.array([[1., 0., 0.],
[0., np.cos(theta_x), np.sin(theta_x)],
[0., -np.sin(theta_x), np.cos(theta_x)]])
Ry = np.array([[np.cos(theta_y), 0., -np.sin(theta_y)],
[0., 1., 0. ],
[np.sin(theta_y), 0., np.cos(theta_y)]])
Rz = np.array([[np.cos(theta_z), np.sin(theta_z), 0.],
[-np.sin(theta_z), np.cos(theta_z), 0.],
[0., 0., 1.]])
d = np.dot(Rx, Ry)
d = np.dot(d, Rz)
d = np.dot(d, a-c)
## TODO: which version is correct ? The one above or the one below ?
#d = a - c
#d = np.dot(Rz, d)
#d = np.dot(Ry, d)
#d = np.dot(Rx, d)
print("d:", d)
b = np.array([e[2]/d[2] * d[0] - e[0],
e[2]/d[2] * d[1] - e[1]])
print("b:", b)
# Alternative to compute b
Rf = np.array([[1., 0., -e[0]/e[2], 0.],
[0., 1., -e[1]/e[2], 0.],
[0., 0., 1., 0.],
[0., 0., 1./e[2], 0.]])
f = np.dot(Rf, np.concatenate([d, np.ones(1)]))
b = np.array([f[0]/f[3],
f[1]/f[3]])
print("b:", b)
plot2d(np.array([b, b]), "ok")
@interact(theta_x=(-90., 90., 1.), theta_y=(-90., 90., 1.), theta_z=(-90., 90., 1.))
def projection(theta_x, theta_y, theta_z):
# Define initial points to project
A = np.array([[-1., 0., 1.],
[ 1., 0., 1.],
[-1., 0., 2.],
[ 1., 0., 2.],
[-1., 0., 5.],
[ 1., 0., 5.],
[-1., 0., 15.],
[ 1., 0., 15.]])
# Define camera's position
c = np.array([0., -2., 0.])
C = np.tile(c, (A.shape[0], 1))
# Define viewer's position
e = np.array([0., 0., -1.])
# Define the orientation of the camera
theta = np.radians(np.array([theta_x, theta_y, theta_z]))
theta_x, theta_y, theta_z = theta[0], theta[1], theta[2]
# Define the rotation matrices
Rx = np.array([[1., 0., 0.],
[0., np.cos(theta_x), np.sin(theta_x)],
[0., -np.sin(theta_x), np.cos(theta_x)]])
Ry = np.array([[np.cos(theta_y), 0., -np.sin(theta_y)],
[0., 1., 0. ],
[np.sin(theta_y), 0., np.cos(theta_y)]])
Rz = np.array([[np.cos(theta_z), np.sin(theta_z), 0.],
[-np.sin(theta_z), np.cos(theta_z), 0.],
[0., 0., 1.]])
d = np.dot(Rx, Ry)
d = np.dot(d, Rz)
d = np.dot(d, (A-C).T)
## TODO: which version is correct ? The one above or the one below ?
#d = a - c
#d = np.dot(Rz, d)
#d = np.dot(Ry, d)
#d = np.dot(Rx, d)
print("d:", d)
b = np.array([e[2]/d[2] * d[0] - e[0],
e[2]/d[2] * d[1] - e[1]])
print("b:", b)
# Alternative to compute b
Rf = np.array([[1., 0., -e[0]/e[2], 0.],
[0., 1., -e[1]/e[2], 0.],
[0., 0., 1., 0.],
[0., 0., 1./e[2], 0.]])
# Add a line of ones
d = np.vstack([d, np.ones(d.shape[1])])
f = np.dot(Rf, d)
b = np.array([f[0]/f[3],
f[1]/f[3]])
print("b:", b)
plot2d(b.T, "ok")
plot2d(b.T, "-k")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D transformations
Step2: Rotation around the origin
Step3: 3D transformations
Step4: Rotation around the x axis
Step5: Rotation around a given axis
Step6: Projections
Step7: Distance from a point to a line
Step8: Line defined by two points
Step9: Line defined by a point and an angle
Step10: Project 3D points on a plane without perspective
Step11: Multiple points version
|
15,408 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import bigbang.mailman as mailman
import bigbang.graph as graph
import bigbang.process as process
from bigbang.parse import get_date
reload(process)
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import numpy as np
import math
import pytz
import pickle
import os
pd.options.display.mpl_style = 'default' # pandas has a set of preferred graph formatting options
urls = ["http://www.ietf.org/mail-archive/text/ietf-privacy/",
"http://lists.w3.org/Archives/Public/public-privacy/"]
mlists = [mailman.open_list_archives(url,"../archives") for url in urls]
activities = [process.activity(ml) for ml in mlists]
a = activities[1] # for the first mailing list
ta = a.sum(0) # sum along the first axis
ta.sort()
ta[-10:].plot(kind='barh', width=1)
levdf = process.sorted_lev(a) # creates a slightly more nuanced edit distance matrix
# and sorts by rows/columns that have the best candidates
levdf_corner = levdf.iloc[:25,:25] # just take the top 25
fig = plt.figure(figsize=(15, 12))
plt.pcolor(levdf_corner)
plt.yticks(np.arange(0.5, len(levdf_corner.index), 1), levdf_corner.index)
plt.xticks(np.arange(0.5, len(levdf_corner.columns), 1), levdf_corner.columns, rotation='vertical')
plt.colorbar()
plt.show()
consolidates = []
# gather pairs of names which have a distance of less than 10
for col in levdf.columns:
for index, value in levdf.loc[levdf[col] < 10, col].iteritems():
if index != col: # the name shouldn't be a pair for itself
consolidates.append((col, index))
print str(len(consolidates)) + ' candidates for consolidation.'
c = process.consolidate_senders_activity(a, consolidates)
print 'We removed: ' + str(len(a.columns) - len(c.columns)) + ' columns.'
lev_c = process.sorted_lev(c)
levc_corner = lev_c.iloc[:25,:25]
fig = plt.figure(figsize=(15, 12))
plt.pcolor(levc_corner)
plt.yticks(np.arange(0.5, len(levc_corner.index), 1), levc_corner.index)
plt.xticks(np.arange(0.5, len(levc_corner.columns), 1), levc_corner.columns, rotation='vertical')
plt.colorbar()
plt.show()
fig, axes = plt.subplots(nrows=2, figsize=(15, 12))
ta = a.sum(0) # sum along the first axis
ta.sort()
ta[-20:].plot(kind='barh',ax=axes[0], width=1, title='Before consolidation')
tc = c.sum(0)
tc.sort()
tc[-20:].plot(kind='barh',ax=axes[1], width=1, title='After consolidation')
plt.show()
reload(process)
grouped = tc.groupby(process.domain_name_from_email)
domain_groups = grouped.size()
domain_groups.sort(ascending=True)
domain_groups[-20:].plot(kind='barh', width=1, title="Number of participants at domain")
domain_messages_sum = grouped.sum()
domain_messages_sum.sort(ascending=True)
domain_messages_sum[-20:].plot(kind='barh', width=1, title="Number of messages from domain")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the BigBang modules as needed. These should be in your Python environment if you've installed BigBang correctly.
Step2: Also, let's import a number of other dependencies we'll use later.
Step3: Now let's load the data for analysis.
Step4: This variable is for the range of days used in computing rolling averages.
Step5: This might be useful for seeing the distribution (does the top message sender dominate?) or for identifying key participants to talk to.
Step6: For this still naive measure (edit distance on a normalized string), it appears that there are many duplicates in the <10 range, but that above that the edit distance of short email addresses at common domain names can take over.
Step7: We can create the same color plot with the consolidated dataframe to see how the distribution has changed.
Step8: Of course, there are still some duplicates, mostly people who are using the same name, but with a different email address at an unrelated domain name.
Step9: Okay, not dramatically different, but the consolidation makes the head heavier. There are more people close to the high end: a stronger core group, rather than a distribution that tails off smoothly from one or two dominant senders.
Step10: Pandas lets us group by the results of a keying function, which we can use to group participants sending from email addresses with the same domain.
Step11: We can also aggregate the number of messages that come from addresses at each domain.
|
15,409 | <ASSISTANT_TASK:>
Python Code:
from pprint import pprint
%%HTML
<p style="color:red;font-size: 150%;">Classes are more than that in Python. Classes are objects too.</p>
%%HTML
<p style="color:red;font-size: 150%;">Yes, objects.</p>
%%HTML
<p style="color:red;font-size: 150%;">As soon as you use the keyword class, Python executes it and creates an OBJECT. The instruction</p>
class ObjectCreator(object):
pass
%%HTML
<p style="color:red;font-size: 150%;">This object (the class) is itself capable of creating objects (the instances), and this is why it's a class.</p>
object_creator_class = ObjectCreator
print(object_creator_class)
from copy import copy
ObjectCreatorCopy = copy(ObjectCreator)
print(ObjectCreatorCopy)
print("copy ObjectCreatorCopy is not ObjectCreator: ", ObjectCreatorCopy is not ObjectCreator)
print("variable object_creator_class is ObjectCreator: ", object_creator_class is ObjectCreator)
print("ObjectCreator has an attribute 'new_attribute': ", hasattr(ObjectCreator, 'new_attribute'))
ObjectCreator.new_attribute = 'foo' # you can add attributes to a class
print("ObjectCreator has an attribute 'new_attribute': ", hasattr(ObjectCreator, 'new_attribute'))
print("attribute 'new_attribute': ", ObjectCreator.new_attribute)
def echo(o):
print(o)
# you can pass a class as a parameter
print("return value of passing Object Creator to {}: ".format(echo), echo(ObjectCreator))
%%HTML
<p style="color:red;font-size: 150%;">Since classes are objects, you can create them on the fly, like any object.</p>
def get_class_by(name):
class Foo:
pass
class Bar:
pass
classes = {
'foo': Foo,
'bar': Bar
}
return classes.get(name, None)
for class_ in (get_class_by(name) for name in ('foo', 'bar', )):
pprint(class_)
print(type(1))
print(type("1"))
print(type(int))
print(type(ObjectCreator))
print(type(type))
classes = Foo, Bar = [type(name, (), {}) for name in ('Foo', 'Bar')]
for class_ in classes:
pprint(class_)
classes_with_attributes = Foo, Bar = [type(name, (), namespace)
for name, namespace
in zip(
('Foo', 'Bar'),
(
{'assigned_attr': 'foo_attr'},
{'assigned_attr': 'bar_attr'}
)
)
]
for class_ in classes_with_attributes:
pprint([item for item in vars(class_).items()])
def an_added_function(self):
return "I am an added function."
Foo.added = an_added_function
foo = Foo()
print(foo.added())
%%HTML
<p style="color:red;font-size: 150%;">[Creating a class on the fly, dynamically] is what Python does when you use the keyword class, and it does so by using a metaclass.</p>
%%HTML
<p style="color:red;font-size: 150%;">Metaclasses are the 'stuff' that creates classes.</p>
%%HTML
<p style="color:red;font-size: 150%;">Well, metaclasses are what create these objects. They are the classes' classes.</p>
%%HTML
<p style="color:red;font-size: 150%;">Everything, and I mean everything, is an object in Python. That includes ints, strings, functions and classes. All of them are objects. And all of them have been created from a class (which is also an object).</p>
class MyType(type):
pass
class MySpecialClass(metaclass=MyType):
pass
msp = MySpecialClass()
type(msp)
type(MySpecialClass)
type(MyType)
%%HTML
<p style="color:red;font-size: 150%;">"Build a class"? This is a task for metaclasses. The following implementation comes from Python 3 Patterns, Recipes and Idioms.</p>
class Singleton(type):
instance = None
def __call__(cls, *args, **kwargs):
if not cls.instance:
cls.instance = super(Singleton, cls).__call__(*args, **kwargs)
return cls.instance
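# type.__call__ is what runs when you write ASingleton(); it normally invokes __new__ and then
# __init__ on the class. Overriding it on the metaclass lets us return the cached instance
# instead of constructing a new object on every call.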
class ASingleton(metaclass=Singleton):
pass
a = ASingleton()
b = ASingleton()
print(a is b)
print(hex(id(a)))
print(hex(id(b)))
%%HTML
<p style="color:red;font-size: 150%;">The tasks of the two methods are very clear and distinct: __new__() shall perform actions needed when creating a new instance while __init__ deals with object initialization.</p>
class MyClass:
def __new__(cls, *args, **kwargs):
obj = super().__new__(cls, *args, **kwargs)
# do something here
obj.one = 1
return obj # instance of the container class, so __init__ is called
%%HTML
<p style="color:red;font-size: 150%;"> Anyway, __init__() will be called only if you return an instance of the container class. </p>
my_class = MyClass()
my_class.one
class MyInt:
def __new__(cls, *args, **kwargs):
obj = super().__new__(cls, *args, **kwargs)
obj.join = ':'.join
return obj
mi = MyInt()
print(mi.join(str(n) for n in range(10)))
class MyBool(int):
def __repr__(self):
return 'MyBool.' + ['False', 'True'][self]
t = MyBool(1)
t
bool(2) == 1
MyBool(2) == 1
%%HTML
<p style="color:red;font-size: 150%;">In many classes we use __init__ to mutate the newly constructed object, typically by storing or otherwise using the arguments to __init__. But we can’t do this with a subclass of int (or any other immuatable) because they are immutable.</p>
bool.__doc__
class NewBool(int):
def __new__(cls, value):
# bool
return int.__new__(cls, bool(value))
y = NewBool(56)
y == 1
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: creates in memory an object with the name "ObjectCreator".
Step2: But still, it's an object, and therefore
Step3: you can copy it
Step4: you can add attributes to it
Step5: you can pass it as a function parameter
Step6: But it's not so dynamic, since you still have to write the whole class yourself.
Step7: Well, type has a completely different ability, it can also create classes on the fly. type can take the description of a class as parameters, and return a class.
Step8: type accepts a dictionary to define the attributes of the class. So
Step9: Eventually you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute.
Step10: You see where we are going
Step11: You define classes in order to create objects, right?
Step12: Changing to blog post entitled Python 3 OOP Part 5—Metaclasses
Step13: Metaclasses are a very advanced topic in Python, but they have many practical uses. For example, by means of a custom metaclass you may log any time a class is instanced, which can be important for applications that shall keep a low memory usage or have to monitor it.
Step14: The constructor mechanism in Python is on the contrary very important, and it is implemented by two methods, instead of just one
Step15: Subclassing int
Step16: The solution to the problem is to use __new__. Here we will show that it works, and later we will explain elsewhere exactly what happens.
|
15,410 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
15,411 | <ASSISTANT_TASK:>
Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for pandas
import requests
from bs4 import BeautifulSoup
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
plt.style.use('ggplot')
url = 'table1_2015.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2015 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E", headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2015 = data_2015[12:27]
#columns = ['Incidents','Offenses', 'Victims','Known Offenders']
#original_2015 = religion_2015.copy(deep=True)
#for col in columns:
# new_val = religion_2015.iloc[5][col] + religion_2015[col][7:14].sum()
#print(new_val)
#religion_2015 = religion_2015.set_value('Anti-Other Religion',col,new_val)
#religion_2015.ix['Anti-Other Religion',col] = new_val
religion_2015
url = 'table1_2014.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2014 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2014 = data_2014[9:17]
url = 'table1_2013.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2013 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2013 = data_2013[9:17]
url = 'table1_2012.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2012 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2012 = data_2012[8:16]
url = 'table1_2011.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2011 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2011 = data_2011[8:16]
url = 'table1_2010.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2010 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2010 = data_2010[7:15]
url = 'table1_2008.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2008 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2008 = data_2008[7:15]
url = 'table1_2007.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2007 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2007 = data_2007[7:15]
url = 'table1_2006.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2006 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2006 = data_2006[7:15]
url = 'table1_2005.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2005 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2005 = data_2005[7:15]
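# Optional sketch (an assumption, not part of the original notebook): the repeated
# read_excel cells above could be wrapped in a helper mirroring the same call; the
# rows kept per year differ, so slices such as data[7:15] still have to be chosen
# per file. The function name is illustrative.
def load_fbi_table(filename, skiprows=3, skipfooter=3):
    """Read one yearly FBI hate-crime table with the column names used above."""
    return pd.read_excel(filename, skiprows=skiprows, skipfooter=skipfooter,
                         parse_cols="A,B,C,D,E",
                         names=["Motivation", "Incidents", "Offenses",
                                "Victims", "Known Offenders"])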
target = 'source_2004.txt'
target = open(target, "w")
url = "https://www2.fbi.gov/ucr/hc2004/hctable1.htm"
data_2004 = requests.get(url)
data_2004_soup = BeautifulSoup(data_2004.content, 'html.parser')
data_2004_soup
religion_part = data_2004_soup.find_all('tr')
for row_number in range(9,17):
row = religion_part[row_number]
tmp_string = ''
table_header = row.find('th')
table_values = row.find_all('td')
tmp_string += table_header.text + ' '
for tb in table_values:
tmp_string += tb.text + ' '
tmp_string = tmp_string[:-1].replace('\n','') + '\n'
target.write(tmp_string)
target.close()
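# Optional sketch (an assumption, not used by the original notebook): pandas.read_html
# can parse the tables on the FBI page directly (it requires lxml or html5lib), which
# would avoid writing the intermediate source_2004.txt file. The function name is
# illustrative.
def fetch_2004_tables(url="https://www2.fbi.gov/ucr/hc2004/hctable1.htm"):
    """Return the list of tables found on the 2004 hate-crime statistics page."""
    return pd.read_html(url)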
# Global Variables
all_years = [] # list of all the DataFrames.
sourcenames = ["source_"+str(year)+".txt" for year in range(1996,2005)] # list of source files names for 1996-2003, to be converted to .csv
targetnames = ["table1_"+str(year)+".csv" for year in range(1996,2005)] # List of name of all .csv files, to be imported in DataFrames
datanames = ["religion_"+str(year) for year in range(1996,2005)] # List of name of all dataframes, to be created e.g religion_1998,religion1999
'''
Steps for cleaning and converting the files to .csv format,
and loading them into pandas DataFrames, using year 2003 as an example:
'''
# Loop through the years 1996 to 2004 and repeat the same steps.
for i in range(9):
source = sourcenames[i]
target = targetnames[i]
try:
#Open the source file e.g source_2003
source = open(source,"r",)
except:
print("Could not open the source file")
else:
# Open the target file e.g table1_2003.csv
target = open(target, "w")
lines = source.readlines();
rows = len(lines)
cols = 5
# Loop through each line in the source file:
for line in lines:
# Remove the endline character i.e '\n'
line = line.replace('\n','')
# Remove all the commas ',' from the line.
line = line.replace(",","")
# Split the line into an array, using empty space as split character
line_elements= line.split(' ')
            # Check if the number of array elements is greater than 5. If so, array[:-4] are part of the index in the table: join these elements into one element.
if len(line_elements) > 5:
                # join the resulting elements into a string using ',' as join character, and ending the string with newline character '\n'.
new_line = " ".join(line_elements[:-4]) + ',' + ','.join(line_elements[-4:]) + '\n'
else:
                # join the resulting elements into a string using ',' as join character, and ending the string with newline character '\n'.
new_line = ','.join(line_elements) + '\n'
            # write the resulting string to the target file.
target.write(new_line)
# Close the target and source files.
source.close()
target.close()
url = targetnames[i]
        # Use the pandas.read_csv(filename) method to read the .csv file into a DataFrame. Set DataFrame headers to ["Motivation","Incidents","Offenses","Victims","Known Offenders"]. Name the returned DataFrame as religion_2003.
exec('%s = pd.read_csv(url, engine = "python", names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])' % (datanames[i]))
# Save religion_2003 to all_years array of DataFrames.
exec('all_years.append(%s)' % (datanames[i]))
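# Optional sketch (an assumption, not part of the original notebook): the same loading
# step without exec(), collecting the per-year DataFrames in a dict keyed by year.
# The helper is only defined here, so the original flow above is untouched.
def load_csv_frames(years=range(1996, 2005), filenames=targetnames):
    """Return {year: DataFrame} for the converted .csv files."""
    cols = ["Motivation", "Incidents", "Offenses", "Victims", "Known Offenders"]
    return {year: pd.read_csv(name, engine="python", names=cols)
            for year, name in zip(years, filenames)}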
# adding DataFrames for years 2005-2014 (excluding 2009) into the all_years list
all_years.extend([religion_2005,religion_2006,religion_2007,religion_2008,religion_2010,religion_2011,religion_2012,religion_2013,religion_2014])
print('Variable dtypes:\n', religion_2000.dtypes, sep='')
religion_1996
rel = religion_1996['Motivation']
rel
religion_2003
# Variables and Description
# List of Indices (Motivation) in a DataFrame for a particular year
header_rows = ['All Religion','Anti-Jewish','Anti-Catholic','Anti-Protestants','Anti-Islamic','Anti-Other Religion','Anti-Multiple Religion,Group','Anti-Atheism/Agnosticism/etc.']
# List of headers in a DataFrame for a particular year
columns = ['Incidents','Offenses', 'Victims','Known Offenders']
# List of headers for the new DataFrame
all_years_headers = []
# List of lists of all values in the DataFrames for all years
all_years_list=[]
# List of the new indices, representing all reported years, for the new DataFrame.
all_years_keys = []
'''
Following Steps Are Taken for Combining the Data:
'''
'''
Combine 8 Motivations with the different data values' headers:
* Use the 8 motivations : ['All Religion','Anti-Jewish','Anti-Catholic','Anti-Protestants','Anti-Islamic','
Anti-Other Religion','Anti-Multiple Religion,Group','Anti-Atheism/Agnosticism/etc.']
* Use the 4 Data Values headers = ['Incidents','Offenses', 'Victims','Known Offenders']
* Create 32 headers such that for each motivation, there are 4 different headers for the different data values.
* E.g for 'Anti-Jewish' motivation, the resulting headers will be Anti-Jewish: Incidents,Anti-Jewish: Offenses,
Anti-Jewish: Victims', and Anti-Jewish: Known Offenders.
* all_years_headers is the list of all the generated headers.
'''
for row in header_rows:
for col in columns:
header_val = row + ': ' + str(col)
all_years_headers.append(header_val)
'''
Generate a list called all_years_keys, which will correspond to the indices of the new DataFrame.
'''
for i in list(range(1996,2009)) + list(range(2010, 2015)):
all_years_keys.append(str(i))
count = 0
'''
Create the combined DataFrame:
'''
# Loop through all_years - the list of the DataFrames representing each year
for single_year in all_years:
tmp_list =[]
    # Within each DataFrame, loop through all rows:
for row in range(8):
current_row = single_year.iloc[row]
# Within each row, loop through all column values
for col in columns:
# add the column values into a temporary list
tmp_list.append(current_row[col])
    # Add the temporary list consisting of all the data values of the DataFrame into all_years_list.
all_years_list.append(tmp_list)
count+=1
'''
Create the DataFrame using all_years_list as data, all_years_keys as indices, all_years_headers as headers.
Name this DataFrame hc, representing hate crimes
'''
hc = pd.DataFrame(all_years_list, columns= all_years_headers, index = all_years_keys)
hc
anti_islam = hc['Anti-Islamic: Incidents']
anti_islam.plot(kind='line',
grid = True,
title = 'Anti-Islam Hate Crimes',
sharey = True,
sharex = True,
use_index = True,
legend = True,
fontsize = 10
)
print(anti_islam)
anti_islam_2011 = anti_islam[5]
anti_islam_2010 = anti_islam[4]
anti_islam_2012 = anti_islam[6]
percentage_change_2011 = (((anti_islam_2011 - anti_islam_2010)/anti_islam_2010)*100)
percentage_change_2012 = (((anti_islam_2012 - anti_islam_2011)/anti_islam_2011)*100)
print("Hate Crimes against Muslims growth in 2011 from 2010: ", percentage_change_2011, '%')
print("Hate Crimes against Muslims growth in 2010 from 2011: ", percentage_change_2012, '%')
anti_islam_before_2011 = anti_islam[:5].mean()
anti_islam_after_2011 = anti_islam[6:].mean()
print('Average hate crimes against Muslims before 2011: ', anti_islam_before_2011)
print('Average hate crimes against Muslims after 2011: ', anti_islam_after_2011)
avg = (((anti_islam_after_2011 - anti_islam_before_2011)/anti_islam_before_2011)*100)
print('Percentage increase in the average number of hate crimes against Muslims after 2011: ', avg)
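# Optional cross-check (an assumption, not part of the original analysis): because the
# index of `hc` holds year labels as strings, selecting by label with .loc together with
# pct_change() makes the years being compared explicit. Variable name is illustrative.
yoy_growth = hc['Anti-Islamic: Incidents'].pct_change() * 100   # year-over-year % change
print('Growth in 2001 vs 2000:', yoy_growth.loc['2001'], '%')
print('Growth in 2011 vs 2010:', yoy_growth.loc['2011'], '%')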
anti_religion = hc['All Religion: Incidents']
anti_religion.plot(kind='line',
title = 'Hate Crimes Against All Religion',
sharey = True,
sharex = True,
use_index = True,
legend = True)
anti_religion_2011 = anti_religion[5]
anti_religion_2010 = anti_religion[4]
anti_religion_2012 = anti_religion[6]
avg_before_2011 = anti_religion[:5].mean()
avg_after_2011 = anti_religion[6:].mean()
avg_after_2008 = anti_religion[13:].mean()
print('Average Number of Crimes before 2011 : ', avg_before_2011)
print('Average Number of Crimes after 2011 : ', avg_after_2011)
print('Average Number of Crimes after 2008 : ', avg_after_2008)
print('Hate Crimes in 2011 : ', anti_religion_2011)
anti_muslim_percentage= (hc['Anti-Islamic: Incidents']/hc['All Religion: Incidents'])*100
anti_muslim_percentage.plot(kind = 'line',
title = 'Percentage of Hate Crimes Against Muslims Among All Religion',
sharey = True,
sharex = True,
use_index = True)
avg_before_2011 = anti_muslim_percentage[:5].mean()
# not including 2011 in either average before or after 2011
avg_after_2011 = anti_muslim_percentage[6:].mean()
perc_increase = (((avg_after_2011 - avg_before_2011)/avg_before_2011)*100)
print(avg_before_2011, avg_after_2011, perc_increase)
growth_list = []
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Import (2005 - 2015)
Step2: 2014 Data
Step3: 2013 Data
Step4: 2012 Data
Step5: 2011 Data
Step6: 2010 Data
Step7: 2009 Data
Step8: 2007 Data
Step9: 2006 Data
Step10: 2005 Data
Step11: Data Web Scraping
Step12: Data Import 1996 - 2004
Steps for Data Collection using example of 2003
Step13: DataFrame Description for a particular year
Step14: Combining DataFrames for all years into one DataFrame
Step15: Q
Step16: Answer
Step17: Q
Step18: Answer
Step19: Answer
Step20: Answer
|
15,412 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
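# Editorial usage note (hypothetical illustration only; no real model metadata is asserted):
# for a multi-valued ENUM property such as this one, each selected choice would go through
# its own call, e.g.
#     DOC.set_value("primitive equations")
#     DOC.set_value("hydrostatic")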
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
15,413 | <ASSISTANT_TASK:>
Python Code:
def hi():
print('Hello world!')
hi()
def cobbDouglas(A,alpha,k):
''' Computes output per worker y given A, alpha, and a value of capital per worker k
Args:
A (float): TFP
alpha (float): Cobb-Douglas parameter
k (float or numpy array): capital per worker
Returns
float or numpy array'''
return A*k**alpha
# Use cobbDouglas to plot the production function for a bunch of values of alpha between 0 and 1.
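# One possible sketch for the exercise above (an editorial illustration, not the required answer).
# It assumes numpy (np) and matplotlib.pyplot (plt) are already imported, as the cells below also rely on them.
k_grid = np.linspace(0, 10, 200)        # grid of capital per worker values
fig, ax = plt.subplots()
for alpha_val in [0.1, 0.3, 0.5, 0.7, 0.9]:
    # hold A fixed at 1 and vary the Cobb-Douglas parameter
    ax.plot(k_grid, cobbDouglas(1, alpha_val, k_grid), lw=3, label='alpha = ' + str(alpha_val))
ax.set_xlabel('capital per worker, k')
ax.set_ylabel('output per worker, y')
ax.set_title('Cobb-Douglas production per worker (A = 1)')
ax.legend()
ax.grid()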
def solow_example(A,alpha,delta,s,n,K0,L0,T):
'''Returns DataFrame with simulated values for a Solow model with labor growth and constant TFP
Args:
A (float): TFP
alpha (float): Cobb-Douglas production function parameter
delta (float): capital deprection rate
s (float): saving rate
n (float): labor force growth rate
K0 (float): initial capital stock
L0 (float): initial labor force
T (int): number of periods to simulate
Returns:
pandas DataFrame with columns:
'capital', 'labor', 'output', 'consumption', 'investment',
'capital_pw','output_pw', 'consumption_pw', 'investment_pw'
'''
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
capital = np.zeros(T+1)
capital[0] = K0
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
labor = np.zeros(T+1)
labor[0] = L0
# Compute all capital and labor values by iterating over t from 0 through T
for t in np.arange(T):
labor[t+1] = (1+n)*labor[t]
capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]
    # Store the simulated capital and labor values in a pandas DataFrame called df
df = pd.DataFrame({'capital':capital,'labor':labor})
# Create columns in the DataFrame to store computed values of the other endogenous variables
    df['output'] = A*df['capital']**alpha*df['labor']**(1-alpha)
df['consumption'] = (1-s)*df['output']
df['investment'] = df['output'] - df['consumption']
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
df['capital_pw'] = df['capital']/df['labor']
df['output_pw'] = df['output']/df['labor']
df['consumption_pw'] = df['consumption']/df['labor']
df['investment_pw'] = df['investment']/df['labor']
return df
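# Editorial addition (a quick consistency check, not part of the original routine): with constant
# TFP, capital per worker in this model converges to the analytical steady state
# k* = (s*A/(n+delta))**(1/(1-alpha)), so the tail of a long simulation should sit near this value.
def solow_steady_state_k(A, alpha, delta, s, n):
    '''Analytical steady-state capital per worker implied by the parameters.'''
    return (s*A/(n+delta))**(1/(1-alpha))
# Compare with the simulation below, which uses the same parameter values
print(solow_steady_state_k(A=10, alpha=0.35, delta=0.1, s=0.15, n=0.01))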
# Create the DataFrame with simulated values
df = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df['investment_pw'],lw=3)
ax.grid()
ax.set_title('Investment per worker')
df1 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)
df2 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df1['capital_pw'],lw=3)
ax.plot(df2['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df1['output_pw'],lw=3)
ax.plot(df2['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df1['consumption_pw'],lw=3)
ax.plot(df2['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df1['investment_pw'],lw=3,label='$k_0=20$')
ax.plot(df2['investment_pw'],lw=3,label='$k_0=10$')
ax.grid()
ax.set_title('Investment per worker')
ax.legend(loc='lower right')
df1 = solow_example(A=5,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
df2 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
df3 = solow_example(A=15,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df1['capital_pw'],lw=3)
ax.plot(df2['capital_pw'],lw=3)
ax.plot(df3['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df1['output_pw'],lw=3)
ax.plot(df2['output_pw'],lw=3)
ax.plot(df3['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df1['consumption_pw'],lw=3)
ax.plot(df2['consumption_pw'],lw=3)
ax.plot(df3['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df1['investment_pw'],lw=3,label='$A=5$')
ax.plot(df2['investment_pw'],lw=3,label='$A=10$')
ax.plot(df3['investment_pw'],lw=3,label='$A=15$')
ax.grid()
ax.set_title('Investment per worker')
ax.legend(loc='lower right',ncol=3)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice that in the previous example, the function takes no arguments and returns nothing. It just does the task that it's supposed to.
Step2: Note that cobbDouglas() has a docstring. The docstring is optional, but it tells users about the function. The contents of the docstring can be accessed with the help() function. It's good practice to make use of docstrings.
Step3: The Solow model with exogenous population growth
Step4: Example
Step5: Example
Step6: Example
|
15,414 | <ASSISTANT_TASK:>
Python Code:
from search import *
from notebook import psource, heatmap, gaussian_kernel, show_map, final_path_colors, display_visual, plot_NQueens
# Needed to hide warnings in the matplotlib sections
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
from matplotlib import lines
from ipywidgets import interact
import ipywidgets as widgets
from IPython.display import display
import time
psource(Problem)
psource(Node)
psource(GraphProblem)
romania_map = UndirectedGraph(dict(
Arad=dict(Zerind=75, Sibiu=140, Timisoara=118),
Bucharest=dict(Urziceni=85, Pitesti=101, Giurgiu=90, Fagaras=211),
Craiova=dict(Drobeta=120, Rimnicu=146, Pitesti=138),
Drobeta=dict(Mehadia=75),
Eforie=dict(Hirsova=86),
Fagaras=dict(Sibiu=99),
Hirsova=dict(Urziceni=98),
Iasi=dict(Vaslui=92, Neamt=87),
Lugoj=dict(Timisoara=111, Mehadia=70),
Oradea=dict(Zerind=71, Sibiu=151),
Pitesti=dict(Rimnicu=97),
Rimnicu=dict(Sibiu=80),
Urziceni=dict(Vaslui=142)))
romania_map.locations = dict(
Arad=(91, 492), Bucharest=(400, 327), Craiova=(253, 288),
Drobeta=(165, 299), Eforie=(562, 293), Fagaras=(305, 449),
Giurgiu=(375, 270), Hirsova=(534, 350), Iasi=(473, 506),
Lugoj=(165, 379), Mehadia=(168, 339), Neamt=(406, 537),
Oradea=(131, 571), Pitesti=(320, 368), Rimnicu=(233, 410),
Sibiu=(207, 457), Timisoara=(94, 410), Urziceni=(456, 350),
Vaslui=(509, 444), Zerind=(108, 531))
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
romania_locations = romania_map.locations
print(romania_locations)
# node colors, node positions and node label positions
node_colors = {node: 'white' for node in romania_map.locations.keys()}
node_positions = romania_map.locations
node_label_pos = { k:[v[0],v[1]-10] for k,v in romania_map.locations.items() }
edge_weights = {(k, k2) : v2 for k, v in romania_map.graph_dict.items() for k2, v2 in v.items()}
romania_graph_data = { 'graph_dict' : romania_map.graph_dict,
'node_colors': node_colors,
'node_positions': node_positions,
'node_label_positions': node_label_pos,
'edge_weights': edge_weights
}
show_map(romania_graph_data)
psource(SimpleProblemSolvingAgentProgram)
class vacuumAgent(SimpleProblemSolvingAgentProgram):
def update_state(self, state, percept):
return percept
def formulate_goal(self, state):
goal = [state7, state8]
return goal
def formulate_problem(self, state, goal):
problem = state
return problem
def search(self, problem):
if problem == state1:
seq = ["Suck", "Right", "Suck"]
elif problem == state2:
seq = ["Suck", "Left", "Suck"]
elif problem == state3:
seq = ["Right", "Suck"]
elif problem == state4:
seq = ["Suck"]
elif problem == state5:
seq = ["Suck"]
elif problem == state6:
seq = ["Left", "Suck"]
return seq
state1 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state2 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state3 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state4 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state5 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state6 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state7 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
state8 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
a = vacuumAgent(state1)
print(a(state6))
print(a(state1))
print(a(state3))
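# Expected behaviour (a hedged note, assuming the textbook SimpleProblemSolvingAgentProgram, which
# only formulates a new plan when its queued action sequence is empty and otherwise pops the next
# queued action): the three calls above should print something like 'Left', then 'Suck', then 'Right'.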
def tree_breadth_search_for_vis(problem):
"""Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
Don't worry about repeated paths to a state. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the queue
frontier = deque([Node(problem.initial)])
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of queue
node = frontier.popleft()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def breadth_first_tree_search(problem):
"Search the shallowest nodes in the search tree first."
iterations, all_node_colors, node = tree_breadth_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
a, b, c = breadth_first_tree_search(romania_problem)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=romania_problem)
def tree_depth_search_for_vis(problem):
"""Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
Don't worry about repeated paths to a state. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the stack
frontier = [Node(problem.initial)]
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_tree_search(problem):
"Search the deepest nodes in the search tree first."
iterations, all_node_colors, node = tree_depth_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=romania_problem)
def breadth_first_search_graph(problem):
"[Figure 3.11]"
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = deque([node])
# modify the color of frontier nodes to blue
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.popleft()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
if problem.goal_test(child.state):
node_colors[child.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, child)
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=romania_problem)
def graph_search_for_vis(problem):
"""Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
If two paths reach a state, only use the first one. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [(Node(problem.initial))]
explored = set()
# modify the color of frontier nodes to orange
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_graph_search(problem):
"""Search the deepest nodes in the search tree first."""
iterations, all_node_colors, node = graph_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=romania_problem)
def best_first_graph_search_for_vis(problem, f):
"""Search the nodes with the lowest f scores first.
You specify the function f(node) that you want to minimize; for example,
if f is a heuristic estimate to the goal, then we have greedy best
first search; if f is node.depth then we have breadth-first search.
There is a subtlety: the line "f = memoize(f, 'f')" means that the f
values will be cached on the nodes as they are computed. So after doing
a best first search you can examine the f values of the path returned."""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
f = memoize(f, 'f')
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = PriorityQueue('min', f)
frontier.append(node)
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.pop()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
elif child in frontier:
incumbent = frontier[child]
if f(child) < incumbent:
del frontier[child]
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def uniform_cost_search_graph(problem):
"[Figure 3.14]"
#Uniform Cost Search uses Best First Search algorithm with f(n) = g(n)
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda node: node.path_cost)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=romania_problem)
def depth_limited_search_graph(problem, limit = -1):
'''
Perform depth first search of graph g.
if limit >= 0, that is the maximum depth of the search.
'''
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [Node(problem.initial)]
explored = set()
cutoff_occurred = False
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of queue
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
elif limit >= 0:
cutoff_occurred = True
limit += 1
all_node_colors.pop()
iterations -= 1
node_colors[node.state] = "gray"
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
limit -= 1
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return 'cutoff' if cutoff_occurred else None
def depth_limited_search_for_vis(problem):
"""Search the deepest nodes in the search tree first."""
iterations, all_node_colors, node = depth_limited_search_graph(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=romania_problem)
def iterative_deepening_search_for_vis(problem):
for depth in range(sys.maxsize):
iterations, all_node_colors, node=depth_limited_search_for_vis(problem)
if iterations:
return (iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=romania_problem)
def greedy_best_first_search(problem, h=None):
"""Greedy Best-first graph search is an informative searching algorithm with f(n) = h(n).
You need to specify the h function when you call best_first_search, or
else in your Problem subclass."""
h = memoize(h or problem.h, 'h')
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda n: h(n))
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=greedy_best_first_search,
problem=romania_problem)
def astar_search_graph(problem, h=None):
"""A* search is best-first graph search with f(n) = g(n)+h(n).
You need to specify the h function when you call astar_search, or
else in your Problem subclass."""
h = memoize(h or problem.h, 'h')
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem,
lambda n: n.path_cost + h(n))
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=astar_search_graph,
problem=romania_problem)
def recursive_best_first_search_for_vis(problem, h=None):
"""[Figure 3.26] Recursive best-first search"""
# we use these two variables at the time of visualizations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
h = memoize(h or problem.h, 'h')
def RBFS(problem, node, flimit):
nonlocal iterations
def color_city_and_update_map(node, color):
node_colors[node.state] = color
nonlocal iterations
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
color_city_and_update_map(node, 'green')
return (iterations, all_node_colors, node), 0 # the second value is immaterial
successors = node.expand(problem)
if len(successors) == 0:
color_city_and_update_map(node, 'gray')
return (iterations, all_node_colors, None), infinity
for s in successors:
color_city_and_update_map(s, 'orange')
s.f = max(s.path_cost + h(s), node.f)
while True:
# Order by lowest f value
successors.sort(key=lambda x: x.f)
best = successors[0]
if best.f > flimit:
color_city_and_update_map(node, 'gray')
return (iterations, all_node_colors, None), best.f
if len(successors) > 1:
alternative = successors[1].f
else:
alternative = infinity
node_colors[node.state] = 'gray'
node_colors[best.state] = 'red'
iterations += 1
all_node_colors.append(dict(node_colors))
result, best.f = RBFS(problem, best, min(flimit, alternative))
if result[2] is not None:
color_city_and_update_map(node, 'green')
return result, best.f
else:
color_city_and_update_map(node, 'red')
node = Node(problem.initial)
node.f = h(node)
node_colors[node.state] = 'red'
iterations += 1
all_node_colors.append(dict(node_colors))
result, bestf = RBFS(problem, node, infinity)
return result
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=recursive_best_first_search_for_vis,
problem=romania_problem)
all_node_colors = []
# display_visual(romania_graph_data, user_input=True, algorithm=breadth_first_tree_search)
algorithms = { "Breadth First Tree Search": tree_breadth_search_for_vis,
"Depth First Tree Search": tree_depth_search_for_vis,
"Breadth First Search": breadth_first_search_graph,
"Depth First Graph Search": graph_search_for_vis,
"Best First Graph Search": best_first_graph_search_for_vis,
"Uniform Cost Search": uniform_cost_search_graph,
"Depth Limited Search": depth_limited_search_for_vis,
"Iterative Deepening Search": iterative_deepening_search_for_vis,
"Greedy Best First Search": greedy_best_first_search,
"A-star Search": astar_search_graph,
"Recursive Best First Search": recursive_best_first_search_for_vis}
display_visual(romania_graph_data, algorithm=algorithms, user_input=True)
psource(recursive_best_first_search)
recursive_best_first_search(romania_problem).solution()
puzzle = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
assert puzzle.check_solvability((2, 4, 3, 1, 5, 6, 7, 8, 0))
recursive_best_first_search(puzzle).solution()
goal = [1, 2, 3, 4, 5, 6, 7, 8, 0]
# Heuristics for 8 Puzzle Problem
import math
def linear(node):
return sum([1 if node.state[i] != goal[i] else 0 for i in range(8)])
def manhattan(node):
state = node.state
index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}
index_state = {}
index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]
x, y = 0, 0
for i in range(len(state)):
index_state[state[i]] = index[i]
mhd = 0
for i in range(8):
for j in range(2):
mhd = abs(index_goal[i][j] - index_state[i][j]) + mhd
return mhd
def sqrt_manhattan(node):
state = node.state
index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}
index_state = {}
index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]
x, y = 0, 0
for i in range(len(state)):
index_state[state[i]] = index[i]
mhd = 0
for i in range(8):
for j in range(2):
mhd = (index_goal[i][j] - index_state[i][j])**2 + mhd
return math.sqrt(mhd)
def max_heuristic(node):
score1 = manhattan(node)
score2 = linear(node)
return max(score1, score2)
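# Note (an optional, hypothetical variant that is not in the original notebook): manhattan() above
# loops over range(8), so it counts the blank tile 0 and skips tile 8. A more conventional Manhattan
# heuristic ignores the blank and covers every numbered tile, e.g.:
def manhattan_no_blank(node):
    index = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
    goal_pos = {tile: index[i] for i, tile in enumerate(goal)}        # tile value -> (row, col) in goal
    curr_pos = {tile: index[i] for i, tile in enumerate(node.state)}  # tile value -> (row, col) now
    return sum(abs(goal_pos[t][0] - curr_pos[t][0]) + abs(goal_pos[t][1] - curr_pos[t][1])
               for t in range(1, 9))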
# Solving the puzzle
puzzle = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
puzzle.check_solvability((2, 4, 3, 1, 5, 6, 7, 8, 0)) # checks whether the initialized configuration is solvable or not
astar_search(puzzle).solution()
astar_search(puzzle, linear).solution()
astar_search(puzzle, manhattan).solution()
astar_search(puzzle, sqrt_manhattan).solution()
astar_search(puzzle, max_heuristic).solution()
recursive_best_first_search(puzzle, manhattan).solution()
puzzle_1 = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
puzzle_2 = EightPuzzle((1, 2, 3, 4, 5, 6, 0, 7, 8))
puzzle_3 = EightPuzzle((1, 2, 3, 4, 5, 7, 8, 6, 0))
%%timeit
astar_search(puzzle_1)
astar_search(puzzle_2)
astar_search(puzzle_3)
%%timeit
astar_search(puzzle_1, linear)
astar_search(puzzle_2, linear)
astar_search(puzzle_3, linear)
%%timeit
astar_search(puzzle_1, manhattan)
astar_search(puzzle_2, manhattan)
astar_search(puzzle_3, manhattan)
%%timeit
astar_search(puzzle_1, sqrt_manhattan)
astar_search(puzzle_2, sqrt_manhattan)
astar_search(puzzle_3, sqrt_manhattan)
%%timeit
astar_search(puzzle_1, max_heuristic)
astar_search(puzzle_2, max_heuristic)
astar_search(puzzle_3, max_heuristic)
%%timeit
recursive_best_first_search(puzzle_1, linear)
recursive_best_first_search(puzzle_2, linear)
recursive_best_first_search(puzzle_3, linear)
psource(hill_climbing)
class TSP_problem(Problem):
"""subclass of Problem to define various functions"""
def two_opt(self, state):
"""Neighbour generating function for Traveling Salesman Problem"""
neighbour_state = state[:]
left = random.randint(0, len(neighbour_state) - 1)
right = random.randint(0, len(neighbour_state) - 1)
if left > right:
left, right = right, left
neighbour_state[left: right + 1] = reversed(neighbour_state[left: right + 1])
return neighbour_state
def actions(self, state):
"""action that can be executed in given state"""
return [self.two_opt]
def result(self, state, action):
"""result after applying the given action on the given state"""
return action(state)
def path_cost(self, c, state1, action, state2):
"""total distance for the Traveling Salesman to be covered if in state2"""
cost = 0
for i in range(len(state2) - 1):
cost += distances[state2[i]][state2[i + 1]]
cost += distances[state2[0]][state2[-1]]
return cost
def value(self, state):
"""negative of the path cost for the given state (hill climbing maximizes this value)"""
return -1 * self.path_cost(None, None, None, state)
distances = {}
all_cities = []
for city in romania_map.locations.keys():
distances[city] = {}
all_cities.append(city)
all_cities.sort()
print(all_cities)
import numpy as np
for name_1, coordinates_1 in romania_map.locations.items():
for name_2, coordinates_2 in romania_map.locations.items():
distances[name_1][name_2] = np.linalg.norm(
[coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
distances[name_2][name_1] = np.linalg.norm(
[coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
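# Quick sanity check (illustrative only): these are straight-line distances in map coordinates,
# not the road distances stored in romania_map, so e.g. Arad-Sibiu comes out near 121 map units
# rather than 140.
print(distances['Arad']['Sibiu'])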
def hill_climbing(problem):
"""From the initial node, keep choosing the neighbor with highest value,
stopping when no neighbor is better. [Figure 4.2]"""
def find_neighbors(state, number_of_neighbors=100):
"""finds neighbors using two_opt method"""
neighbors = []
for i in range(number_of_neighbors):
new_state = problem.two_opt(state)
neighbors.append(Node(new_state))
state = new_state
return neighbors
# as this is a stochastic algorithm, we will set a cap on the number of iterations
iterations = 10000
current = Node(problem.initial)
while iterations:
neighbors = find_neighbors(current.state)
if not neighbors:
break
neighbor = argmax_random_tie(neighbors,
key=lambda node: problem.value(node.state))
if problem.value(neighbor.state) <= problem.value(current.state):
# Note that it is based on negative path cost method
current.state = neighbor.state
iterations -= 1
return current.state
tsp = TSP_problem(all_cities)
hill_climbing(tsp)
psource(simulated_annealing)
psource(exp_schedule)
initial = (0, 0)
grid = [[3, 7, 2, 8], [5, 2, 9, 1], [5, 3, 3, 1]]
directions4
problem = PeakFindingProblem(initial, grid, directions4)
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
grid = gaussian_kernel()
heatmap(grid, cmap='jet', interpolation='spline16')
directions8
problem = PeakFindingProblem(initial, grid, directions8)
%%timeit
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
%%timeit
solution = problem.value(hill_climbing(problem))
solution = problem.value(hill_climbing(problem))
solution
grid = [[0, 0, 0, 1, 4],
[0, 0, 2, 8, 10],
[0, 0, 2, 4, 12],
[0, 2, 4, 8, 16],
[1, 4, 8, 16, 32]]
heatmap(grid, cmap='jet', interpolation='spline16')
problem = PeakFindingProblem(initial, grid, directions8)
solution = problem.value(hill_climbing(problem))
solution
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
psource(genetic_algorithm)
psource(recombine)
psource(mutate)
psource(init_population)
target = 'Genetic Algorithm'
# The ASCII values of uppercase characters ranges from 65 to 91
u_case = [chr(x) for x in range(65, 91)]
# The ASCII values of lowercase characters ranges from 97 to 123
l_case = [chr(x) for x in range(97, 123)]
gene_pool = []
gene_pool.extend(u_case) # adds the uppercase list to the gene pool
gene_pool.extend(l_case) # adds the lowercase list to the gene pool
gene_pool.append(' ') # adds the space character to the gene pool
max_population = 100
mutation_rate = 0.07 # 7%
def fitness_fn(sample):
# initialize fitness to 0
fitness = 0
for i in range(len(sample)):
# increment fitness by 1 for every matching character
if sample[i] == target[i]:
fitness += 1
return fitness
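# Illustrative sanity check (not in the original notebook): a perfect individual should score
# exactly len(target), i.e. one point per matching character.
assert fitness_fn(list(target)) == len(target)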
population = init_population(max_population, gene_pool, len(target))
parents = select(2, population, fitness_fn)
# The recombine function takes two parents as arguments, so we need to unpack the previous variable
child = recombine(*parents)
child = mutate(child, gene_pool, mutation_rate)
population = [mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate) for i in range(len(population))]
current_best = max(population, key=fitness_fn)
print(current_best)
current_best_string = ''.join(current_best)
print(current_best_string)
ngen = 1200 # maximum number of generations
# we set the threshold fitness equal to the length of the target phrase
# i.e. the algorithm only terminates when it has got all the characters correct
# or it has completed 'ngen' number of generations
f_thres = len(target)
def genetic_algorithm_stepwise(population, fitness_fn, gene_pool=[0, 1], f_thres=None, ngen=1200, pmut=0.1):
for generation in range(ngen):
population = [mutate(recombine(*select(2, population, fitness_fn)), gene_pool, pmut) for i in range(len(population))]
# stores the individual genome with the highest fitness in the current population
current_best = ''.join(max(population, key=fitness_fn))
print(f'Current best: {current_best}\t\tGeneration: {str(generation)}\t\tFitness: {fitness_fn(current_best)}\r', end='')
# compare the fitness of the current best individual to f_thres
fittest_individual = fitness_threshold(fitness_fn, f_thres, population)
# if fitness is greater than or equal to f_thres, we terminate the algorithm
if fittest_individual:
return fittest_individual, generation
return max(population, key=fitness_fn) , generation
psource(genetic_algorithm)
population = init_population(max_population, gene_pool, len(target))
solution, generations = genetic_algorithm_stepwise(population, fitness_fn, gene_pool, f_thres, ngen, mutation_rate)
edges = {
'A': [0, 1],
'B': [0, 3],
'C': [1, 2],
'D': [2, 3]
}
population = init_population(8, ['R', 'G'], 4)
print(population)
def fitness(c):
return sum(c[n1] != c[n2] for (n1, n2) in edges.values())
solution = genetic_algorithm(population, fitness, gene_pool=['R', 'G'])
print(solution)
print(fitness(solution))
population = init_population(100, range(8), 8)
print(population[:5])
def fitness(q):
non_attacking = 0
for row1 in range(len(q)):
for row2 in range(row1+1, len(q)):
col1 = int(q[row1])
col2 = int(q[row2])
row_diff = row1 - row2
col_diff = col1 - col2
if col1 != col2 and row_diff != col_diff and row_diff != -col_diff:
non_attacking += 1
return non_attacking
solution = genetic_algorithm(population, fitness, f_thres=25, gene_pool=range(8))
print(solution)
print(fitness(solution))
psource(NQueensProblem)
nqp = NQueensProblem(8)
%%timeit
depth_first_tree_search(nqp)
dfts = depth_first_tree_search(nqp).solution()
plot_NQueens(dfts)
%%timeit
breadth_first_tree_search(nqp)
bfts = breadth_first_tree_search(nqp).solution()
plot_NQueens(bfts)
%%timeit
uniform_cost_search(nqp)
ucs = uniform_cost_search(nqp).solution()
plot_NQueens(ucs)
psource(NQueensProblem.h)
%%timeit
astar_search(nqp)
astar = astar_search(nqp).solution()
plot_NQueens(astar)
psource(and_or_graph_search)
vacuum_world = GraphProblemStochastic('State_1', ['State_7', 'State_8'], vacuum_world)
plan = and_or_graph_search(vacuum_world)
plan
def run_plan(state, problem, plan):
if problem.goal_test(state):
return True
if len(plan) != 2:
return False
predicate = lambda x: run_plan(x, problem, plan[1][x])
return all(predicate(r) for r in problem.result(state, plan[0]))
run_plan('State_1', vacuum_world, plan)
psource(OnlineDFSAgent)
psource(LRTAStarAgent)
one_dim_state_space
LRTA_problem = OnlineSearchProblem('State_3', 'State_5', one_dim_state_space)
lrta_agent = LRTAStarAgent(LRTA_problem)
lrta_agent('State_3')
lrta_agent('State_4')
lrta_agent('State_3')
lrta_agent('State_4')
lrta_agent('State_5')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CONTENTS
Step2: PROBLEM
Step3: The Problem class has six methods.
Step4: The Node class has nine methods. The first is the __init__ method.
Step5: Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values.
Step6: It is pretty straightforward to understand this romania_map. The first node Arad has three neighbours named Zerind, Sibiu, Timisoara. Each of these nodes are 75, 140, 118 units apart from Arad respectively. And the same goes with other nodes.
Step7: Romania Map Visualisation
Step8: Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph.
Step9: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. This function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at each and every interval step while searching, using variety of algorithms from the book.
Step10: Voila! You can see the Romania map as shown in Figure 3.2 of the book. Now, let's see how the different search algorithms perform on our problem statements.
Step11: The SimpleProblemSolvingAgentProgram class has six methods
Step12: Now, we will define all the 8 states and create an object of the above class. Then, we will pass it different states and check the output
Step14: SEARCHING ALGORITHMS VISUALIZATION
Step15: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button.
Step17: 2. DEPTH-FIRST TREE SEARCH
Step18: 3. BREADTH-FIRST GRAPH SEARCH
Step21: 4. DEPTH-FIRST GRAPH SEARCH
Step23: 5. BEST FIRST SEARCH
Step24: 6. UNIFORM COST SEARCH
Step26: 7. DEPTH LIMITED SEARCH
Step27: 8. ITERATIVE DEEPENING SEARCH
Step29: 9. GREEDY BEST FIRST SEARCH
Step31: 10. A* SEARCH
Step33: 11. RECURSIVE BEST FIRST SEARCH
Step34: RECURSIVE BEST-FIRST SEARCH
Step35: This is how recursive_best_first_search can solve the romania_problem
Step36: recursive_best_first_search can be used to solve the 8 puzzle problem too, as discussed later.
Step37: A* HEURISTICS
Step38: Heuristics
Step39: We can solve the puzzle using the astar_search method.
Step40: This case is solvable, let's proceed.
Step41: In the following cells, we use different heuristic functions.
Step42: And here's how recursive_best_first_search can be used to solve this problem too.
Step43: Even though all the heuristic functions give the same solution, the difference lies in the computation time.
Step44: The default heuristic function is the same as the linear heuristic function, but we'll still check both.
Step45: We can infer that the manhattan heuristic function works the fastest.
Step46: It is quite a lot slower than astar_search as we can see.
Step53: We will find an approximate solution to the traveling salespersons problem using this algorithm.
Step54: We will use cities from the Romania map as our cities for this problem.
Step55: Next, we need to populate the individual lists inside the dictionary with the straight-line (Euclidean) distance between the cities' map coordinates.
Step59: The way neighbours are chosen currently isn't suitable for the travelling salespersons problem.
Step60: An instance of the TSP_problem class will be created.
Step61: We can now generate an approximate solution to the problem by calling hill_climbing.
Step62: The solution looks like this.
Step63: The temperature is gradually decreased over the course of the iteration.
Step64: Next, we'll define a peak-finding problem and try to solve it using Simulated Annealing.
Step65: We want to allow only four directions, namely N, S, E and W.
Step66: Define a problem with these parameters.
Step67: We'll run simulated_annealing a few times and store the solutions in a set.
Step68: Hence, the maximum value is 9.
Step69: Let's use the heatmap function from notebook.py to plot this.
Step70: Let's define the problem.
Step71: We'll solve the problem just like we did last time.
Step72: The peak is at 1.0 which is how gaussian distributions are defined.
Step73: As you can see, Hill-Climbing is about 24 times faster than Simulated Annealing.
Step74: The peak value is 32 at the lower right corner.
Step75: Solution by Hill Climbing
Step76: Solution by Simulated Annealing
Step77: Notice that even though both algorithms started at the same initial state,
Step78: The algorithm takes the following input
Step79: The method picks at random a point and merges the parents (x and y) around it.
Step80: We pick a gene in x to mutate and a gene from the gene pool to replace it with.
Step81: The function takes as input the number of individuals in the population, the gene pool and the length of each individual/state. It creates individuals with random genes and returns the population when done.
Step82: We then need to define our gene pool, i.e. the elements of which an individual in the population might be composed. Here, the gene pool contains all uppercase and lowercase letters of the English alphabet and the space character.
Step83: We now need to define the maximum size of each population. Larger populations have more variation but are computationally more expensive to run algorithms on.
Step84: As our population is not very large, we can afford to keep a relatively large mutation rate.
Step85: Great! Now, we need to define the most important metric for the genetic algorithm, i.e the fitness function. This will simply return the number of matching characters between the generated sample and the target phrase.
Step86: Before we run our genetic algorithm, we need to initialize a random population. We will use the init_population function to do this. We need to pass in the maximum population size, the gene pool and the length of each individual, which in this case will be the same as the length of the target phrase.
Step87: We will now define how the individuals in the population should change as the number of generations increases. First, the select function will be run on the population to select two individuals with high fitness values. These will be the parents which will then be recombined using the recombine function to generate the child.
Step88: Next, we need to apply a mutation according to the mutation rate. We call the mutate function on the child with the gene pool and mutation rate as the additional arguments.
Step89: The above lines can be condensed into
Step90: The individual with the highest fitness can then be found using the max function.
Step91: Let's print this out
Step92: We see that this is a list of characters. This can be converted to a string using the join function
Step93: We now need to define the conditions to terminate the algorithm. This can happen in two ways
Step94: To generate ngen number of generations, we run a for loop ngen number of times. After each generation, we calculate the fitness of the best individual of the generation and compare it to the value of f_thres using the fitness_threshold function. After every generation, we print out the best individual of the generation and the corresponding fitness value. Let's now write a function to do this.
Step95: The function defined above is essentially the same as the one defined in search.py with the added functionality of printing out the data of each generation.
Step96: We have defined all the required functions and variables. Let's now create a new population and test the function we wrote above.
Step97: The genetic algorithm was able to converge!
Step98: Edge 'A' connects nodes 0 and 1, edge 'B' connects nodes 0 and 3 etc.
Step99: We created and printed the population. You can see that the genes in the individuals are random and there are 8 individuals each with 4 genes.
Step100: Great! Now we will run the genetic algorithm and see what solution it gives.
Step101: The algorithm converged to a solution. Let's check its score
Step102: The solution has a score of 4. Which means it is optimal, since we have exactly 4 edges in our graph, meaning all are valid!
Step103: We have a population of 100 and each individual has 8 genes. The gene pool is the integers from 0 to 7, in string form. Above you can see the first five individuals.
Step104: Note that the best score achievable is 28. That is because for each queen we only check for the queens after her. For the first queen we check 7 other queens, for the second queen 6 others and so on. In short, the number of checks we make is the sum 7+6+5+...+1. Which is equal to 7*(7+1)/2 = 28.
Step105: Above you can see the solution and its fitness score, which should be no less than 25.
Step106: In csp.ipynb we have seen that the N-Queens problem can be formulated as a CSP and can be solved by
Step107: Let's use depth_first_tree_search first.
Step108: breadth_first_tree_search
Step109: uniform_cost_search
Step110: depth_first_tree_search is almost 20 times faster than breadth_first_tree_search and more than 200 times faster than uniform_cost_search.
Step111: astar_search is faster than both uniform_cost_search and breadth_first_tree_search.
Step112: AND-OR GRAPH SEARCH
Step113: The search is carried out by two functions and_search and or_search that recursively call each other, traversing nodes sequentially.
Step114: ONLINE DFS AGENT
Step115: It maintains two dictionaries untried and unbacktracked.
Step116: H stores the heuristic cost of the paths the agent may travel to.
Step117: Let's define an instance of OnlineSearchProblem.
Step118: Now we initialize a LRTAStarAgent object for the problem we just defined.
Step119: We'll pass the percepts [State_3, State_4, State_3, State_4, State_5] one-by-one to our agent to see what action it comes up with at each timestep.
Step120: If you manually try to see what the optimal action should be at each step, the outputs of the lrta_agent will start to make sense if it doesn't already.
|
15,415 | <ASSISTANT_TASK:>
Python Code:
d = triand['dh'].data
d_cut = (d > 15) & (d < 21)
triand_dist = triand[d_cut]
c_triand = _c_triand[d_cut]
print(len(triand_dist))
plt.hist(triand_dist['<Vmag>'].data)
ptf_triand = ascii.read("/Users/adrian/projects/streams/data/observing/triand.txt")
ptf_c = coord.SkyCoord(ra=ptf_triand['ra']*u.deg, dec=ptf_triand['dec']*u.deg)
print(ptf_triand.colnames, len(ptf_triand))
obs_dist = distance(ptf_triand['Vmag'].data)
((obs_dist > 12*u.kpc) & (obs_dist < 25*u.kpc)).sum()
ptf_triand[0]
rrlyr_d = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt",
skiprows=2, dtype=None, names=['l','b','vhel','vgsr','src','ra','dec','name','dist'])
obs_rrlyr = rrlyr_d[rrlyr_d['src'] == 'PTF']
fig,ax = plt.subplots(1,1,figsize=(10,8))
# ax.plot(c.galactic.l.degree, c.galactic.b.degree, linestyle='none',
# marker='o', markersize=4, alpha=0.75) # ALL RR LYRAE
ax.plot(c_triand.galactic.l.degree, c_triand.galactic.b.degree, linestyle='none',
marker='o', markersize=5, alpha=0.75)
ax.plot(ptf_c.galactic.l.degree, ptf_c.galactic.b.degree, linestyle='none',
marker='o', markerfacecolor='none', markeredgewidth=2, markersize=12, alpha=0.75)
ax.plot(obs_rrlyr['l'], obs_rrlyr['b'], linestyle='none', mec='r',
marker='o', markerfacecolor='none', markeredgewidth=2, markersize=12, alpha=0.75)
# x = np.linspace(-10,40,100)
# x[x < 0] += 360.
# y = np.linspace(30,45,100)
# x,y = map(np.ravel, np.meshgrid(x,y))
# ccc = coord.SkyCoord(ra=x*u.deg,dec=y*u.deg)
# ax.plot(ccc.galactic.l.degree, ccc.galactic.b.degree, linestyle='none')
ax.set_xlim(97,162)
ax.set_ylim(-37,-13)
ax.set_xlabel("$l$ [deg]")
ax.set_ylabel("$b$ [deg]")
fig,ax = plt.subplots(1,1,figsize=(10,8))
ax.plot(c_triand.galactic.l.degree, c_triand.galactic.b.degree, linestyle='none',
marker='o', markersize=4, alpha=0.75)
ax.plot(ptf_c.galactic.l.degree, ptf_c.galactic.b.degree, linestyle='none',
marker='o', markerfacecolor='none', markeredgewidth=2, markersize=8, alpha=0.75)
ax.plot(obs_rrlyr['l'], obs_rrlyr['b'], linestyle='none', mec='r',
marker='o', markerfacecolor='none', markeredgewidth=2, markersize=8, alpha=0.75)
ax.plot(c_triand.galactic.l.degree[10], c_triand.galactic.b.degree[10], linestyle='none',
marker='o', markersize=25, alpha=0.75)
ax.set_xlim(97,162)
ax.set_ylim(-37,-13)
c_triand.icrs[10]
brani = ascii.read("/Users/adrian/projects/triand-rrlyrae/brani_sample/TriAnd.dat")
blaschko = brani[(brani['objectID'] == "13322281016459551106") | (brani['objectID'] == "13879390364114107826")]
for b in blaschko:
row = ptf_triand[np.argmin(np.sqrt((ptf_triand['ra'] - b['ra'])**2 + (ptf_triand['dec'] - b['dec'])**2))]
print(row['name'])
print(coord.SkyCoord(ra=row['ra']*u.deg, dec=row['dec']*u.deg).galactic)
zip(obs_rrlyr['l'], obs_rrlyr['b'])
d = V_to_dist(triand['<Vmag>'].data).to(u.kpc).value
bins = np.arange(1., 60+5, 3)
plt.figure(figsize=(10,8))
n,bins,patches = plt.hist(triand['dh'].data, bins=bins, alpha=0.5, label='Catalina')
for pa in patches:
if pa.xy[0] < 15. or pa.xy[0] > 40.:
pa.set_alpha(0.2)
# other_bins = np.arange(0, 15+2., 2.)
# plt.hist(V_to_dist(triand['<Vmag>'].data), bins=other_bins, alpha=0.2, color='k')
# other_bins = np.arange(40, 60., 2.)
# plt.hist(V_to_dist(triand['<Vmag>'].data), bins=other_bins, alpha=0.2, color='k')
plt.hist(V_to_dist(ptf_triand['Vmag'].data),
bins=bins, alpha=0.5, label='PTF/MDM')
plt.xlabel("Distance [kpc]")
plt.ylabel("Number")
# plt.ylim(0,35)
plt.legend(fontsize=20)
plt.axvline(18.)
plt.axvline(28.)
import emcee
import triangle
from scipy.misc import logsumexp
((distance(triand['<Vmag>'].data) > (15.*u.kpc)) & (distance(triand['<Vmag>'].data) < (40.*u.kpc))).sum()
!head -n3 /Users/adrian/projects/triand-rrlyrae/data/triand_giants.txt
d = np.loadtxt("/Users/adrian/projects/triand-rrlyrae/data/triand_giants.txt", skiprows=1)
d2 = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/TriAnd_Mgiant.txt", skiprows=2)
plt.plot(d[:,0], d[:,2], linestyle='none')
plt.plot(d2[:,0], d2[:,3], linestyle='none')
ix = (d[:,2] < 100) & (d[:,2] > -50)
ix = np.ones_like(ix).astype(bool)
plt.plot(d[ix,0], d[ix,2], linestyle='none')
plt.plot(d[ix,0], -1*d[ix,0] + 170, marker=None)
plt.xlabel('l [deg]')
plt.ylabel('v_r [km/s]')
plt.figure()
plt.plot(d[ix,0], d[ix,1], linestyle='none')
plt.xlabel('l [deg]')
plt.ylabel('b [deg]')
def ln_normal(x, mu, sigma):
return -0.5*np.log(2*np.pi) - np.log(sigma) - 0.5*((x-mu)/sigma)**2
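# Optional sanity check (illustrative, not in the original analysis): the hand-written log of the
# normal density should agree with scipy's implementation.
from scipy.stats import norm
assert np.allclose(ln_normal(1.3, 0.0, 2.0), norm.logpdf(1.3, loc=0.0, scale=2.0))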
# def ln_prior(p):
# m,b,V = p
# if m > 0. or m < -50:
# return -np.inf
# if b < 0 or b > 500:
# return -np.inf
# if V <= 0.:
# return -np.inf
# return -np.log(V)
# def ln_likelihood(p, l, vr, sigma_vr):
# m,b,V = p
# sigma = np.sqrt(sigma_vr**2 + V**2)
# return ln_normal(vr, m*l + b, sigma)
# mixture model - f_ol is outlier fraction
def ln_prior(p):
m,b,V,f_ol = p
if m > 0. or m < -50:
return -np.inf
if b < 0 or b > 500:
return -np.inf
if V <= 0.:
return -np.inf
if f_ol > 1. or f_ol < 0.:
return -np.inf
return -np.log(V)
def likelihood(p, l, vr, sigma_vr):
m,b,V,f_ol = p
sigma = np.sqrt(sigma_vr**2 + V**2)
term1 = ln_normal(vr, m*l + b, sigma)
term2 = ln_normal(vr, 0., 120.)
return np.array([term1, term2])
def ln_likelihood(p, *args):
m,b,V,f_ol = p
x = likelihood(p, *args)
# coefficients
b = np.zeros_like(x)
b[0] = 1-f_ol
b[1] = f_ol
return logsumexp(x,b=b, axis=0)
def ln_posterior(p, *args):
lnp = ln_prior(p)
if np.isinf(lnp):
return -np.inf
return lnp + ln_likelihood(p, *args).sum()
def outlier_prob(p, *args):
m,b,V,f_ol = p
p1,p2 = likelihood(p, *args)
return f_ol*np.exp(p2) / ((1-f_ol)*np.exp(p1) + f_ol*np.exp(p2))
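# Descriptive note (added for clarity): outlier_prob applies Bayes' rule to the two mixture
# components, returning the posterior probability that a star belongs to the broad "halo"
# (outlier) component rather than the linear TriAnd trend; 1 - outlier_prob is the membership
# probability used for the colour scale in the plots below.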
vr_err = 2 # km/s
nwalkers = 32
sampler = emcee.EnsembleSampler(nwalkers=nwalkers, dim=4, lnpostfn=ln_posterior,
args=(d[ix,0],d[ix,2],vr_err))
p0 = np.zeros((nwalkers,sampler.dim))
p0[:,0] = np.random.normal(-1, 0.1, size=nwalkers)
p0[:,1] = np.random.normal(150, 0.1, size=nwalkers)
p0[:,2] = np.random.normal(25, 0.5, size=nwalkers)
p0[:,3] = np.random.normal(0.1, 0.01, size=nwalkers)
for pp in p0:
lnp = ln_posterior(pp, *sampler.args)
if not np.isfinite(lnp):
print("you suck")
pos,prob,state = sampler.run_mcmc(p0, N=100)
sampler.reset()
pos,prob,state = sampler.run_mcmc(pos, N=1000)
fig = triangle.corner(sampler.flatchain,
labels=[r'$\mathrm{d}v/\mathrm{d}l$', r'$v_0$', r'$\sigma_v$', r'$f_{\rm halo}$'])
figsize = (12,8)
MAP = sampler.flatchain[sampler.flatlnprobability.argmax()]
pout = outlier_prob(MAP, d[ix,0], d[ix,2], vr_err)
plt.figure(figsize=figsize)
cl = plt.scatter(d[ix,0], d[ix,2], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1)
cbar = plt.colorbar(cl)
cbar.set_clim(0,1)
# plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4)
plt.xlabel(r'$l\,[{\rm deg}]$')
plt.ylabel(r'$v_r\,[{\rm km\,s}^{-1}]$')
ls = np.linspace(d[ix,0].min(), d[ix,0].max(), 100)
for i in np.random.randint(len(sampler.flatchain), size=100):
m,b,V,f_ol = sampler.flatchain[i]
plt.plot(ls, m*ls+b, color='#555555', alpha=0.1, marker=None)
best_m,best_b,best_V,best_f_ol = MAP
plt.plot(ls, best_m*ls + best_b, color='k', alpha=1, marker=None)
plt.plot(ls, best_m*ls + best_b + best_V, color='k', alpha=1, marker=None, linestyle='--')
plt.plot(ls, best_m*ls + best_b - best_V, color='k', alpha=1, marker=None, linestyle='--')
plt.xlim(ls.max()+2, ls.min()-2)
plt.title("{:.1f}% halo stars".format(best_f_ol*100.))
print(((1-pout) > 0.75).tolist())
print(best_m, best_b, best_V)
print("MAP velocity dispersion: {:.2f} km/s".format(best_V))
high_p = (1-pout) > 0.8
plt.figure(figsize=figsize)
cl = plt.scatter(d[high_p,0], d[high_p,1], c=d[high_p,2]-d[high_p,2].mean(), s=30, cmap='coolwarm', vmin=-40, vmax=40)
cbar = plt.colorbar(cl)
ax = plt.gca()
ax.set_axis_bgcolor('#555555')
plt.xlim(ls.max()+2,ls.min()-2)
plt.ylim(-50,-10)
plt.xlabel(r'$l\,[{\rm deg}]$')
plt.ylabel(r'$b\,[{\rm deg}]$')
plt.title(r'$P_{\rm TriAnd} > 0.8$', y=1.02)
rrlyr_d = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt", skiprows=2, dtype=None)
!cat "/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt"
rrlyr_d = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt", skiprows=2)
rrlyr_vr_err = 10.
MAP = sampler.flatchain[sampler.flatlnprobability.argmax()]
pout = outlier_prob(MAP, rrlyr_d[:,0], rrlyr_d[:,3], rrlyr_vr_err)
plt.figure(figsize=figsize)
cl = plt.scatter(rrlyr_d[:,0], rrlyr_d[:,1], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1)
cbar = plt.colorbar(cl)
cbar.set_clim(0,1)
# plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4)
plt.xlabel(r'$l\,[{\rm deg}]$')
plt.ylabel(r'$b\,[{\rm deg}]$')
plt.xlim(ls.max()+2,ls.min()-2)
plt.ylim(-50,-10)
plt.title("RR Lyrae")
MAP = sampler.flatchain[sampler.flatlnprobability.argmax()]
pout = outlier_prob(MAP, rrlyr_d[:,0], rrlyr_d[:,3], rrlyr_vr_err)
plt.figure(figsize=figsize)
cl = plt.scatter(rrlyr_d[:,0], rrlyr_d[:,3], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1)
cbar = plt.colorbar(cl)
cbar.set_clim(0,1)
# plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4)
plt.xlabel(r'$l\,[{\rm deg}]$')
plt.ylabel(r'$v_r\,[{\rm km\,s}^{-1}]$')
ls = np.linspace(d[ix,0].min(), d[ix,0].max(), 100)
best_m,best_b,best_V,best_f_ol = MAP
plt.plot(ls, best_m*ls + best_b, color='k', alpha=1, marker=None)
plt.plot(ls, best_m*ls + best_b + best_V, color='k', alpha=1, marker=None, linestyle='--')
plt.plot(ls, best_m*ls + best_b - best_V, color='k', alpha=1, marker=None, linestyle='--')
plt.xlim(ls.max()+2, ls.min()-2)
plt.title("RR Lyrae")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Stars I actually observed
Step2: Data for the observed stars
Step3: Comparison of stars observed with Catalina
Step4: Issues
Step5: Possible Blaschko stars
Step6: For Kathryn's proposal
Step7: Now read in RR Lyrae data, compute prob for each star
|
15,416 | <ASSISTANT_TASK:>
Python Code:
# Perform standard imports
import spacy
nlp = spacy.load('en_core_web_sm')
# From Spacy Basics:
doc = nlp(u'This is the first sentence. This is another sentence. This is the last sentence.')
for sent in doc.sents:
print(sent)
print(doc[1])
print(doc.sents[1])
doc_sents = [sent for sent in doc.sents]
doc_sents
# Now you can access individual sentences:
print(doc_sents[1])
type(doc_sents[1])
print(doc_sents[1].start, doc_sents[1].end)
# Parsing the segmentation start tokens happens during the nlp pipeline
doc2 = nlp(u'This is a sentence. This is a sentence. This is a sentence.')
for token in doc2:
print(token.is_sent_start, ' '+token.text)
# SPACY'S DEFAULT BEHAVIOR
doc3 = nlp(u'"Management is doing things right; leadership is doing the right things." -Peter Drucker')
for sent in doc3.sents:
print(sent)
# ADD A NEW RULE TO THE PIPELINE
def set_custom_boundaries(doc):
for token in doc[:-1]:
if token.text == ';':
doc[token.i+1].is_sent_start = True
return doc
nlp.add_pipe(set_custom_boundaries, before='parser')
nlp.pipe_names
# Re-run the Doc object creation:
doc4 = nlp(u'"Management is doing things right; leadership is doing the right things." -Peter Drucker')
for sent in doc4.sents:
print(sent)
# And yet the new rule doesn't apply to the older Doc object:
for sent in doc3.sents:
print(sent)
# Find the token we want to change:
doc3[7]
# Try to change the .is_sent_start attribute:
doc3[7].is_sent_start = True
nlp = spacy.load('en_core_web_sm') # reset to the original
mystring = u"This is a sentence. This is another.\n\nThis is a \nthird sentence."
# SPACY DEFAULT BEHAVIOR:
doc = nlp(mystring)
for sent in doc.sents:
print([token.text for token in sent])
# CHANGING THE RULES
from spacy.pipeline import SentenceSegmenter
def split_on_newlines(doc):
start = 0
seen_newline = False
for word in doc:
if seen_newline:
yield doc[start:word.i]
start = word.i
seen_newline = False
elif word.text.startswith('\n'): # handles multiple occurrences
seen_newline = True
yield doc[start:] # handles the last group of tokens
sbd = SentenceSegmenter(nlp.vocab, strategy=split_on_newlines)
nlp.add_pipe(sbd)
doc = nlp(mystring)
for sent in doc.sents:
print([token.text for token in sent])
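# Quick check (illustrative only, assuming the custom 'sbd' segmenter added above): every segment
# should now end right before a token that begins with a newline character.
for sent in doc.sents:
    print(repr(sent.text))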
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Doc.sents is a generator
Step2: However, you can build a sentence collection by running doc.sents and saving the result to a list
Step3: <font color=green>NOTE
Step4: sents are Spans
Step5: Adding Rules
Step6: <font color=green>Notice we haven't run doc2.sents, and yet token.is_sent_start was set to True on two tokens in the Doc.</font>
Step7: <font color=green>The new rule has to run before the document is parsed. Here we can either pass the argument before='parser' or first=True.
Step8: Why not change the token directly?
Step9: <font color=green>spaCy refuses to change the tag after the document is parsed to prevent inconsistencies in the data.</font>
Step10: <font color=green>While the function split_on_newlines can be named anything we want, it's important to use the name sbd for the SentenceSegmenter.</font>
|
15,417 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
%matplotlib inline
import openpathsampling as paths
import numpy as np
import matplotlib.pyplot as plt
import os
import openpathsampling.visualize as ops_vis
from IPython.display import SVG
# note that this log will overwrite the log from the previous notebook
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
%%time
flexible = paths.AnalysisStorage("ad_tps.nc")
# opening as AnalysisStorage is a little slower, but speeds up the move_summary
engine = flexible.engines[0]
flex_scheme = flexible.schemes[0]
print("File size: {0} for {1} steps, {2} snapshots".format(
flexible.file_size_str,
len(flexible.steps),
len(flexible.snapshots)
))
flex_scheme.move_summary(flexible.steps)
replica_history = ops_vis.ReplicaEvolution(replica=0)
tree = ops_vis.PathTree(
flexible.steps[0:25],
replica_history
)
tree.options.css['scale_x'] = 3
SVG(tree.svg())
# can write to svg file and open with programs that can read SVG
with open("flex_tps_tree.svg", 'w') as f:
f.write(tree.svg())
print("Decorrelated trajectories:", len(tree.generator.decorrelated_trajectories))
%%time
full_history = ops_vis.PathTree(
flexible.steps,
ops_vis.ReplicaEvolution(
replica=0
)
)
n_decorrelated = len(full_history.generator.decorrelated_trajectories)
print("All decorrelated trajectories:", n_decorrelated)
path_lengths = [len(step.active[0].trajectory) for step in flexible.steps]
plt.hist(path_lengths, bins=40, alpha=0.5);
print("Maximum:", max(path_lengths),
"("+(max(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
print ("Average:", "{0:.2f}".format(np.mean(path_lengths)),
"("+(np.mean(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
from openpathsampling.numerics import HistogramPlotter2D
psi = flexible.cvs['psi']
phi = flexible.cvs['phi']
deg = 180.0 / np.pi
path_density = paths.PathDensityHistogram(cvs=[phi, psi],
left_bin_edges=(-180/deg,-180/deg),
bin_widths=(2.0/deg,2.0/deg))
path_dens_counter = path_density.histogram([s.active[0].trajectory for s in flexible.steps])
tick_labels = np.arange(-np.pi, np.pi+0.01, np.pi/4)
plotter = HistogramPlotter2D(path_density,
xticklabels=tick_labels,
yticklabels=tick_labels,
label_format="{:4.2f}")
ax = plotter.plot(cmap="Blues")
ops_traj = flexible.steps[1000].active[0].trajectory
traj = ops_traj.to_mdtraj()
traj
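# A possible follow-up (an illustrative sketch, not part of the original notebook): once the OPS
# trajectory is an MDTraj Trajectory, standard MDTraj analyses apply, e.g. recomputing the
# backbone dihedrals directly.
import mdtraj as md
phi_indices, phi_angles = md.compute_phi(traj)   # angles in radians, shape (n_frames, n_phi)
psi_indices, psi_angles = md.compute_psi(traj)
print(phi_angles.shape, psi_angles.shape)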
# Here's how you would then use NGLView:
#import nglview as nv
#view = nv.show_mdtraj(traj)
#view
flexible.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the file, and from the file pull out the engine (which tells us what the timestep was) and the move scheme (which gives us a starting point for much of the analysis).
Step2: That tells us a little about the file we're dealing with. Now we'll start analyzing the contents of that file. We used a very simple move scheme (only shooting), so the main information that the move_summary gives us is the acceptance of the only kind of move in that scheme. See the MSTIS examples for more complicated move schemes, where you want to make sure that the frequency at which each move runs is close to what was expected.
Step3: Replica history tree and decorrelated trajectories
Step4: Path length distribution
Step5: Path density histogram
Step6: Now we've built the path density histogram, and we want to visualize it. We have a convenient plot_2d_histogram function that works in this case, and takes the histogram, desired plot tick labels and limits, and additional matplotlib named arguments to plt.pcolormesh.
Step7: Convert to MDTraj for analysis by external tools
|
15,418 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, datasets
from sklearn.metrics import accuracy_score
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
i = y_digits.shape[0]
split = np.random.random(i)
train = split < 0.7
test = split >= 0.7
X_digits_train = X_digits[train]
X_digits_test = X_digits[test]
y_digits_train = y_digits[train]
y_digits_test = y_digits[test]
clf = linear_model.LogisticRegression()
clf.fit(X_digits_train, y_digits_train)
predictions = clf.predict(X_digits_test)
print(accuracy_score(y_digits_test, predictions))
pca = decomposition.PCA()
pca.fit(X_digits_train)
#pca.transform(X_digits_train)[:,:2].shape
z = 2
clf = linear_model.LogisticRegression()
clf.fit(pca.transform(X_digits_train)[:,:z], y_digits_train)
predictions = clf.predict(pca.transform(X_digits_test)[:,:z])
print(accuracy_score(y_digits_test, predictions))
# http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#example-plot-digits-pipe-py
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
logistic = linear_model.LogisticRegression()
pca = decomposition.PCA()
pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
###############################################################################
# Plot the PCA spectrum
pca.fit(X_digits)
fig, ax = plt.subplots(1,1)
ax.plot(pca.explained_variance_, linewidth=2)
ax.set_xlabel('n_components')
ax.set_ylabel('explained_variance_')
###############################################################################
# Prediction
n_components = [20, 40, 64]
Cs = np.logspace(-4, 4, 10)
Cs
#Parameters of pipelines can be set using ‘__’ separated parameter names:
estimator = GridSearchCV(pipe,
dict(pca__n_components=n_components,
logistic__C=Cs))
estimator.fit(X_digits, y_digits)
print('# components:', estimator.best_estimator_.named_steps['pca'].n_components)
print('C:', estimator.best_estimator_.named_steps['logistic'].C)
print(estimator)
# http://scikit-learn.org/stable/auto_examples/feature_stacker.html#example-feature-stacker-py
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
#
# License: BSD 3 clause
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
iris = load_iris()
X, y = iris.data, iris.target
# This dataset is way to high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features where good, too?
selection = SelectKBest(k=1)
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
svm = SVC(kernel="linear")
# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])
# numpy.arange, numpy.linspace, numpy.logspace, and range are all useful in creating options to evaluate
param_grid = dict(features__pca__n_components=[1,2,3],
features__univ_select__k=[1,2,3],
svm__C=[0.1, 1, 10])
grid_search = GridSearchCV(pipeline, param_grid=param_grid)
grid_search.fit(X, y)
print('PCA components:', grid_search.best_estimator_.named_steps['features'].get_params()['pca'].n_components)
print('Original features used:', grid_search.best_estimator_.named_steps['features'].get_params()['univ_select'].k)
print('C:', grid_search.best_estimator_.named_steps['svm'].C)
print(grid_search.best_estimator_)
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
# Looking at an example
print(twenty_train.data[0])
# The first step is converting the text into numerical data we can work with
# We will use a bag-of-words approach - counting the occurance of words
from sklearn.feature_extraction.text import (CountVectorizer,
TfidfTransformer)
# Using a pipeline
pipe = Pipeline([('counts', CountVectorizer()),
('tfidf', TfidfTransformer())])
output = pipe.fit(twenty_train.data).transform(twenty_train.data)
output
# Adding a classifier
pipe = Pipeline([('counts', CountVectorizer()),
('tfidf', TfidfTransformer()),
('classifier', linear_model.LogisticRegression())])
# We can compare different parameters at each stage
param_grid = dict(counts__ngram_range=[(1,1), (1,2)],
counts__stop_words=[None, 'english'],
classifier__C=[0.1, 1, 10, 30, 100])
X = twenty_train.data[:]
y = twenty_train.target[:]
grid_search = GridSearchCV(pipe, param_grid=param_grid)
grid_search.fit(X, y)
# Getting the best parameters
print('# of words:', grid_search.best_estimator_.named_steps['counts'].ngram_range)
print('Stop words:', grid_search.best_estimator_.named_steps['counts'].stop_words)
print('C:', grid_search.best_estimator_.named_steps['classifier'].C)
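# A more compact, roughly equivalent pipeline (sketch; compact_pipe is just an
# illustrative name): TfidfVectorizer combines CountVectorizer and TfidfTransformer
# into a single step.
from sklearn.feature_extraction.text import TfidfVectorizer
compact_pipe = Pipeline([('tfidf', TfidfVectorizer()),
('classifier', linear_model.LogisticRegression())])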
import numpy as np
from sklearn.base import TransformerMixin
class ModelTransformer(TransformerMixin):
"""Wrap a classifier model so that it can be used in a pipeline"""
def __init__(self, model):
self.model = model
def fit(self, *args, **kwargs):
self.model.fit(*args, **kwargs)
return self
def transform(self, X, **transform_params):
return self.model.predict_proba(X)
def predict_proba(self, X, **transform_params):
return self.transform(X, **transform_params)
class VarTransformer(TransformerMixin):
"""Compute the variance"""
def transform(self, X, **transform_params):
var = X.var(axis=1)
return var.reshape((var.shape[0],1))
def fit(self, X, y=None, **fit_params):
return self
class MedianTransformer(TransformerMixin):
"""Compute the median"""
def transform(self, X, **transform_params):
median = np.median(X, axis=1)
return median.reshape((median.shape[0],1))
def fit(self, X, y=None, **fit_params):
return self
class ChannelExtractor(TransformerMixin):
"""Extract a single channel for downstream processing"""
def __init__(self, channel):
self.channel = channel
def transform(self, X, **transformer_params):
return X[:,:,self.channel]
def fit(self, X, y=None, **fit_params):
return self
class FFTTransformer(TransformerMixin):
"""Convert to the frequency domain and then sum over bins"""
def transform(self, X, **transformer_params):
fft = np.fft.rfft(X, axis=1)
fft = np.abs(fft)
fft = np.cumsum(fft, axis=1)
bin_size = 10
max_freq = 60
return np.column_stack([fft[:,i] - fft[:,i-bin_size]
for i in range(bin_size, max_freq, bin_size)])
def fit(self, X, y=None, **fit_params):
return self
# This cell is not expected to run correctly. We don't have all the packages needed.
# If you want to run this example download the repository and the source data.
import numpy as np
import os
import pickle
from sklearn.cross_validation import cross_val_score, StratifiedShuffleSplit
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
import get_traces
import transformers as trans
def build_pipeline(X):
"""Helper function to build the pipeline of feature transformations.
We do the same thing to each channel so rather than manually copying changes
for all channels this is automatically generated"""
channels = X.shape[2]
pipeline = Pipeline([
('features', FeatureUnion([
('select_%d_pipeline' % i,
Pipeline([('select_%d' % i, trans.ChannelExtractor(i)),
('channel features', FeatureUnion([
('var', trans.VarTransformer()),
('median', trans.MedianTransformer()),
('fft', trans.FFTTransformer()),
])),
])
) for i in range(channels)])),
('classifier', trans.ModelTransformer(RandomForestClassifier(
n_estimators=500,
max_depth=None,
min_samples_split=1,
random_state=0))),
])
return pipeline
def get_transformed_data(patient, func=get_traces.get_training_traces):
"""Load in all the data"""
X = []
channels = get_traces.get_num_traces(patient)
# Reading in 43 Gb of data . . .
for i in range(channels):
x, y = func(patient, i)
X.append(x)
return (np.dstack(X), y)
all_labels = []
all_predictions = np.array([])
folders = [i for i in os.listdir(get_traces.directory) if i[0] != '.']
folders.sort()
for folder in folders:
print('Starting %s' % folder)
print('getting data')
X, y = get_transformed_data(folder)
print(X.shape)
print('stratifiedshufflesplit')
cv = StratifiedShuffleSplit(y,
n_iter=5,
test_size=0.2,
random_state=0,)
print('cross_val_score')
pipeline = build_pipeline(X)
# Putting this in a list is unnecessary for just one pipeline - use to compare multiple pipelines
scores = [
cross_val_score(pipeline, X, y, cv=cv, scoring='roc_auc')
]
print('displaying results')
for score, label in zip(scores, ['pipeline',]):
print("AUC: {:.2%} (+/- {:.2%}), {:}".format(score.mean(),
score.std(), label))
clf = pipeline
print('Fitting full model')
clf.fit(X, y)
print('Getting test data')
testing_data, files = get_transformed_data(folder,
get_traces.get_testing_traces)
print('Generating predictions')
predictions = clf.predict_proba(testing_data)
print(predictions.shape, len(files))
with open('%s_randomforest_predictions.pkl' % folder, 'wb') as f:
pickle.dump((files, predictions[:,1]), f)
from sklearn import datasets
diabetes = datasets.load_diabetes()
# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
# Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements
# were obtained for each of n = 442 diabetes patients,
# as well as the response of interest, a quantitative measure of disease progression one year after baseline.
X = diabetes.data # independent variables
y = diabetes.target # dependent val
print(X.shape)
print(y.shape)
import pandas as pd
data = pd.DataFrame(X, columns=['age', 'sex', 'bmi', 'map',
'tc', 'ldl', 'hdl', 'tch', 'ltg', 'glu'])
data.info()
from sklearn import linear_model
bmi = X[:, 2].reshape(-1, 1)
outcome = y
reg = linear_model.LinearRegression()
reg.fit(bmi, outcome)
predicted_outcome = reg.predict(bmi)
plt.plot(predicted_outcome, outcome, 'k.')
plt.xlabel("Predicted outcome")
plt.ylabel("Clinical outcome")
print('Directly trained model predictions:', predicted_outcome[:10])
from sklearn.externals import joblib
joblib.dump(reg, 'diabetes_prediction_model.pkl')
reg2 = joblib.load('diabetes_prediction_model.pkl')
predicted_outcome2 = reg2.predict(bmi)
print('Saved model predictions:', predicted_outcome2[:10])
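# Note: recent scikit-learn releases removed sklearn.externals.joblib; the same
# persistence pattern then uses the standalone joblib package (sketch):
# import joblib
# joblib.dump(reg, 'diabetes_prediction_model.pkl')
# reg2 = joblib.load('diabetes_prediction_model.pkl')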
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In previous weeks we have covered preprocessing our data, dimensionality reduction, clustering, regression and classification. This week we will be pulling these processes together into a complete project.
Step2: Scikit-learn includes functionality for structuring our code and for easily exploring the impact of different parameters, not only in the machine learning algorithm we choose but at every stage of our solution.
Step3: FeatureUnion
Step4: Text classification
Step13: Advanced Pipeline
Step14: Saving a model
|
15,419 | <ASSISTANT_TASK:>
Python Code:
# Declaring both Boolean values
a = True
b = False
# Capturing True from an expression
x = 2 < 3
# Capturing False from an expression
y = 5 > 9
# Example of assigning None, and changing it.
some_obj = None
if 2 < 3:
some_obj = True
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To stay practical, it is important to understand that you won't be assigning True and False values to variables as much as you will be receiving them. We talked in the "Comparison Operators" series about said operators, and what they return. How do we capture True and False from an expression?
Step2: The general format for if-else control flow in Python is the following
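A minimal sketch of that pattern (the condition names and placeholder calls below are purely illustrative) is:
if some_condition:
    do_something()
elif another_condition:
    do_something_else()
else:
    handle_the_default_case()
Only the if branch is required; the elif and else branches are optional.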
|
15,420 | <ASSISTANT_TASK:>
Python Code:
rst(O.publish)
def emit(obs):
log('.........EMITTING........')
sleep(0.1)
obs.on_next(rand())
obs.on_completed()
rst(title='Reminder: 2 subscribers on a cold stream:')
s = O.create(emit)
d = subs(s), subs(s.delay(100))
rst(title='Now 2 subscribers on a PUBLISHED (hot) stream', sleep=0.4)
sp = s.publish()
subs(sp, name='subs1')
subs(sp.delay(100), name='subs2')
log('now connect')
# this creates a 'single, intermediate subscription between stream and subs'
d = sp.connect()
# will only see the finish, since subscribed too late
d = subs(sp, name='subs3')
rst(O.publish_value)
def sideeffect(*x):
log('sideeffect', x)
print('Everybody gets the initial value and the events, sideeffect only once per ev')
src = O.interval(500).take(20).do_action(sideeffect)
published = src.publish_value(42)
subs(published), subs(published.delay(100))
d = published.connect()
sleep(1.3)
log('disposing now')
d.dispose()
# not yet in RXPy
rst(O.multicast)
# show actions on intermediate subject:
show = False
def emit(obs):
'instead of range we allow some logging:'
for i in (1, 2):
v = rand()
log('emitting', v)
obs.on_next(v)
log('complete')
obs.on_completed()
class MySubject:
def __init__(self):
self.rx_subj = Subject()
if show:
log('New Subject %s created' % self)
def __str__(self):
return str(hash(self))[-4:]
def __getattr__(self, a):
'called at any attr. access, logging it'
if not a.startswith('__') and show:
log('RX called', a, 'on MySub\n')
return getattr(self.rx_subj, a)
subject1 = MySubject()
subject2 = MySubject()
source = O.create(emit).multicast(subject2)
# a "subscription" *is* a disposable
# (the normal d we return all the time):
d, observer = subs(source, return_subscriber=True)
ds1 = subject1.subscribe(observer)
ds2 = subject2.subscribe(observer)
print ('we have now 3 subscriptions, only two will see values.')
print('start multicast stream (calling connect):')
connected = source.connect()
d.dispose()
rst(O.let)
# show actions on intermediate subject:
show = True
def emit(obs):
'instead of range we allow some logging:'
v = rand()
log('emitting', v)
obs.on_next(v)
log('complete')
obs.on_completed()
source = O.create(emit)
# following the RXJS example:
header("without let")
d = subs(source.concat(source))
d = subs(source.concat(source))
header("now with let")
d = subs(source.let(lambda o: o.concat(o)))
d = subs(source.let(lambda o: o.concat(o)))
# TODO: Not understood:
# "This operator allows for a fluent style of writing queries that use the same sequence multiple times."
# ... I can't verify this, the source sequence is not duplicated but called every time like a cold obs.
rst(O.replay)
def emit(obs):
'continuous emission'
for i in range(0, 5):
v = 'nr %s, value %s' % (i, rand())
log('emitting', v, '\n')
obs.on_next(v)
sleep(0.2)
def sideeffect(*v):
log("sync sideeffect (0.2s)", v, '\n')
sleep(0.2)
log("end sideeffect", v, '\n')
def modified_stream(o):
log('modified_stream (take 2)')
return o.map(lambda x: 'MODIFIED FOR REPLAY: %s' % x).take(2)
header("playing and replaying...")
subject = Subject()
cold = O.create(emit).take(3).do_action(sideeffect)
assert not getattr(cold, 'connect', None)
hot = cold.multicast(subject)
connect = hot.connect # present now.
#d, observer = subs(hot, return_subscriber=True, name='normal subscriber\n')
#d1 = subject.subscribe(observer)
published = hot.replay(modified_stream, 1000, 50000)
d2 = subs(published, name='Replay Subs 1\n')
#header("replaying again")
#d = subs(published, name='Replay Subs 2\n')
log('calling connect now...')
d3 = hot.connect()
def mark(x):
return 'marked %x' % x
def side_effect(x):
log('sideeffect %s\n' % x)
for i in 1, 2:
s = O.interval(100).take(3).do_action(side_effect)
if i == 2:
sleep(1)
header("now with publish - no more sideeffects in the replays")
s = s.publish()
reset_start_time()
published = s.replay(lambda o: o.map(mark).take(3).repeat(2), 3)
d = subs(s, name='Normal\n')
d = subs(published, name='Replayer A\n')
d = subs(published, name='Replayer B\n')
if i == 2:
d = s.connect()
rst(O.interval(1).publish)
publ = O.interval(1000).take(2).publish().ref_count()
# be aware about potential race conditions here
subs(publ)
subs(publ)
rst(O.interval(1).share)
def sideeffect(v):
log('sideeffect %s\n' % v)
publ = O.interval(200).take(2).do_action(sideeffect).share()
'''
When the number of observers subscribed to published observable goes from
0 to 1, we connect to the underlying observable sequence.
published.subscribe(createObserver('SourceA'));
When the second subscriber is added, no additional subscriptions are added to the
underlying observable sequence. As a result the operations that result in side
effects are not repeated per subscriber.
'''
subs(publ, name='SourceA')
subs(publ, name='SourceB')
rst(O.interval(1).publish().connect)
published = O.create(emit).publish()
def emit(obs):
for i in range(0, 10):
log('emitting', i, obs.__class__.__name__, hash(obs))
# going nowhere
obs.on_next(i)
sleep(0.1)
import thread
thread.start_new_thread(published.connect, ())
sleep(0.5)
d = subs(published, scheduler=new_thread_scheduler)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ... and then only emits the last item in its sequence publish_last
Step2: ... via multicast
Step3: ... and then emits the complete sequence, even to those who subscribe after the sequence has begun replay
Step4: If you apply the Replay operator to an Observable
Step5: ... but I want it to go away once all of its subscribers unsubscribe ref_count, share
Step6: ... and then I want to ask it to start connect
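In sketch form, using only the operators already shown above (source and observer stand in for any observable and subscriber), the connectable lifecycle is:
published = source.publish()       # no subscription to the source yet
d = published.subscribe(observer)  # observers attach to the intermediate subject
connection = published.connect()   # the source starts emitting
connection.dispose()               # tears down the single shared subscription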
|
15,421 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import pydot
# Create a graph and set defaults
dot = pydot.Dot()
dot.set('rankdir', 'TB')
dot.set('concentrate', True)
dot.set_node_defaults(shape='record')
# Add nodes and edges
node = pydot.Node(1, label="FROM")
dot.add_node(node)
node = pydot.Node(2, label="TO")
dot.add_node(node)
dot.add_edge( pydot.Edge(1,2))
from IPython.core.display import SVG
img = dot.create_svg()
SVG( data=img )
for n in dot.get_nodes():
n.set('style', 'filled')
n.set('fillcolor', 'aliceblue')
n.set('fontsize', '10')
n.set('fontname', 'Trebuchet MS, Tahoma, Verdana, Arial, Helvetica, sans-serif')
SVG( data=dot.create_svg() )
from IPython.core.display import Image
Image( data=dot.create_png() )
import graphviz as gv
g1 = gv.Graph(format='svg')
g1.node('A', 'Node A', tooltip='tooltip for node A')
g1.node('B')
g1.edge('A', 'B')
# Render into "example.svg" file
g1.render( filename="example")
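# render() writes the output file and returns its path; as a sketch (assuming a
# system SVG viewer is available), view() would render and open it directly:
# g1.view()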
# Create a graph
dot = gv.Digraph(comment='The Round Table', engine='dot')
dot.node('A', 'King Arthur', color="blue", fillcolor="lightgray", style="filled", fontcolor="red", fontname="Verdana")
dot.node('B', 'Sir Bedevere the Wise')
dot.node('L', 'Sir Lancelot the Brave', shape="rectangle")
dot.edges(['AB', 'AL'])
dot.edge('B', 'L', constraint='false', color="blue")
# Render in notebook by just outputting the graph as the result of a cell
dot
src = gv.Source('digraph "countdown" { rankdir=LR; 3 -> 2 -> 1 -> "Go!" }')
# Again, the result can be rendered directly in the Notebook
src
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There are many available Python packages providing APIs for Graphviz. In no particular order
Step2: Render
Step3: We can also set properties on the graph nodes
Step4: It is also possible to render as PNG and display as image (though the quality, in general, will be lower)
Step5: Graphviz
Step6: Render
Step7: Furthermode, the package provides a useful facility for notebooks
Step8: It is also possible to directly provide a buffer containing a graph written in dot language, by using the Source class
|
15,422 | <ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import pandas
from scipy.optimize import curve_fit
import scipy.linalg
import scipy.stats
from scipy.interpolate import interp1d,splev,splrep
from scipy.ndimage import map_coordinates,gaussian_filter
import matplotlib.pyplot as plt
import matplotlib.colors
from matplotlib.ticker import LogFormatter
import seaborn as sns
import astropy.units as u
import astropy.constants as const
import hissw
from sunpy.map import Map,GenericMap
import h5py
from ChiantiPy.tools import filters as ch_filters
import synthesizAR
from synthesizAR.instruments import InstrumentHinodeEIS
from synthesizAR.util import EISCube,EMCube
from synthesizAR.atomic import EmissionModel
%matplotlib inline
eis = InstrumentHinodeEIS([7.5e3,1.25e4]*u.s)
frequencies = [250,750,'750-ion',2500,5000]
temperature_bin_edges = 10.**(np.arange(5.6, 7.0, 0.05))*u.K
emission_model = EmissionModel.restore('/data/datadrive1/ar_forward_modeling/systematic_ar_study/emission_model1109_full/')
resolved_wavelengths = np.sort(u.Quantity([rw for ion in emission_model.ions for rw in ion.resolved_wavelengths]))
pressure_const = 1e15*u.K*u.cm**(-3)
class FakeLoop(object):
electron_temperature = np.logspace(5.5,7.5,100)*u.K
density = pressure_const/electron_temperature
fake_loop = FakeLoop()
i_temperature,i_density = emission_model.interpolate_to_mesh_indices(fake_loop)
contribution_functions = {}
line_names = {}
for ion in emission_model.ions:
for rw in ion.resolved_wavelengths:
i_rw = np.where(ion.wavelength==rw)[0][0]
emiss = map_coordinates(ion.emissivity[:,:,i_rw].value,
np.vstack([i_temperature,i_density]),order=3)*ion.emissivity.unit
ioneq = splev(fake_loop.electron_temperature.value,
splrep(emission_model.temperature_mesh[:,0].value,
ion.fractional_ionization[:,0].value,k=1),ext=1)
line_names[rw] = '{} {}'.format(ion.chianti_ion.meta['name'],rw.value)
contribution_functions[rw] = (1./(np.pi*4.*u.steradian)*0.83
*ioneq*ion.chianti_ion.abundance*emiss/fake_loop.density
*(const.h.cgs*const.c.cgs)/rw.to(u.cm)/u.photon)
line_intensities = {'{}'.format(freq):{} for freq in frequencies}
for freq in frequencies:
for channel in eis.channels:
tmp = EISCube('../data/eis_intensity_{}_tn{}_t7500-12500.h5'.format(channel['name'],freq))
if type(freq) == int:
tmp.data = (gaussian_filter(tmp.data.value,(channel['gaussian_width']['y'].value,
channel['gaussian_width']['x'].value,0.)))*tmp.data.unit
for rw in resolved_wavelengths:
i_center = np.where(np.isclose(tmp.wavelength.value,rw.value,atol=1.1e-2,rtol=0.))[0]
if len(i_center) == 0:
continue
line_intensities['{}'.format(freq)][rw] = tmp[i_center-5:i_center+5].integrated_intensity
fig = plt.figure(figsize=(17,15))
plt.subplots_adjust(right=0.85)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
for i,rw in enumerate(resolved_wavelengths):
tmp = (line_intensities['750'][rw]
.submap(u.Quantity((270,450),u.arcsec),u.Quantity((90,360),u.arcsec))
)
ax = fig.add_subplot(5,5,i+1,projection=tmp)
im = tmp.plot(axes=ax,annotate=False,title=False,
norm=matplotlib.colors.SymLogNorm(1,vmin=1e2,vmax=1e5)
)
ax.set_title(r'{:.3f} {}'.format(rw.value,rw.unit.to_string(format='latex')))
cbar = fig.colorbar(im,cax=cax)
k_matrix = []
intensity_matrix = []
line_names = []
for rw in resolved_wavelengths:
line_name = '{}_{}'.format(rw.value,rw.unit)
line_names.append(line_name)
k_matrix.append(contribution_functions[rw].value.tolist())
for freq in frequencies:
line_intensities['{}'.format(freq)][rw].save('../data/eis_integrated_intensity_{}_{}.fits'.format(freq,line_name))
demreg_runner = hissw.ScriptMaker(extra_paths=['/home/wtb2/Documents/codes/demreg/idl/'],
ssw_path_list=['vobs','ontology'])
static_input_vars = {
'log_temperature':np.log10(fake_loop.electron_temperature.value).tolist(),
'temperature_bins':temperature_bin_edges.value.tolist(),
'k_matrix':k_matrix,
'names':line_names,
'error_ratio':0.25,
'gloci':1,'reg_tweak':1,'timed':1
}
save_vars = ['dem','edem','elogt','chisq','dn_reg']
demreg_script = """
; load intensity from each channel/line
names = {{ names }}
eis_file_list = find_file('{{ fits_file_glob }}')
read_sdo,eis_file_list,ind,intensity
; load the contribution functions or response functions (called K in Hannah and Kontar 2012)
k_matrix = {{ k_matrix }}
; load temperature array over which K is computed
log_temperature = {{ log_temperature }}
; temperature bins
temperature_bins = {{ temperature_bins }}
; crude estimate of intensity errors
intensity_errors = intensity*{{ error_ratio }}
; inversion method parameters
reg_tweak={{ reg_tweak }}
timed={{ timed }}
gloci={{ gloci }}
; run the inversion method
dn2dem_pos_nb,intensity,intensity_errors,$
k_matrix,log_temperature,temperature_bins,$
dem,edem,elogt,chisq,dn_reg,$
timed=timed,gloci=gloci,reg_tweak=reg_tweak
"""
for freq in frequencies:
input_vars = static_input_vars.copy()
input_vars['fits_file_glob'] = '/home/wtb2/Documents/projects/loops-workshop-2017-talk/data/eis_integrated_intensity_{}_*.fits'.format(freq)
tmp = demreg_runner.run([(demreg_script,input_vars)],save_vars=save_vars,cleanup=True,verbose=True)
tmp_cube = EMCube(np.swapaxes(tmp['dem'].T,0,1)*np.diff(temperature_bin_edges.value)*(u.cm**(-5)),
(line_intensities['{}'.format(freq)][resolved_wavelengths[0]].meta),temperature_bin_edges)
tmp_cube.save('../data/em_cubes_demreg_tn{}_t7500-12500.h5'.format(freq))
foo = EMCube.restore('../data/em_cubes_demreg_tn750_t7500-12500.h5')
fig = plt.figure(figsize=(20,15))
plt.subplots_adjust(right=0.87)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
plt.subplots_adjust(hspace=0.1)
for i in range(foo.temperature_bin_edges.shape[0]-1):
# apply a filter to the
tmp = foo[i].submap(u.Quantity([250,500],u.arcsec),u.Quantity([150,400],u.arcsec))
#tmp.data = gaussian_filter(tmp.data,
# eis.channels[0]['gaussian_width']['x'].value
# )
# set up axes properly and add plot
ax = fig.add_subplot(6,5,i+1,projection=tmp)
im = tmp.plot(axes=ax,
annotate=False,
cmap=matplotlib.cm.get_cmap('magma'),
norm=matplotlib.colors.SymLogNorm(1, vmin=1e25, vmax=1e29)
)
# set title and labels
ax.set_title(r'${t0:.2f}-{t1:.2f}$ {uni}'.format(t0=np.log10(tmp.meta['temp_a']),
t1=np.log10(tmp.meta['temp_b']),uni='K'))
if i<25:
ax.coords[0].set_ticklabel_visible(False)
else:
ax.set_xlabel(r'$x$ ({})'.format(u.Unit(tmp.meta['cunit1'])))
if i%5==0:
ax.set_ylabel(r'$y$ ({})'.format(u.Unit(tmp.meta['cunit2'])))
else:
ax.coords[1].set_ticklabel_visible(False)
cbar = fig.colorbar(im,cax=cax)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, load the emission model.
Step2: List the resolved wavelengths for convenience.
Step3: and calculate the contribution functions.
Step4: Now, load the time-averaged intensities, slice in wavelength, and integrate at the desired indices corresponding to the resolved wavelength. This gives us a map of integrated intensity for each of the lines that we are interested in.
Step5: Peek at the integrated intensities.
Step6: Reshape the data so that it can be passed to the demreg script.
Step8: Now run the inversion code.
|
15,423 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.preprocessing import scale
houses = pd.read_csv('house_prices.csv')
plt.figure(1)
plt.subplot(211)
plt.xlabel('sq. feet')
plt.ylabel('price (\'000)')
plt.scatter(houses['sqft'], houses['price'])
plt.subplot(212)
plt.xlabel('no. of rooms')
plt.ylabel('price (\'000)')
plt.scatter(houses['rooms'], houses['price'])
plt.tight_layout()
X = houses[['sqft', 'rooms']].as_matrix()
X = np.column_stack([np.ones([X.shape[0]]), X])
y = houses[['price']].as_matrix().ravel()
# Hypothesis function
def h(theta, X):
return np.matmul(X, theta)
# Cost function
def J(theta, X, y):
d = h(theta, X) - y
return 0.5 * np.dot(d, d.T)
# One step of gradient descent
def descend(theta, X, y, alpha=0.01):
error = h(theta, X) - y
t = theta - alpha * np.matmul(X.T, error)
return t, np.dot(error, error.T)
theta = np.zeros([X.shape[1]])
for i in range(50):
theta, cost = descend(theta, X, y)
if i % 10 == 0:
print("epoch: {0}, cost: {1}".format(i, cost))
print("epoch: {0}, cost: {1}".format(i, cost))
print("theta: {0}".format(theta))
X_scaled = scale(X)
y_scaled = scale(y)
plt.figure(1)
plt.subplot(211)
plt.xlabel('sq. feet')
plt.ylabel('price')
plt.scatter(X_scaled[:, 1], y_scaled)
plt.subplot(212)
plt.xlabel('no. of rooms')
plt.ylabel('price')
plt.scatter(X_scaled[:, 2], y_scaled)
plt.tight_layout()
def fit(X, y):
theta = np.zeros([X.shape[1]])
theta, cost = descend(theta, X, y)
for i in range(10000):
cost_ = cost
theta, cost = descend(theta, X, y)
if cost_ - cost < 1e-7:
break
if i % 10 == 0:
print("epoch: {0}, cost: {1}".format(i, cost))
print("epoch: {0}, cost: {1}".format(i, cost))
print("theta: {0}".format(theta))
fit(X_scaled, y_scaled)
from sklearn.linear_model import LinearRegression
l = LinearRegression(fit_intercept=False)
l.fit(X_scaled, y_scaled)
l.coef_
# One step of stochastic gradient descent (draws a random sample of 47 rows, with replacement)
def stochastic_descend(theta, X, y, alpha=0.01):
X_sample = np.random.choice(X.shape[0], 47)
error = h(theta, X[X_sample]) - y[X_sample]
t = theta - alpha * np.matmul(X[X_sample].T, error)
return t, np.dot(error, error.T)
X[4]
def stochastic_fit(X, y):
theta = np.zeros([X.shape[1]])
theta, cost = stochastic_descend(theta, X, y)
for i in range(10000):
cost_ = cost
theta, cost = stochastic_descend(theta, X, y)
if cost_ - cost < 1e-7:
break
if i % 10 == 0:
print("epoch: {0}, cost: {1}".format(i, cost))
print("epoch: {0}, cost: {1}".format(i, cost))
print("theta: {0}".format(theta))
stochastic_fit(X_scaled, y_scaled)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A little searching leads us to the Portland housing prices dataset that's used as an example in the lecture. We load the dataset from the CSV file.
Step2: Let's plot the output variable (the price of a house) against each of the input variables (area in sq. feet, number of bedrooms) to get a little intuition about the data.
Step3: Let's transform our data into the right matrix format. Note that we add a column of one's to the $X$ matrix, to be multiplied with $\theta_0$.
Step4: Next we implement the hypothesis and cost functions and the parameter update using gradient descent.
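For reference, the pieces implemented below correspond (up to a constant 1/m factor, which this notebook simply omits) to
$$h_\theta(x) = \theta^T x, \qquad J(\theta) = \tfrac{1}{2}\sum_i \big(h_\theta(x^{(i)}) - y^{(i)}\big)^2, \qquad \theta \leftarrow \theta - \alpha\, X^T\big(h_\theta(X) - y\big).$$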
Step5: We are now ready to fit the model using gradient descent. Let's initialize our parameters to 0 and run 50 iterations of gradient descent to see how it behaves.
Step6: That doesn't look good. We expected the cost to steadily decrease as gradient descent progressed. Instead, the cost function diverged so much it exceeded our ability to represent it as a floating-point number. What happened?
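The culprit is feature scale: square footage is in the thousands while the room count is a single digit, so a fixed learning rate makes the updates overshoot and the cost explodes. The fix used in the next cell is standardization, $z = (x - \mu)/\sigma$, which is what sklearn.preprocessing.scale computes for each column.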
Step7: We can plot the data again to visualize the effect of the scaling operation.
Step8: Let us write a function to fit the model such that it automatically stops once the improvement in the value of the cost function is below a certain threshold.
Step9: Let's try to fit the model again with our scaled input and output matrices.
Step10: Success!
Step11: The parameters are close enough to consider our solution correct.
|
15,424 | <ASSISTANT_TASK:>
Python Code:
import h2o
h2o.init()
import os.path
PATH = os.path.expanduser("~/h2o-3/")
test_df = h2o.import_file(PATH + "bigdata/laptop/mnist/test.csv.gz")
train_df = h2o.import_file(PATH + "/bigdata/laptop/mnist/train.csv.gz")
y = "C785"
x = train_df.names[0:784]
train_df[y] = train_df[y].asfactor()
test_df[y] = test_df[y].asfactor()
def lenet(num_classes):
import mxnet as mx
data = mx.symbol.Variable('data')
# first conv
conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.symbol.Activation(data=conv1, act_type="tanh")
pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max", kernel=(2,2), stride=(2,2))
# second conv
conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=50)
tanh2 = mx.symbol.Activation(data=conv2, act_type="tanh")
pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max", kernel=(2,2), stride=(2,2))
# first fullc
flatten = mx.symbol.Flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.symbol.Activation(data=fc1, act_type="tanh")
# second fullc
fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=num_classes)
# loss
lenet = mx.symbol.SoftmaxOutput(data=fc2, name='softmax')
return lenet
nclasses = 10
mxnet_model = lenet(nclasses)
model_filename="/tmp/symbol_lenet-py.json"
mxnet_model.save(model_filename)
# pip install graphviz
# sudo apt-get install graphviz
import mxnet as mx
import graphviz
mx.viz.plot_network(mxnet_model, shape={"data":(1, 1, 28, 28)}, node_attrs={"shape":'rect',"fixedsize":'false'})
!head -n 20 $model_filename
from h2o.estimators.deepwater import H2ODeepWaterEstimator
lenet_model = H2ODeepWaterEstimator(
epochs=10,
learning_rate=1e-3,
mini_batch_size=64,
network_definition_file=model_filename,
# network='lenet', ## equivalent pre-configured model
image_shape=[28,28],
problem_type='dataset', ## Not 'image' since we're not passing paths to image files, but raw numbers
ignore_const_cols=False, ## We need to keep all 28x28=784 pixel values, even if some are always 0
channels=1
)
lenet_model.train(x=train_df.names, y=y, training_frame=train_df, validation_frame=test_df)
error = lenet_model.model_performance(valid=True).mean_per_class_error()
print "model error:", error
def cnn(num_classes):
import mxnet as mx
data = mx.symbol.Variable('data')
inputdropout = mx.symbol.Dropout(data=data, p=0.1)
# first convolution
conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=50)
tanh1 = mx.symbol.Activation(data=conv1, act_type="relu")
pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max", pad=(1,1), kernel=(3,3), stride=(2,2))
# second convolution
conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=100)
tanh2 = mx.symbol.Activation(data=conv2, act_type="relu")
pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max", pad=(1,1), kernel=(3,3), stride=(2,2))
# first fully connected layer
flatten = mx.symbol.Flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=1024)
relu3 = mx.symbol.Activation(data=fc1, act_type="relu")
inputdropout = mx.symbol.Dropout(data=fc1, p=0.5)
# second fully connected layer
flatten = mx.symbol.Flatten(data=relu3)
fc2 = mx.symbol.FullyConnected(data=flatten, num_hidden=1024)
relu4 = mx.symbol.Activation(data=fc2, act_type="relu")
inputdropout = mx.symbol.Dropout(data=fc2, p=0.5)
# third fully connected layer
fc3 = mx.symbol.FullyConnected(data=relu4, num_hidden=num_classes)
# loss
cnn = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
return cnn
nclasses = 10
mxnet_model = cnn(nclasses)
model_filename="/tmp/symbol_cnn-py.json"
mxnet_model.save(model_filename)
from h2o.estimators.deepwater import H2ODeepWaterEstimator
print("Importing the lenet model architecture for training in H2O")
model = H2ODeepWaterEstimator(
epochs=20,
learning_rate=1e-3,
mini_batch_size=64,
network_definition_file=model_filename,
image_shape=[28,28],
channels=1,
ignore_const_cols=False ## We need to keep all 28x28=784 pixel values, even if some are always 0
)
model.train(x=train_df.names, y=y, training_frame=train_df, validation_frame=test_df)
error = model.model_performance(valid=True).mean_per_class_error()
print "model error:", error
%matplotlib inline
import matplotlib
import numpy as np
import scipy.io
import matplotlib.pyplot as plt
from IPython.display import Image, display
import warnings
warnings.filterwarnings("ignore")
df = test_df.as_data_frame()
import numpy as np
image = df.T[int(np.random.random()*784)]
image.shape
plt.imshow(image[:-1].reshape(28, 28), plt.cm.gray);
print image[-1]
image_hf = h2o.H2OFrame.from_python(image.to_dict())
prediction = model.predict(image_hf)
prediction['predict']
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify the response and predictor columns
Step2: Convert the number to a class
Step3: Train Deep Learning model and validate on test set
Step4: Here we instantiate our lenet model using 10 classes
Step5: To import the model inside the DeepWater training engine we need to save the model to a file
Step6: The model is just the structure of the network expressed as a json dict
Step7: Importing the LeNET model architecture for training in H2O
Step8: A More powerful Architecture
Step9: Visualizing the results
|
15,425 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.graphics.api import qqplot
print(sm.datasets.sunspots.NOTE)
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(pd.date_range("1700", end="2009", freq="A-DEC"))
del dta["YEAR"]
dta.plot(figsize=(12,4));
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)
arma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)
print(arma_mod20.params)
arma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
sm.stats.durbin_watson(arma_mod30.resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
ax = plt.plot(arma_mod30.resid)
resid = arma_mod30.resid
stats.normaltest(resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
r,q,p = sm.tsa.acf(resid, fft=True, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True)
fig, ax = plt.subplots(figsize=(12, 8))
dta.loc['1950':].plot(ax=ax)
predict_sunspots.plot(ax=ax, style='r');
def mean_forecast_err(y, yhat):
return y.sub(yhat).mean()
mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sunspots Data
Step2: Does our model obey the theory?
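Concretely, the two fits above are AR(p) models with a constant term,
$$y_t = c + \sum_{i=1}^{p} \phi_i\, y_{t-i} + \varepsilon_t,$$
with p = 2 and p = 3, and the residual diagnostics below check whether the estimated $\varepsilon_t$ behaves like white noise.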
Step3: This indicates a lack of fit.
|
15,426 | <ASSISTANT_TASK:>
Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
!pip install biopython
import Bio
Bio.__version__
from Bio.Seq import Seq
my_seq = Seq("AGTACACATTG")
my_seq
my_seq.complement()
my_seq.reverse_complement()
!wget https://raw.githubusercontent.com/biopython/biopython/master/Doc/examples/ls_orchid.fasta
from Bio import SeqIO
for seq_record in SeqIO.parse('ls_orchid.fasta', 'fasta'):
print(seq_record.id)
print(repr(seq_record.seq))
print(len(seq_record))
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
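# Note: Bio.Alphabet was removed in Biopython 1.78; on newer versions Seq objects are
# created without an alphabet argument (and the alphabet-mismatch error demonstrated
# further below no longer applies).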
my_seq = Seq("ACAGTAGAC", IUPAC.unambiguous_dna)
my_seq
my_seq.alphabet
my_prot = Seq("AAAAA", IUPAC.protein) # Alanine pentapeptide
my_prot
my_prot.alphabet
print(len(my_prot))
my_prot[0]
my_prot[0:3]
my_prot + my_prot
my_prot + my_seq
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
coding_dna = Seq("ATGATCTCGTAA", IUPAC.unambiguous_dna)
coding_dna
template_dna = coding_dna.reverse_complement()
template_dna
messenger_rna = coding_dna.transcribe()
messenger_rna
messenger_rna.back_transcribe()
coding_dna.translate()
coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
coding_dna.translate()
coding_dna.translate(to_stop=True)
from Bio.Alphabet import generic_dna
gene = Seq("GTGAAAAAGATGCAATCTATCGTACTCGCACTTTCCCTGGTTCTGGTCGCTCCCATGGCA" + \
"GCACAGGCTGCGGAAATTACGTTAGTCCCGTCAGTAAAATTACAGATAGGCGATCGTGAT" + \
"AATCGTGGCTATTACTGGGATGGAGGTCACTGGCGCGACCACGGCTGGTGGAAACAACAT" + \
"TATGAATGGCGAGGCAATCGCTGGCACCTACACGGACCGCCGCCACCGCCGCGCCACCAT" + \
"AAGAAAGCTCCTCATGATCATCACGGCGGTCATGGTCCAGGCAAACATCACCGCTAA",
generic_dna)
# We specify a "table" to use a different translation table for bacterial proteins
gene.translate(table="Bacterial")
gene.translate(table="Bacterial", to_stop=True)
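# A related option (sketch): translate() also accepts cds=True, which additionally
# checks that the sequence is a valid CDS (begins with a start codon, has a length
# that is a multiple of three, ends with a single stop codon) and translates the
# start codon as methionine.
# gene.translate(table="Bacterial", cds=True)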
from Bio.SeqRecord import SeqRecord
help(SeqRecord)
from Bio.SeqRecord import SeqRecord
simple_seq = Seq("GATC")
simple_seq_r = SeqRecord(simple_seq)
simple_seq_r.id = "AC12345"
simple_seq_r.description = "Made up sequence"
print(simple_seq_r.id)
print(simple_seq_r.description)
!wget https://raw.githubusercontent.com/biopython/biopython/master/Tests/GenBank/NC_005816.fna
from Bio import SeqIO
record = SeqIO.read("NC_005816.fna", "fasta")
record
record.id
record.name
record.description
!wget https://raw.githubusercontent.com/biopython/biopython/master/Tests/GenBank/NC_005816.gb
from Bio import SeqIO
record = SeqIO.read("NC_005816.gb", "genbank")
record
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll use pip to install biopython
Step2: Parsing Sequence Records
Step3: Let's take a look at what the contents of this file look like
Step4: Sequence Objects
Step5: If we want to code a protein sequence, we can do that just as easily.
Step6: We can take the length of sequences and index into them like strings.
Step7: You can also use slice notation on sequences to get subsequences.
Step8: You can concatenate sequences if they have the same type, so this works.
Step9: But this fails
Step10: Transcription
Step11: Note that these sequences match those in the image below. You might be confused about why the template_dna sequence is shown reversed. The reason is that by convention, the template strand is read in the reverse direction.
Step12: We can also perform a "back-transcription" to recover the original coding strand from the messenger RNA.
Step13: Translation
Step14: Let's now consider a longer genetic sequence that has some more interesting structure for us to look at.
Step15: In both of the sequences above, '*' represents the stop codon. A stop codon is a sequence of 3 nucleotides that turns off the protein machinery. In DNA, the stop codons are 'TGA', 'TAA', 'TAG'. Note that this latest sequence has multiple stop codons. It's possible to run the machinery up to the first stop codon and pause too.
Step16: We're going to introduce a bit of terminology here. A complete coding sequence (CDS) is a nucleotide sequence of messenger RNA which is made of a whole number of codons (that is, the length of the sequence is a multiple of 3), starts with a "start codon" and ends with a "stop codon". A start codon is basically the opposite of a stop codon and is most commonly the sequence "AUG", but can be different (especially if you're dealing with something like bacterial DNA).
Step17: Handling Annotated Sequences
Step18: Let's write a bit of code involving SeqRecord and see how it comes out looking.
Step19: Let's now see how we can use SeqRecord to parse a large fasta file. We'll pull down a file hosted on the biopython site.
Step20: Note how there's a number of annotations attached to the SeqRecord object!
Step21: Let's now look at the same sequence, but downloaded from GenBank. We'll download the hosted file from the biopython tutorial website as before.
|
15,427 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
15,428 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
# Add your filename and uncomment the following line:
Image(filename='bad graph.jpg')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Violations of graphical excellence and integrity
|
15,429 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import pandas as pd
import numpy as np
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
train_path = tf.keras.utils.get_file(
"iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
"iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
train.head()
train_y = train.pop('Species')
test_y = test.pop('Species')
# The label column has now been removed from the features.
train.head()
def input_evaluation_set():
features = {'SepalLength': np.array([6.4, 5.0]),
'SepalWidth': np.array([2.8, 2.3]),
'PetalLength': np.array([5.6, 3.3]),
'PetalWidth': np.array([2.2, 1.0])}
labels = np.array([2, 1])
return features, labels
def input_fn(features, labels, training=True, batch_size=256):
    """An input function for training or evaluating"""
# Convert the inputs to a Dataset.
dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
# Shuffle and repeat if you are in training mode.
if training:
dataset = dataset.shuffle(1000).repeat()
return dataset.batch(batch_size)
# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# Two hidden layers of 30 and 10 nodes respectively.
hidden_units=[30, 10],
# The model must choose between 3 classes.
n_classes=3)
# Train the Model.
classifier.train(
input_fn=lambda: input_fn(train, train_y, training=True),
steps=5000)
eval_result = classifier.evaluate(
input_fn=lambda: input_fn(test, test_y, training=False))
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
'SepalLength': [5.1, 5.9, 6.9],
'SepalWidth': [3.3, 3.0, 3.1],
'PetalLength': [1.7, 4.2, 5.4],
'PetalWidth': [0.5, 1.5, 2.1],
}
def input_fn(features, batch_size=256):
    """An input function for prediction."""
# Convert the inputs to a Dataset without labels.
return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)
predictions = classifier.predict(
input_fn=lambda: input_fn(predict_x))
for pred_dict, expec in zip(predictions, expected):
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id]
print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
SPECIES[class_id], 100 * probability, expec))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Premade Estimators
Step2: The dataset
Step3: Next, download and parse the Iris dataset using Keras and Pandas. Note that separate datasets are kept for training and testing.
Step4: Inspect the data and confirm that there are four floating-point feature columns and one int32 label.
Step5: For each dataset, split off the labels that the model will be trained to predict.
Step6: Overview of programming with Estimators
Step8: If you write your own input function, you can generate the features dictionary and label list however you like, but we recommend using TensorFlow's Dataset API, which can parse all kinds of data.
Step9: Define the feature columns
Step10: Feature columns can be constructed to be far more sophisticated than the ones shown here. For more details on feature columns, see this guide.
Step11: Train, evaluate, and predict
Step12: Note that the input_fn call is wrapped in a lambda to capture the arguments while providing an input function that takes no arguments, as the Estimator expects. The steps argument tells the method to stop training after a given number of training steps.
Step14: Unlike the call to the train method, no steps argument is passed for evaluation. The input_fn for eval returns only a single epoch of data.
Step15: The predict method returns a Python iterable, yielding a dictionary of prediction results for each example. The following code prints the predictions and their probabilities.
|
15,430 | <ASSISTANT_TASK:>
Python Code:
from sympy import *
init_session()
def const_model(E, nu, const="plane_stress"):
if const == "plane_stress":
fac = E/(1 - nu**2)
C = fac*Matrix([
[1, nu, 0],
[nu, 1, 0],
[0, 0, (1 - nu)/2]])
elif const == "plane_strain":
fac = E*(1 - nu)/((1 - 2*nu)*(1 + nu))
C = fac*Matrix([
[1, nu/(1 - nu), 0],
[nu/(1 - nu), 1, 0],
[0, 0, (1 - 2*nu)/(2*(1 - nu))]])
return C
r, s = symbols("r s")
N = S(1)/4 *Matrix([
[(1 - r)*(1 - s)],
[(1 + r)*(1 - s)],
[(1 + r)*(1 + s)],
[(1 - r)*(1 + s)]])
display(N)
H = zeros(2, 8)
for cont in range(4):
H[0, 2*cont] = N[cont]
H[1, 2*cont + 1] = N[cont]
display(H.T)
dHdr = zeros(2, 4)
for cont in range(4):
dHdr[0, cont] = diff(N[cont], r)
dHdr[1, cont] = diff(N[cont], s)
display(dHdr)
def gauss_int2d(f, x, y):
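    # 2x2 Gauss-Legendre quadrature over the reference square [-1, 1] x [-1, 1]
    # (points at +/- 1/sqrt(3), unit weights).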
acu = 0
pts = [-1/sqrt(3), 1/sqrt(3)]
w = [1, 1]
for i in range(2):
for j in range(2):
acu += f.subs({x: pts[i], y: pts[j]})*w[i]*w[j]
return acu
def jaco(dHdr, coord_el):
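    # Jacobian of the isoparametric mapping from (r, s) to (x, y).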
return simplify(dHdr * coord_el)
def jaco_inv(dHdr, coord_el):
jac = jaco(dHdr, coord_el)
return Matrix([[jac[1, 1], -jac[0, 1]], [-jac[1, 0], jac[0, 0]]])/jac.det()
def B_mat(dHdr, coord_el):
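    # Strain-displacement matrix B (3x8): rows correspond to eps_xx, eps_yy and gamma_xy.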
dHdx = jaco_inv(dHdr, coord_el) * dHdr
B = zeros(3, 8)
for cont in range(4):
B[0, 2*cont] = dHdx[0, cont]
B[1, 2*cont + 1] = dHdx[1, cont]
B[2, 2*cont] = dHdx[1, cont]
B[2, 2*cont + 1] = dHdx[0, cont]
return simplify(B)
def local_mass(H, coord_el, rho):
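    # Consistent element mass matrix (8x8): integral of rho * H^T H, filled using symmetry.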
det = jaco(dHdr, coord_el).det()
integrand = rho * det * expand(H.T * H)
mass_mat = zeros(8, 8)
for row in range(8):
for col in range(row, 8):
mass_mat[row, col] = gauss_int2d(integrand[row, col], r, s)
mass_mat[col, row] = mass_mat[row, col]
return mass_mat
def local_stiff(dHdr, coord_el, C):
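    # Element stiffness matrix (8x8): integral of B^T C B, filled using symmetry.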
det = jaco(dHdr, coord_el).det()
B = B_mat(dHdr, coord_el)
integrand = det * expand(B.T * C * B)
stiff_mat = zeros(8, 8)
for row in range(8):
for col in range(row, 8):
stiff_mat[row, col] = gauss_int2d(integrand[row, col], r, s)
stiff_mat[col, row] = stiff_mat[row, col]
return stiff_mat
def assembler(coords, elems, mat_props, const="plane_stress"):
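    # Assemble the global stiffness and mass matrices; each node carries
    # two degrees of freedom (horizontal and vertical displacement).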
ncoords = coords.shape[0]
stiff_glob = zeros(2*ncoords, 2*ncoords)
mass_glob = zeros(2*ncoords, 2*ncoords)
for el_cont, elem in enumerate(elems):
E, nu, rho = mat_props[el_cont]
C = const_model(E, nu, const=const)
coord_el = coords[elem, :]
stiff_loc = local_stiff(dHdr, coord_el, C)
mass_loc = local_mass(H, coord_el, rho)
for row in range(4):
for col in range(4):
row_glob = elem[row]
col_glob = elem[col]
# Stiffness matrix
stiff_glob[2*row_glob, 2*col_glob] += stiff_loc[2*row, 2*col]
stiff_glob[2*row_glob, 2*col_glob + 1] += stiff_loc[2*row, 2*col + 1]
stiff_glob[2*row_glob + 1, 2*col_glob] += stiff_loc[2*row + 1, 2*col]
stiff_glob[2*row_glob + 1, 2*col_glob + 1] += stiff_loc[2*row + 1, 2*col + 1]
# Mass matrix
mass_glob[2*row_glob, 2*col_glob] += mass_loc[2*row, 2*col]
mass_glob[2*row_glob, 2*col_glob + 1] += mass_loc[2*row, 2*col + 1]
mass_glob[2*row_glob + 1, 2*col_glob] += mass_loc[2*row + 1, 2*col]
mass_glob[2*row_glob + 1, 2*col_glob + 1] += mass_loc[2*row + 1, 2*col + 1]
return stiff_glob, mass_glob
coords = Matrix([
[-1, -1],
[1, -1],
[1, 1],
[-1, 1]])
elems = [[0, 1, 2, 3]]
mat_props = [[S(8)/3, S(1)/3, 1]]
stiff, mass = assembler(coords, elems, mat_props, const="plane_strain")
stiff
mass
coords = Matrix([
[-1, -1],
[0, -1],
[1, -1],
[-1, 0],
[0, 0],
[1, 0],
[-1, 1],
[0, 1],
[1, 1]])
elems = [[0, 1, 4, 3],
[1, 2, 5, 4],
[3, 4, 7, 6],
[4, 5, 8, 7]]
mat_props = [[16, S(1)/3, 1]]*4
stiff, _ = assembler(coords, elems, mat_props)
stiff_exact = Matrix([
[8, 3, -5, 0, 0, 0, 1, 0, -4, -3, 0, 0, 0, 0, 0, 0, 0, 0],
[3, 8, 0, 1, 0, 0, 0, -5, -3, -4, 0, 0, 0, 0, 0, 0, 0, 0],
[-5, 0, 16, 0, -5, 0, -4, 3, 2, 0, -4, -3, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 16, 0, 1, 3, -4, 0, -10, -3, -4, 0, 0, 0, 0, 0, 0],
[0, 0, -5, 0, 8, -3, 0, 0, -4, 3, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, -3, 8, 0, 0, 3, -4, 0, -5, 0, 0, 0, 0, 0, 0],
[1, 0, -4, 3, 0, 0, 16, 0, -10, 0, 0, 0, 1, 0, -4, -3, 0, 0],
[0, -5, 3, -4, 0, 0, 0, 16, 0, 2, 0, 0, 0, -5, -3, -4, 0, 0],
[-4, -3, 2, 0, -4, 3, -10, 0, 32, 0, -10, 0, -4, 3, 2, 0, -4, -3],
[-3, -4, 0, -10, 3, -4, 0, 2, 0, 32, 0, 2, 3, -4, 0, -10, -3, -4],
[0, 0, -4, -3, 1, 0, 0, 0, -10, 0, 16, 0, 0, 0, -4, 3, 1, 0],
[0, 0, -3, -4, 0, -5, 0, 0, 0, 2, 0, 16, 0, 0, 3, -4, 0, -5],
[0, 0, 0, 0, 0, 0, 1, 0, -4, 3, 0, 0, 8, -3, -5, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, -5, 3, -4, 0, 0, -3, 8, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, -4, -3, 2, 0, -4, 3, -5, 0, 16, 0, -5, 0],
[0, 0, 0, 0, 0, 0, -3, -4, 0, -10, 3, -4, 0, 1, 0, 16, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, -4, -3, 1, 0, 0, 0, -5, 0, 8, 3],
[0, 0, 0, 0, 0, 0, 0, 0, -3, -4, 0, -5, 0, 0, 0, 1, 3, 8]])
stiff_exact - stiff
coords = Matrix([
[0, 0],
[1, 0],
[2, 0],
[0, 1],
[1, 1],
[2, 1]])
elems = [[0, 1, 4, 3],
[1, 2, 5, 4]]
mat_props = [[1, S(3)/10, 1]]*4
stiff, mass = assembler(coords, elems, mat_props, const="plane_stress")
load = zeros(12, 1)
load[5] = -S(1)/2
load[11] = -S(1)/2
load
stiff2 = stiff.copy()
stiff2[0, :] = eye(12)[0, :]
stiff2[:, 0] = eye(12)[:, 0]
stiff2[1, :] = eye(12)[1, :]
stiff2[:, 1] = eye(12)[:, 1]
stiff2[6, :] = eye(12)[6, :]
stiff2[:, 6] = eye(12)[:, 6]
stiff2[7, :] = eye(12)[7, :]
stiff2[:, 7] = eye(12)[:, 7]
sol = linsolve((stiff2, load))
sol
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interpolation functions and matrices
Step2: The interpolation matrix is a matrix with the interpolation functions arranged so that both displacement components can be interpolated from the nodal values.
Step3: The local derivatives matrix is formed with the derivatives of the interpolation functions
Step4: Gauss integration
Step5: Local matrices generation
Step6: We can re-arrange the derivatives of the interpolation functions as a matrix that relates the nodal displacements to the strains, i.e., the B matrix.
Step7: With these elements we can form the local stiffness and mass matrices.
Step8: Assembly process
Step9: Example
Step10: Example
Step11: Example
|
15,431 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from astropy.table import Table, join
from astropy import units as u
from astropy.coordinates import SkyCoord, search_around_sky
from IPython.display import clear_output
import pickle
import os
import sys
sys.path.append("..")
from mltier1 import (get_center, Field, MultiMLEstimator, MultiMLEstimatorOld,
parallel_process, get_sigma_all, get_sigma_all_old, describe)
%load_ext autoreload
%autoreload
from IPython.display import clear_output
%pylab inline
save_intermediate = True
plot_intermediate = True
idp = "../idata/final_pdf_v0.9"
if not os.path.isdir(idp):
os.makedirs(idp)
# Busy week Edinburgh 2017
ra_down = 172.09
ra_up = 187.5833
dec_down = 46.106
dec_up = 56.1611
# Busy week Hatfield 2017
ra_down = 170.
ra_up = 190.
dec_down = 46.8
dec_up = 55.9
# Full field July 2017
ra_down = 160.
ra_up = 232.
dec_down = 42.
dec_up = 62.
field = Field(170.0, 190.0, 46.8, 55.9)
field_full = Field(160.0, 232.0, 42.0, 62.0)
combined_all = Table.read("../pw.fits")
lofar_all = Table.read("../data/LOFAR_HBA_T1_DR1_catalog_v0.9.srl.fixed.fits")
#lofar_all = Table.read("data/LOFAR_HBA_T1_DR1_merge_ID_optical_v0.8.fits")
np.array(combined_all.colnames)
np.array(lofar_all.colnames)
lofar = field_full.filter_catalogue(lofar_all, colnames=("RA", "DEC"))
combined = field_full.filter_catalogue(combined_all,
colnames=("ra", "dec"))
combined["colour"] = combined["i"] - combined["W1mag"]
combined_aux_index = np.arange(len(combined))
coords_combined = SkyCoord(combined['ra'],
combined['dec'],
unit=(u.deg, u.deg),
frame='icrs')
coords_lofar = SkyCoord(lofar['RA'],
lofar['DEC'],
unit=(u.deg, u.deg),
frame='icrs')
combined_matched = (~np.isnan(combined["i"]) & ~np.isnan(combined["W1mag"])) # Matched i-W1 sources
combined_panstarrs = (~np.isnan(combined["i"]) & np.isnan(combined["W1mag"])) # Sources with only i-band
combined_wise =(np.isnan(combined["i"]) & ~np.isnan(combined["W1mag"])) # Sources with only W1-band
combined_i = combined_matched | combined_panstarrs
combined_w1 = combined_matched | combined_wise
#combined_only_i = combined_panstarrs & ~combined_matched
#combined_only_w1 = combined_wise & ~combined_matched
print("Total - ", len(combined))
print("i and W1 - ", np.sum(combined_matched))
print("Only i - ", np.sum(combined_panstarrs))
print("With i - ", np.sum(combined_i))
print("Only W1 - ", np.sum(combined_wise))
print("With W1 - ", np.sum(combined_w1))
colour_limits = [0.0, 0.5, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.5, 4.0]
# Start with the W1-only, i-only and "less than lower colour" bins
colour_bin_def = [{"name":"only W1", "condition": combined_wise},
{"name":"only i", "condition": combined_panstarrs},
{"name":"-inf to {}".format(colour_limits[0]),
"condition": (combined["colour"] < colour_limits[0])}]
# Get the colour bins
for i in range(len(colour_limits)-1):
name = "{} to {}".format(colour_limits[i], colour_limits[i+1])
condition = ((combined["colour"] >= colour_limits[i]) &
(combined["colour"] < colour_limits[i+1]))
colour_bin_def.append({"name":name, "condition":condition})
# Add the "more than higher colour" bin
colour_bin_def.append({"name":"{} to inf".format(colour_limits[-1]),
"condition": (combined["colour"] >= colour_limits[-1])})
combined["category"] = np.nan
for i in range(len(colour_bin_def)):
combined["category"][colour_bin_def[i]["condition"]] = i
np.sum(np.isnan(combined["category"]))
numbers_combined_bins = np.array([np.sum(a["condition"]) for a in colour_bin_def])
numbers_combined_bins
bin_list, centers, Q_0_colour, n_m, q_m = pickle.load(open("../lofar_params.pckl", "rb"))
likelihood_ratio_function = MultiMLEstimator(Q_0_colour, n_m, q_m, centers)
likelihood_ratio_function_old = MultiMLEstimatorOld(Q_0_colour, n_m, q_m, centers)
radius = 15
selection = ~np.isnan(combined["category"]) # Avoid the dreaded sources with no actual data
catalogue = combined[selection]
def apply_ml(i, likelihood_ratio_function):
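    # Compute the likelihood ratio of every candidate counterpart of LOFAR
    # source i and return the index, distance and LR of the best match.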
idx_0 = idx_i[idx_lofar == i]
d2d_0 = d2d[idx_lofar == i]
category = catalogue["category"][idx_0].astype(int)
mag = catalogue["i"][idx_0]
mag[category == 0] = catalogue["W1mag"][idx_0][category == 0]
lofar_ra = lofar[i]["RA"]
lofar_dec = lofar[i]["DEC"]
lofar_pa = lofar[i]["PA"]
lofar_maj_err = lofar[i]["E_Maj"]
lofar_min_err = lofar[i]["E_Min"]
c_ra = catalogue["ra"][idx_0]
c_dec = catalogue["dec"][idx_0]
c_ra_err = catalogue["raErr"][idx_0]
c_dec_err = catalogue["decErr"][idx_0]
sigma, sigma_maj, sigma_min = get_sigma_all(lofar_maj_err, lofar_min_err, lofar_pa,
lofar_ra, lofar_dec,
c_ra, c_dec, c_ra_err, c_dec_err)
lr_0 = likelihood_ratio_function(mag, d2d_0.arcsec, sigma, sigma_maj, sigma_min, category)
chosen_index = np.argmax(lr_0)
result = [combined_aux_index[selection][idx_0[chosen_index]], # Index
(d2d_0.arcsec)[chosen_index], # distance
lr_0[chosen_index]] # LR
return result
from mltier1 import fr_u, fr_u_old
def check_ml(i, likelihood_ratio_function, likelihood_ratio_function_old, verbose=True):
idx_0 = idx_i[idx_lofar == i]
d2d_0 = d2d[idx_lofar == i]
category = catalogue["category"][idx_0].astype(int)
mag = catalogue["i"][idx_0]
mag[category == 0] = catalogue["W1mag"][idx_0][category == 0]
lofar_ra = lofar[i]["RA"]
lofar_dec = lofar[i]["DEC"]
lofar_pa = lofar[i]["PA"]
lofar_maj_err = lofar[i]["E_Maj"]
lofar_min_err = lofar[i]["E_Min"]
c_ra = catalogue["ra"][idx_0]
c_dec = catalogue["dec"][idx_0]
c_ra_err = catalogue["raErr"][idx_0]
c_dec_err = catalogue["decErr"][idx_0]
sigma, sigma_maj, sigma_min = get_sigma_all_old(lofar_maj_err, lofar_min_err, lofar_pa,
lofar_ra, lofar_dec,
c_ra, c_dec, c_ra_err, c_dec_err)
sigma_0_0, det_sigma = get_sigma_all(lofar_maj_err, lofar_min_err, lofar_pa,
lofar_ra, lofar_dec,
c_ra, c_dec, c_ra_err, c_dec_err)
fr = fr_u(d2d_0.arcsec, sigma_0_0, det_sigma)
fr_old = np.array(fr_u_old(d2d_0.arcsec, sigma, sigma_maj, sigma_min))
if verbose:
print("NEW - s00: {}; sdet: {}; fr: {}".format(sigma_0_0, det_sigma, fr))
print("OLD - s: {}; smin: {}; smaj: {}; fr: {}".format(
np.array(sigma), np.array(sigma_maj), np.array(sigma_min), fr_old))
lr_0 = likelihood_ratio_function(mag, d2d_0.arcsec, sigma_0_0, det_sigma, category)
lr_0_old = likelihood_ratio_function_old(mag, d2d_0.arcsec, sigma, sigma_maj, sigma_min, category)
chosen_index = np.argmax(lr_0)
chosen_index_old = np.argmax(lr_0_old)
ix, dist, lr = (combined_aux_index[selection][idx_0[chosen_index]], # Index
(d2d_0.arcsec)[chosen_index], # distance
lr_0[chosen_index])
ix_old, dist_old, lr_old = (combined_aux_index[selection][idx_0[chosen_index_old]], # Index
(d2d_0.arcsec)[chosen_index_old], # distance
lr_0[chosen_index_old] )
if verbose:
print("NEW res - Ix: {}; dist: {}; LR: {}".format(ix, dist, lr)) # LR
print("OLD res - Ix: {}; dist: {}; LR: {}".format(ix_old, dist_old, lr_old))
return (sigma_0_0, det_sigma, fr, np.array(sigma), np.array(sigma_maj), np.array(sigma_min),
fr_old, ix, dist, lr, ix_old, dist_old, lr_old, (lofar_maj_err, lofar_min_err, lofar_pa,
lofar_ra, lofar_dec, c_ra, c_dec, c_ra_err, c_dec_err))
idx_lofar, idx_i, d2d, d3d = search_around_sky(
coords_lofar, coords_combined[selection], radius*u.arcsec)
idx_lofar_unique = np.unique(idx_lofar)
list_i = [141, 235, 396, 412, 418, 711, 858, 887, 932, 965, 1039, 1389, 1680, 1699,
1787, 1927, 2168, 2267, 2339, 2410, 2548, 2838, 2969, 3136, 3163, 3265,
3348, 3353, 3401]
for i in range(100000):
s00, det_s, fr, s, s_maj, s_min, fr_o, ix, dist, lr, ix_o, dist_o, lr_o, p = check_ml(idx_lofar_unique[i],
likelihood_ratio_function, likelihood_ratio_function_old, verbose=False)
if (ix != ix_o) and ((lr > 6) or (lr_o > 6)):
print(i)
#print(ix, dist, lr)
#print(ix_o, dist_o, lr_o)
#print(s00, det_s, fr, s, s_maj, s_min, fr_o)
#print(p)
list_i = [141, 235, 396, 412, 418, 711, 858, 887, 932, 965, 1039, 1389, 1680, 1699,
1787, 1927, 2168, 2267, 2339, 2410, 2548, 2838, 2969, 3136, 3163, 3265,
3348, 3353, 3401, 3654, 3687, 4022, 4074, 4083, 4164, 4263]
for i in list_i:
s00, det_s, fr, s, s_maj, s_min, fr_o, ix, dist, lr, ix_o, dist_o, lr_o, p = check_ml(idx_lofar_unique[i],
likelihood_ratio_function, likelihood_ratio_function_old, verbose=False)
if ix != ix_o:
print(i)
print(ix, dist, lr)
print(ix_o, dist_o, lr_o)
print(s00, det_s, fr, s, s_maj, s_min, fr_o)
print(p)
import multiprocessing
n_cpus_total = multiprocessing.cpu_count()
n_cpus = max(1, n_cpus_total-1)
def ml(i):
return apply_ml(i, likelihood_ratio_function)
res = parallel_process(idx_lofar_unique, ml, n_jobs=n_cpus)
lofar["lr"] = np.nan # Likelihood ratio
lofar["lr_dist"] = np.nan # Distance to the selected source
lofar["lr_index"] = np.nan # Index of the PanSTARRS source in combined
(lofar["lr_index"][idx_lofar_unique],
lofar["lr_dist"][idx_lofar_unique],
lofar["lr"][idx_lofar_unique]) = list(map(list, zip(*res)))
total_sources = len(idx_lofar_unique)
combined_aux_index = np.arange(len(combined))
lofar["lrt"] = lofar["lr"]
lofar["lrt"][np.isnan(lofar["lr"])] = 0
q0 = np.sum(Q_0_colour)
def completeness(lr, threshold, q0):
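    # Estimated completeness of the cross-match for a given LR threshold.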
n = len(lr)
lrt = lr[lr < threshold]
return 1. - np.sum((q0 * lrt)/(q0 * lrt + (1 - q0)))/float(n)/q0
def reliability(lr, threshold, q0):
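    # Estimated reliability (purity) of the matches accepted above the LR threshold.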
n = len(lr)
lrt = lr[lr > threshold]
return 1. - np.sum((1. - q0)/(q0 * lrt + (1 - q0)))/float(n)/q0
completeness_v = np.vectorize(completeness, excluded=[0])
reliability_v = np.vectorize(reliability, excluded=[0])
n_test = 100
threshold_mean = np.percentile(lofar["lrt"], 100*(1 - q0))
thresholds = np.arange(0., 10., 0.01)
thresholds_fine = np.arange(0.1, 1., 0.001)
completeness_t = completeness_v(lofar["lrt"], thresholds, q0)
reliability_t = reliability_v(lofar["lrt"], thresholds, q0)
average_t = (completeness_t + reliability_t)/2
completeness_t_fine = completeness_v(lofar["lrt"], thresholds_fine, q0)
reliability_t_fine = reliability_v(lofar["lrt"], thresholds_fine, q0)
average_t_fine = (completeness_t_fine + reliability_t_fine)/2
threshold_sel = thresholds_fine[np.argmax(average_t_fine)]
plt.rcParams["figure.figsize"] = (15,6)
subplot(1,2,1)
plot(thresholds, completeness_t, "r-")
plot(thresholds, reliability_t, "g-")
plot(thresholds, average_t, "k-")
vlines(threshold_sel, 0.9, 1., "k", linestyles="dashed")
vlines(threshold_mean, 0.9, 1., "y", linestyles="dashed")
ylim([0.9, 1.])
xlabel("Threshold")
ylabel("Completeness/Reliability")
subplot(1,2,2)
plot(thresholds_fine, completeness_t_fine, "r-")
plot(thresholds_fine, reliability_t_fine, "g-")
plot(thresholds_fine, average_t_fine, "k-")
vlines(threshold_sel, 0.9, 1., "k", linestyles="dashed")
#vlines(threshold_mean, 0.9, 1., "y", linestyles="dashed")
ylim([0.97, 1.])
xlabel("Threshold")
ylabel("Completeness/Reliability")
print(threshold_sel)
plt.rcParams["figure.figsize"] = (15,6)
subplot(1,2,1)
hist(lofar[lofar["lrt"] != 0]["lrt"], bins=200)
vlines([threshold_sel], 0, 5000)
ylim([0,5000])
subplot(1,2,2)
hist(np.log10(lofar[lofar["lrt"] != 0]["lrt"]+1), bins=200)
vlines(np.log10(threshold_sel+1), 0, 5000)
ticks, _ = xticks()
xticks(ticks, ["{:.1f}".format(10**t-1) for t in ticks])
ylim([0,5000]);
lofar["lr_index_sel"] = lofar["lr_index"]
lofar["lr_index_sel"][lofar["lrt"] < threshold_sel] = np.nan
combined["lr_index_sel"] = combined_aux_index.astype(float)
pwl = join(lofar, combined,
join_type='left',
keys='lr_index_sel',
uniq_col_name='{col_name}{table_name}',
table_names=['_input', ''])
pwl_columns = pwl.colnames
for col in pwl_columns:
fv = pwl[col].fill_value
if (isinstance(fv, np.float64) and (fv != 1e+20)):
print(col, fv)
pwl[col].fill_value = 1e+20
columns_save = ['Source_Name', 'RA', 'E_RA', 'DEC', 'E_DEC',
'Peak_flux', 'E_Peak_flux', 'Total_flux', 'E_Total_flux',
'Maj', 'E_Maj', 'Min', 'E_Min', 'PA', 'E_PA', 'Isl_rms', 'S_Code', 'Mosaic_ID',
'AllWISE', 'objID', 'ra', 'dec', 'raErr', 'decErr',
'W1mag', 'W1magErr', 'i', 'iErr', 'colour', 'category',
'lr', 'lr_dist']
pwl[columns_save].filled().write('lofar_pw_pdf.fits', format="fits")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: General configuration
Step2: Area limits
Step3: Load data
Step4: Filter catalogues
Step5: Additional data
Step6: Sky coordinates
Step7: Class of sources in the combined catalogue
Step8: Colour categories
Step9: We get the number of sources of the combined catalogue in each colour category. It will be used at a later stage to compute the $Q_0$ values
Step10: Maximum Likelihood
Step11: ML match
Step12: Run the cross-match
Step13: Run the ML matching
Step14: Threshold and selection
Step15: Save combined catalogue
|
15,432 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import netCDF4 as nc
import climlab
ncep_filename = 'air.mon.1981-2010.ltm.nc'
# This will try to read the data over the internet.
#ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/"
#ncep_air = nc.Dataset( ncep_url + 'pressure/' + ncep_filename )
# Or to read from local disk
ncep_air = nc.Dataset( ncep_filename )
level = ncep_air.variables['level'][:]
lat = ncep_air.variables['lat'][:]
# A log-pressure height coordinate
zstar = -np.log(level/1000)
# Take averages of the temperature data
Tzon = np.mean(ncep_air.variables['air'][:],axis=(0,3))
Tglobal = np.average( Tzon , weights=np.cos(np.deg2rad(lat)), axis=1) + climlab.constants.tempCtoK
# Note the useful conversion factor. climlab.constants has lots of commonly used constant pre-defined
# Here we are plotting with respect to log(pressure) but labeling the axis in pressure units
fig = plt.figure( figsize=(8,6) )
ax = fig.add_subplot(111)
ax.plot( Tglobal , zstar )
yticks = np.array([1000., 750., 500., 250., 100., 50., 20., 10.])
ax.set_yticks(-np.log(yticks/1000.))
ax.set_yticklabels(yticks)
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
ax.set_title('Global, annual mean sounding from NCEP Reanalysis', fontsize = 24)
ax.grid()
# Repeating code from previous notebook ... set up a column model with observed temperatures.
# initialize a grey radiation model with 30 levels
col = climlab.GreyRadiationModel()
# interpolate to 30 evenly spaced pressure levels
lev = col.lev
Tinterp = np.interp(lev, np.flipud(level), np.flipud(Tglobal))
# Initialize model with observed temperatures
col.Ts[:] = Tglobal[0]
col.Tatm[:] = Tinterp
# The tuned value of absorptivity
eps = 0.0534
# set it
col.subprocess.LW.absorptivity = eps
# Pure radiative equilibrium
# Make a clone of our first model
re = climlab.process_like(col)
# Run out to equilibrium
re.integrate_years(2.)
# Check for energy balance
re.ASR - re.OLR
# And set up a RadiativeConvective model,
rce = climlab.RadiativeConvectiveModel(adj_lapse_rate=6.)
# Set our tuned absorptivity value
rce.subprocess.LW.absorptivity = eps
# Run out to equilibrium
rce.integrate_years(2.)
# Check for energy balance
rce.ASR - rce.OLR
# A handy re-usable routine for making a plot of the temperature profiles
# We will plot temperatures with respect to log(pressure) to get a height-like coordinate
def plot_sounding(collist):
color_cycle=['r', 'g', 'b', 'y', 'm']
# col is either a column model object or a list of column model objects
if isinstance(collist, climlab.Process):
# make a list with a single item
collist = [collist]
fig = plt.figure()
ax = fig.add_subplot(111)
for i, col in enumerate(collist):
zstar = -np.log(col.lev/climlab.constants.ps)
ax.plot(col.Tatm, zstar, color=color_cycle[i])
ax.plot(col.Ts, 0, 'o', markersize=12, color=color_cycle[i])
#ax.invert_yaxis()
yticks = np.array([1000., 750., 500., 250., 100., 50., 20., 10.])
ax.set_yticks(-np.log(yticks/1000.))
ax.set_yticklabels(yticks)
ax.set_xlabel('Temperature (K)')
ax.set_ylabel('Pressure (hPa)')
ax.grid()
return ax
# Make a plot to compare observations and Radiative-Convective Equilibrium
plot_sounding([col, re, rce])
# To read from internet
#datapath = "http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/Brian+Rose/CESM+runs/som_input/"
#endstr = "/entry.das"
ozone_filename = 'ozone_1.9x2.5_L26_2000clim_c091112.nc'
datapath = ''
endstr = ''
ozone = nc.Dataset( datapath + ozone_filename + endstr )
print ozone.variables['O3']
lat = ozone.variables['lat'][:]
lon = ozone.variables['lon'][:]
lev = ozone.variables['lev'][:]
print lev
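# Average the ozone climatology over time and longitude, then form a
# cos(latitude)-weighted global-mean profile.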
O3_zon = np.mean( ozone.variables['O3'],axis=(0,3) )
O3_global = np.sum( O3_zon * np.cos(np.deg2rad(lat)), axis=1 ) / np.sum( np.cos(np.deg2rad(lat) ) )
O3_global.shape
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(1,2,1)
cax = ax1.contourf(lat, np.log(lev/climlab.constants.ps), O3_zon * 1.E6)
ax1.invert_yaxis()
ax1.set_xlabel('Latitude', fontsize=16)
ax1.set_ylabel('Pressure (hPa)', fontsize=16 )
yticks = np.array([1000., 500., 250., 100., 50., 20., 10., 5.])
ax1.set_yticks( np.log(yticks/1000.) )
ax1.set_yticklabels( yticks )
ax1.set_title('Ozone concentration (annual mean)', fontsize = 16);
plt.colorbar(cax)
ax2 = fig.add_subplot(1,2,2)
ax2.plot( O3_global * 1.E6, np.log(lev/climlab.constants.ps) )
ax2.invert_yaxis()
ax2.set_xlabel('Ozone (ppm)', fontsize=16)
ax2.set_ylabel('Pressure (hPa)', fontsize=16 )
yticks = np.array([1000., 500., 250., 100., 50., 20., 10., 5.])
ax2.set_yticks( np.log(yticks/1000.) )
ax2.set_yticklabels( yticks )
ax2.set_title('Global mean ozone concentration', fontsize = 16);
# Create the column with appropriate vertical coordinate, surface albedo and convective adjustment
band1 = climlab.BandRCModel(lev=lev, adj_lapse_rate=6)
print band1
band1.absorber_vmr
band1.state
band1.integrate_years(2)
# Check for energy balance
band1.ASR - band1.OLR
# Add another line to our graph!
plot_sounding([col, re, rce, band1])
band2 = climlab.process_like(band1)
print band2
band2.absorber_vmr['O3'] = O3_global
band2.absorber_vmr
# Run the model out to equilibrium!
band2.integrate_years(2.)
# Add another line to our graph!
plot_sounding([col, re, rce, band1, band2])
band2.absorber_vmr['CO2']
# Let's double CO2 and calculate radiative forcing
band3 = climlab.process_like(band2)
band3.absorber_vmr['CO2'] *= 2.
band3.absorber_vmr['CO2']
band3.compute_diagnostics()
print 'The radiative forcing for doubling CO2 is %f W/m2.' % (band2.OLR - band3.OLR)
# and make another copy, which we will integrate out to equilibrium
band4 = climlab.process_like(band3)
band4.integrate_years(3)
band4.ASR - band4.OLR
DeltaT = band4.Ts - band2.Ts
print 'The Equilibrium Climate Sensitivity is %f K.' % DeltaT
# We multiply the H2O mixing ratio by 1000 to get units of g / kg
# (amount of water vapor per mass of air)
plt.plot( band2.absorber_vmr['H2O'] *1000, lev, label='before 2xCO2')
plt.plot( band4.absorber_vmr['H2O'] * 1000., lev, label='after 2xCO2')
plt.xlabel('g/kg H2O')
plt.ylabel('pressure (hPa)')
plt.legend(loc='upper right')
plt.grid()
# This reverses the axis so pressure decreases upward
plt.gca().invert_yaxis()
# First, make a new clone
noh2o = climlab.process_like(band2)
# See what absorbing gases are currently present
noh2o.absorber_vmr
# double the CO2 !
noh2o.absorber_vmr['CO2'] *= 2
# Check out the list of subprocesses
print noh2o
# Remove the process that changes the H2O
noh2o.remove_subprocess('H2O')
print noh2o
noh2o.absorber_vmr
noh2o.integrate_years(3)
noh2o.ASR - noh2o.OLR
# Repeat the same plot of water vapor profiles, but add a third curve
# We'll plot the new model as a dashed line to make it easier to see.
plt.plot( band2.absorber_vmr['H2O'] *1000, lev, label='before 2xCO2')
plt.plot( band4.absorber_vmr['H2O'] * 1000., lev, label='after 2xCO2')
plt.plot( noh2o.absorber_vmr['H2O'] * 1000., lev, linestyle='--', label='No H2O feedback')
plt.xlabel('g/kg H2O')
plt.ylabel('pressure (hPa)')
plt.legend(loc='upper right')
plt.grid()
# This reverses the axis so pressure decreases upward
plt.gca().invert_yaxis()
DeltaT_noh2o = noh2o.Ts - band2.Ts
print 'The Equilibrium Climate Sensitivity with water vapor feedback is %f K.' % DeltaT_noh2o
DeltaT - DeltaT_noh2o
# Create the column with appropriate vertical coordinate, surface albedo and convective adjustment
tuneband = climlab.BandRCModel(lev=lev, adj_lapse_rate=6, albedo_sfc=0.1)
print tuneband
tuneband.absorber_vmr['O3'] = O3_global
tuneband.absorber_vmr
tuneband.compute_diagnostics()
tuneband.absorber_vmr
tuneband.subprocess.LW.absorption_cross_section
# This set of parameters gives correct Ts and ASR
# But seems weird... the radiative forcing for doubling CO2 is tiny
#tuneband.subprocess.SW.reflectivity[15] = 0.255
#tuneband.subprocess.LW.absorption_cross_section['CO2'][1] = 2.
#tuneband.subprocess.LW.absorption_cross_section['H2O'][2] = 0.45
# Just tune the surface albedo, won't get correct ASR but get correct Ts
tuneband = climlab.BandRCModel(lev=lev, adj_lapse_rate=6, albedo_sfc=0.22)
tuneband.integrate_converge()
tuneband.Ts
tuneband.ASR
tuneband.OLR
tuneband.param
tband2 = climlab.process_like(tuneband)
tband2.subprocess['LW'].absorber_vmr['CO2'] *= 2
tband2.compute_diagnostics()
tband2.OLR
tband2.step_forward()
tband2.OLR
tband2.integrate_converge()
tband2.Ts
ncep_air
ncep_air.variables.keys()
latmodel = climlab.BandRCModel(lat=lat, lev=lev, adj_lapse_rate=6, albedo_sfc=0.22)
print latmodel
latmodel.Tatm.shape
ncep_air.variables['air'].shape
tuneband.Ts
tuneband.subprocess.SW.albedo_sfc = 0.4
tuneband.integrate_converge()
tuneband.Ts
tuneband.
# make a model on the same grid as the ozone
model = climlab.BandRCModel(lev=lev, lat=lat, albedo_sfc=0.22)
insolation = climlab.radiation.insolation.AnnualMeanInsolation(domains=model.Ts.domain)
model.add_subprocess('insolation', insolation)
model.subprocess.SW.flux_from_space = model.subprocess.insolation.insolation
print model
model.subprocess.insolation.insolation
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's just repeat the calculations we did in the previous notebook RadiativeConvectiveEquilibrium.ipynb
Step2: Stratospheric ozone
Step3: The pressure levels in this dataset are:
Step4: Take the global average of the ozone climatology, and plot it as a function of pressure (or height)
Step5: This shows that most of the ozone is indeed in the stratosphere, and peaks near the top of the stratosphere.
Step6: Check out the list of subprocesses.
Step7: Ozone and CO2 are both specified in the model. The default, as you see above, is zero ozone, and constant (well-mixed) CO2 at a volume mixing ratio of 3.8E-4 or 380 ppm.
Step8: Now put in the ozone
Step9: Once we include ozone we get a well-defined stratosphere.
Step10: We've just increased CO2 from 380 ppm to 760 ppm.
Step11: Investigating the role of water vapor feedback in climate sensitivity
Step12: Water vapor decreases from the surface upward mostly because the temperature decreases!
Step13: Notice that the subprocess labelled 'H2O' is gone from the list.
Step14: But this will be held fixed now as the climate changes in noh2o.
Step15: Indeed, the water vapor is identical in the new equilibrium climate to the old pre-CO2-increase model.
Step16: So the effect of the water vapor feedback on the climate sensitivity to doubled CO2 is:
Step17: We get about an additional degree of warming from the water vapor increase.
|
15,433 | <ASSISTANT_TASK:>
Python Code:
import sisl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
graphene = sisl.geom.graphene()
print(graphene)
H = sisl.Hamiltonian(graphene)
print(H)
H[0, 0] = 0.0
H[1, 1] = 0.0
H[0, 1] = -2.7
H[1, 0] = -2.7
H[0, 1, (-1, 0)] = -2.7
H[0, 1, (0, -1)] = -2.7
H[1, 0, (1, 0)] = -2.7
H[1, 0, (0, 1)] = -2.7
print(H)
print("Gamma:", H.eigh())
print("K:", H.eigh(k=[2./3,1./3,0]))
for ia, io in H.geometry.iter_orbitals(local=False):
# This loops over all atoms and the orbitals
# corresponding to the atom.
# In this case the geometry has one orbital per atom, hence
# ia == io
# in all cases.
# In order to figure out which atoms atom `ia` is connected
# to, we must find those atoms.
# To do this we access the geometry attached to the
# Hamiltonian (H.geom)
# and use a function called `close` which returns ALL
# atomic indices within certain ranges of a given point or atom
idx = H.geometry.close(ia, R = [0.1, 1.43])
# the argument R has two entries:
# 0.1 and 1.43
# Each value represents a radii of a sphere.
# The `close` function will then return
# a list of equal length of the R argument (i.e. a list with
# two values).
# idx[0] is the first element and is also a list
# of all atoms within a sphere of 0.1 AA of atom `ia`.
# This should obviously only contain the atom it-self.
# The second element, idx[1], contains all atoms within a sphere
# with radius of 1.43 AA, but not including those within 0.1 AA.
# In this case this is then all atoms that are the nearest neighbour
# atoms.
# Now we know the on-site atoms (idx[0]) and the nearest neighbour
# atoms (idx[1]), all we need to do is set the Hamiltonian
# elements:
# on-site (0. eV)
H[io, idx[0]] = 0.
# nearest-neighbour (-2.7 eV)
H[io, idx[1]] = -2.7
print(H)
print("Gamma:", H.eigh())
print("K:", H.eigh(k=[2./3,1./3,0]))
band = sisl.BandStructure(H, [[0., 0.], [2./3, 1./3],
[1./2, 1./2], [1., 1.]], 301,
[r'$\Gamma$', 'K', 'M', r'$\Gamma$'])
eigs = band.apply.array.eigh()
# Retrieve the tick-marks and the linear k points
xtick, xtick_label = band.lineartick()
lk = band.lineark()
plt.plot(lk, eigs)
plt.ylabel('Eigenspectrum [eV]')
plt.gca().xaxis.set_ticks(xtick)
plt.gca().set_xticklabels(xtick_label)
# Also plot x-major lines at the ticks
ymin, ymax = plt.gca().get_ylim()
for tick in xtick:
plt.plot([tick,tick], [ymin,ymax], 'k')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instead of manually defining the graphene system with associated atomic coordinates and lattice vectors, we use the built-in sisl capability of defining the graphene structure with a default atomic distance of $d = 1.42\,A$.
Step2: Some basic information about the geometry is shown above.
Step3: Now H is a Hamiltonian object. It works equivalently to a matrix and one may assign and extract elements as though it were a matrix; we will return to the intricate details of the Hamiltonian object later.
Step4: Now we need to set the coupling elements
Step5: This will only couple the first and second atom in the primary unit-cell. But we also require couplings from the primary unit-cell to the neighbouring supercells. Remember that nsc = [3, 3, 1].
Step6: Now all matrix elements are set, i.e. 2 on-site and 6 nearest-neighbour couplings; let's assert this.
Step7: We find 8 non-zero elements, as there should be. Note that even though we set the on-site terms to $0$, they are counted as non-zero elements because we set them explicitly.
Step8: Looping the atoms and orbitals in the Hamiltonian
Step9: The above loop is equivalent to the previously explicitly set values, so printing the structure will yield the same information; we have just specified all the values again.
Step10: After having set up the Hamiltonian, we may easily calculate the eigenvalues at any $\mathbf k$ (in reduced coordinates $\mathbf k\in\,]-0.5; 0.5]$).
Step11: We may also create a bandstructure of the Hamiltonian.
Step12: Now eigs contains all the eigenvalues of the Hamiltonian object for all the $k$-points.
|
15,434 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# %load depthprobe_ex.py
import numpy as np
import bornagain as ba
from bornagain import deg, angstrom, nm
# layer thicknesses in angstroms
t_Ti = 130.0 * angstrom
t_Pt = 320.0 * angstrom
t_Ti_top = 100.0 * angstrom
t_TiO2 = 30.0 * angstrom
# beam data
ai_min = 0.0 * deg # minimum incident angle
ai_max = 1.0 * deg # maximum incident angle
n_ai_bins = 500 # number of bins in incident angle axis
beam_sample_ratio = 0.01 # beam-to-sample size ratio
wl = 10 * angstrom # wavelength in angstroms
# angular beam divergence from https://mlz-garching.de/maria
d_ang = np.degrees(3.0e-03)*deg # spread width for incident angle
n_points = 50 # number of points to convolve over
n_sig = 3 # number of sigmas to convolve over
# wavelength divergence from https://mlz-garching.de/maria
d_wl = 0.1*wl # spread width for the wavelength
n_points_wl = 50
n_sig_wl = 2
# depth position span
z_min = -100 * nm # 300 nm to the sample and substrate
z_max = 100 * nm # 100 nm to the ambient layer
n_z_bins = 500
def get_sample():
    """Constructs a sample with one resonating Ti/Pt layer"""
# define materials
m_Si = ba.MaterialBySLD("Si", 2.07e-06, 2.38e-11)
m_Ti = ba.MaterialBySLD("Ti", 2.8e-06, 5.75e-10)
m_Pt = ba.MaterialBySLD("Pt", 6.36e-06, 1.9e-09)
m_TiO2 = ba.MaterialBySLD("TiO2", 2.63e-06, 5.4e-10)
m_D2O = ba.MaterialBySLD("D2O", 6.34e-06, 1.13e-13)
# create layers
l_Si = ba.Layer(m_Si)
l_Ti = ba.Layer(m_Ti, 130.0 * angstrom)
l_Pt = ba.Layer(m_Pt, 320.0 * angstrom)
l_Ti_top = ba.Layer(m_Ti, 100.0 * angstrom)
l_TiO2 = ba.Layer(m_TiO2, 30.0 * angstrom)
l_D2O = ba.Layer(m_D2O)
# construct sample
sample = ba.MultiLayer()
sample.addLayer(l_Si)
# put your code here (1 line), take care of correct indents
sample.addLayer(l_Ti)
sample.addLayer(l_Pt)
sample.addLayer(l_Ti_top)
sample.addLayer(l_TiO2)
sample.addLayer(l_D2O)
return sample
def get_simulation():
    """Returns a depth-probe simulation."""
footprint = ba.FootprintFactorSquare(beam_sample_ratio)
simulation = ba.DepthProbeSimulation()
simulation.setBeamParameters(wl, n_ai_bins, ai_min, ai_max, footprint)
simulation.setZSpan(n_z_bins, z_min, z_max)
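    # Gaussian FWHM-to-sigma relation (FWHM = 2*sqrt(2*ln2) * sigma) for the
    # beam divergence distributions used below.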
fwhm2sigma = 2*np.sqrt(2*np.log(2))
# add angular beam divergence
# put your code here (2 lines)
# add wavelength divergence
# put your code here (2 lines)
return simulation
def run_simulation():
    """Runs simulation and returns its result."""
sample = get_sample()
simulation = get_simulation()
simulation.setSample(sample)
simulation.runSimulation()
return simulation.result()
if __name__ == '__main__':
result = run_simulation()
ba.plot_simulation_result(result)
%load depthprobe.py
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Evanescent wave intensity
Step4: Solution
|
15,435 | <ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
return (x - x.min())/(x.max() - x.min())
tests.test_normalize(normalize)
import sklearn.preprocessing
label_binarizer = sklearn.preprocessing.LabelBinarizer()
label_binarizer.fit(range(10))
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
return label_binarizer.transform(x)
tests.test_one_hot_encode(one_hot_encode)
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], 'x')
def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
return tf.placeholder(tf.float32, [None, n_classes], 'y')
def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
return tf.placeholder(tf.float32, name='keep_prob')
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, ):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([conv_num_outputs]))
conv1 = tf.nn.relu(tf.nn.bias_add(tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], 'SAME'), bias))
return tf.nn.max_pool(conv1, [1, pool_ksize[0], pool_ksize[1], 1], [1, pool_strides[0], pool_strides[1], 1], 'SAME')
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
size = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor, shape=[-1, size[1] * size[2] * size[3]])
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs for the new tensor.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
return tf.nn.relu(tf.nn.bias_add(tf.matmul(x_tensor, weights), bias))
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs for the new tensor.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
return tf.nn.bias_add(tf.matmul(x_tensor, weights), bias)
tests.test_output(output)
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds the dropout keep probability.
    : return: Tensor that represents logits
    """
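    # Architecture: three conv/max-pool blocks (64, 128, 256 filters),
    # flatten, one fully connected layer with dropout, then a 10-class output.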
conv1 = conv2d_maxpool(x, 64, [2, 2], [1, 1], [2, 2], [2, 2])
conv1 = conv2d_maxpool(conv1, 128, [3, 3], [1, 1], [2, 2], [2, 2])
conv1 = conv2d_maxpool(conv1, 256, [3, 3], [1, 1], [2, 2], [2, 2])
conv1 = flatten(conv1)
conv1 = fully_conn(conv1, 2500)
conv1 = tf.nn.dropout(conv1, keep_prob)
return output(conv1, 10)
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch , keep_prob: keep_probability})
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
cost_val = session.run(cost, feed_dict={x: feature_batch, y: label_batch , keep_prob: 1})
accuracy_val = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels , keep_prob: 1})
print("Cost: {} Accuracy: {}".format(cost_val, accuracy_val))
# TODO: Tune Parameters
epochs = 50
batch_size = 256
keep_probability = 0.75
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """
    Test the saved model against the test dataset
    """
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step4: Implement Preprocess Functions
Step6: One-hot encode
Step7: Randomize Data
Step8: Check Point
Step12: Build the network
Step15: Convolution and Max Pooling Layer
Step17: Flatten Layer
Step19: Fully-Connected Layer
Step22: Output Layer
Step24: Create Convolutional Model
Step26: Show Stats
Step27: Hyperparameters
Step28: Train on a Single CIFAR-10 Batch
Step29: Fully Train the Model
Step31: Checkpoint
|
15,436 | <ASSISTANT_TASK:>
Python Code:
import re
import time
import pprint
from qumulo.rest_client import RestClient
rc = RestClient("qumulo.test", 8000)
rc.login("admin", "*********");
def create_policy_for_diff(rc, policy_name, path='/', minutes=10):
try:
dets = rc.fs.get_file_attr(path=path)
except:
print("!!! Unable to find directory: %s" % path)
return
policy = rc.snapshot.create_policy(
name = policy_name,
directory_id = dets['id'],
schedule_info = {"creation_schedule":{
"frequency":"SCHEDULE_HOURLY_OR_LESS",
"fire_every":1,
"fire_every_interval":"FIRE_IN_MINUTES",
"window_start_hour":0,
"window_start_minute":0,
"window_end_hour":23,
"window_end_minute":59,
"on_days":["MON","TUE","WED","THU","FRI","SAT","SUN"],
"timezone":"America/Los_Angeles"
},
"expiration_time_to_live":"%sminutes" % minutes
}
)
print("Created policy on directory '%s': %s expires after %s" % (
path,
policy['name'],
policy['schedules'][0]['expiration_time_to_live']))
def diff_snaps(rc, policy_name):
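# Descriptive note: repeatedly diff the two oldest snapshots created under
# `policy_name`, collect one change record per file, delete the older snapshot,
# and stop once fewer than two snapshots of that policy remain.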
snap_count = 2 # set up for the 1st loop
paths = []
while snap_count >= 2:
all_snaps = rc.snapshot.list_snapshot_statuses()['entries']
short_list = filter(lambda s: s['name'] == policy_name, all_snaps)
snaps = sorted(short_list, key=lambda s: s['id'])
if len(snaps) < 2:
break
print("Diff times: %s -> %s" % (snaps[0]['timestamp'][0:19],
snaps[1]['timestamp'][0:19]))
diff = rc.snapshot.get_all_snapshot_tree_diff(snaps[1]['id'],
snaps[0]['id'])
for d in diff:
for e in d['entries']:
if e['path'][-1] == "/":
continue # it's a directory
sz = None
owner = None
try:
dets = rc.fs.get_file_attr(e['path'])
sz = dets['size']
owner = dets['owner_details']['id_value']
except:
pass
if e['op'] == 'DELETE' and sz is not None:
continue # don't add deletes for existing files
paths.append({'op': e['op'],
'path': e['path'],
'size': sz,
'owner': owner,
'snapshot_id': snaps[1]['id']})
# delete the oldest snapshot
rc.snapshot.delete_snapshot(snaps[0]['id'])
snap_count = len(snaps) - 1
return paths
create_policy_for_diff(rc, 'EveryMinuteForDiffs')
diff_list = diff_snaps(rc, 'EveryMinuteForDiffs')
print("Found %s file changes." % len(diff_list))
owners = {}
ops = {}
diffs = {}
for d in diff_list:
owners[d['owner']] = 1
if d['op'] not in ops:
ops[d['op']] = 1
ops[d['op']] += 1
diffs[d['snapshot_id']] = 1
print("Ops: %s" % ' | '.join(["%s:%s" % (k,v) for k, v in ops.items()]))
print("Diff count: %s" % len(diffs))
print("Owner count: %s" % len(owners))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Snapshot Policy
Step2: Diff all snapshots in a policy
|
15,437 | <ASSISTANT_TASK:>
Python Code:
from keras.datasets import imdb
idx = imdb.get_word_index()
idx_arr = sorted(idx, key=idx.get)
idx_arr[:10]
idx2word = {v: k for k, v in idx.iteritems()}
path = get_file('imdb_full.pkl',
origin='https://s3.amazonaws.com/text-datasets/imdb_full.pkl',
md5_hash='d091312047c43cf9e4e38fef92437263')
f = open(path, 'rb')
(x_train, labels_train), (x_test, labels_test) = pickle.load(f)
len(x_train)
', '.join(map(str, x_train[0]))
idx2word[23022]
' '.join([idx2word[o] for o in x_train[0]])
labels_train[:10]
vocab_size = 5000
trn = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_train]
test = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_test]
lens = np.array(map(len, trn))
(lens.max(), lens.min(), lens.mean())
seq_len = 500
trn = sequence.pad_sequences(trn, maxlen=seq_len, value=0)
test = sequence.pad_sequences(test, maxlen=seq_len, value=0)
trn.shape
model = Sequential([
Embedding(vocab_size, 32, input_length=seq_len),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.summary()
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64, verbose=2)
conv1 = Sequential([
Embedding(vocab_size, 32, input_length=seq_len, dropout=0.2),
Dropout(0.2),
Convolution1D(64, 5, border_mode='same', activation='relu'),
Dropout(0.2),
MaxPooling1D(),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
conv1.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
conv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64, verbose=2)
conv1.save_weights(model_path + 'conv1.h5')
conv1.load_weights(model_path + 'conv1.h5')
def get_glove_dataset(dataset):
"""
Download the requested glove dataset from files.fast.ai
and return a location that can be passed to load_vectors.
"""
# see wordvectors.ipynb for info on how these files were
# generated from the original glove data.
md5sums = {'6B.50d': '8e1557d1228decbda7db6dfd81cd9909',
'6B.100d': 'c92dbbeacde2b0384a43014885a60b2c',
'6B.200d': 'af271b46c04b0b2e41a84d8cd806178d',
'6B.300d': '30290210376887dcc6d0a5a6374d8255'}
glove_path = os.path.abspath('data/glove/results')
%mkdir -p $glove_path
return get_file(dataset,
'http://files.fast.ai/models/glove/' + dataset + '.tgz',
cache_subdir=glove_path,
md5_hash=md5sums.get(dataset, None),
untar=True)
def load_vectors(loc):
return (load_array(loc+'.dat'),
pickle.load(open(loc+'_words.pkl','rb')),
pickle.load(open(loc+'_idx.pkl','rb')))
vecs, words, wordidx = load_vectors(get_glove_dataset('6B.50d'))
def create_emb():
n_fact = vecs.shape[1]
emb = np.zeros((vocab_size, n_fact))
for i in range(1,len(emb)):
word = idx2word[i]
if word and re.match(r"^[a-zA-Z0-9\-]*$", word):
src_idx = wordidx[word]
emb[i] = vecs[src_idx]
else:
# If we can't find the word in glove, randomly initialize
emb[i] = normal(scale=0.6, size=(n_fact,))
# This is our "rare word" id - we want to randomly initialize
emb[-1] = normal(scale=0.6, size=(n_fact,))
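# Scale the whole embedding matrix down; presumably an empirical choice from the
# original notebook to keep the initial embedding magnitudes small.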
emb/=3
return emb
emb = create_emb()
model = Sequential([
Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2,
weights=[emb], trainable=False),
Dropout(0.25),
Convolution1D(64, 5, border_mode='same', activation='relu'),
Dropout(0.25),
MaxPooling1D(),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64, verbose=2)
model.layers[0].trainable=True
model.optimizer.lr=1e-4
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=1, batch_size=64, verbose = 2)
model.save_weights(model_path+'glove50.h5')
from keras.layers import Merge
graph_in = Input ((vocab_size, 50))
convs = [ ]
for fsz in range (3, 6):
x = Convolution1D(64, fsz, border_mode='same', activation="relu")(graph_in)
x = MaxPooling1D()(x)
x = Flatten()(x)
convs.append(x)
out = Merge(mode="concat")(convs)
graph = Model(graph_in, out)
emb = create_emb()
model = Sequential ([
Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2, weights=[emb]),
Dropout (0.2),
graph,
Dropout (0.5),
Dense (100, activation="relu"),
Dropout (0.7),
Dense (1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64, verbose=2)
model.layers[0].trainable=False
model.optimizer.lr=1e-5
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64, verbose=2)
model = Sequential([
Embedding(vocab_size, 32, input_length=seq_len, mask_zero=True,
W_regularizer=l2(1e-6), dropout=0.2),
LSTM(100, consume_less='gpu'),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=5, batch_size=64, verbose=2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is the word list
Step2: ...and this is the mapping from id to word
Step3: We download the reviews using code copied from keras.datasets
Step4: Here's the 1st review. As you see, the words have been replaced by ids. The ids can be looked up in idx2word.
Step5: The first word of the first review is 23022. Let's see what that is.
Step6: Here's the whole review, mapped from ids to words.
Step7: The labels are 1 for positive, 0 for negative.
Step8: Reduce vocab size by setting rare words to max index.
Step9: Look at distribution of lengths of sentences.
Step10: Pad (with zero) or truncate each sentence to make consistent length.
Step11: This results in nice rectangular matrices that can be passed to ML algorithms. Reviews shorter than 500 words are pre-padded with zeros, those greater are truncated.
Step12: Create simple models
Step13: The stanford paper that this dataset is from cites a state of the art accuracy (without unlabelled data) of 0.883. So we're short of that, but on the right track.
Step14: That's well past the Stanford paper's accuracy - another win for CNNs!
Step16: Pre-trained vectors
Step17: The glove word ids and imdb word ids use different indexes. So we create a simple function that creates an embedding matrix using the indexes from imdb, and the embeddings from glove (where they exist).
Step18: We pass our embedding matrix to the Embedding constructor, and set it to non-trainable.
Step19: We already have beaten our previous model! But let's fine-tune the embedding weights - especially since the words we couldn't find in glove just have random embeddings.
Step20: As expected, that's given us a nice little boost.
Step21: Multi-size CNN
Step22: We use the functional API to create multiple conv layers of different sizes, and then concatenate them.
Step23: We then replace the conv/max-pool layer in our original CNN with the concatenated conv layers.
Step24: Interestingly, I found that in this case I got best results when I started the embedding layer as being trainable, and then set it to non-trainable after a couple of epochs. I have no idea why!
Step25: This more complex architecture has given us another boost in accuracy.
|
15,438 | <ASSISTANT_TASK:>
Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
import strategy
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
pd.set_option('display.max_rows', None)
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
# symbol: (description, multiplier)
equities = {
'ES=F': ('E-Mini S&P 500 Futures', 50),
'YM=F': ('Mini Dow Jones Futures', 5),
'NQ=F': ('Nasdaq 100 Futures', 20),
}
interest_rates = {
'ZB=F': ('U.S. Treasury Bond Futures', 1000),
'ZN=F': ('10-Year T-Note Futures', 1000),
'ZF=F': ('Five-Year US Treasury Note Futures', 1000),
'ZT=F': ('2-Year T-Note Futures', 2000)
}
# https://www.cmegroup.com/markets/agriculture.html#products
agricultural_commodities = {
# Grains
'ZC=F': ('Corn Futures', 5000),
'KE=F': ('KC HRW Wheat Futures', 5000),
'ZS=F': ('Soybean Futures', 50),
'KE=F': ('KC HRW Wheat Futures', 50),
'ZR=F': ('Rough Rice Futures', 2000),
'ZM=F': ('Soybean Meal Futures', 100),
'ZL=F': ('Soybean Oil Futures', 600),
'GF=F': ('Feeder Cattle Futures', 500),
'HE=F': ('Lean Hogs Futures', 400),
'CC=F': ('Cocoa Futures', 10),
'KC=F': ('Coffee Futures', 375),
'CT=F': ('Cotton Futures', 50000),
'LBS=F': ('Lumber Futures', 110),
'SB=F': ('Sugar #11 Futures', 1120)
}
non_agricultural_commodities = {
'GC=F': ('Gold Futures', 100),
'SI=F': ('Silver Futures', 5000),
'PL=F': ('Platinum Futures', 50),
'HG=F': ('Copper Futures', 25000),
'PA=F': ('Palladium Futures', 100),
'CL=F': ('Crude Oil Futures', 1000),
'HO=F': ('Heating Oil Futures', 42000),
'NG=F': ('Natural Gas Futures', 10000),
'RB=F': ('RBOB Gasoline Futures', 42000)
}
# https://www.cmegroup.com/markets/fx.html#products
currency_multiplier = 100
currencies = {
# G10
'DX=F': ('U.S. Dollar Index', currency_multiplier),
'6E=F': ('Euro FX Futures', currency_multiplier),
'6J=F': ('Japanese Yen Futures', currency_multiplier),
'6A=F': ('Australian Dollar Futures', currency_multiplier),
'6B=F': ('British Pound Futures', currency_multiplier),
'6C=F': ('Canadian Dollar Futures', currency_multiplier),
'6S=F': ('Swiss Franc Futures', currency_multiplier),
'6N=F': ('New Zealand Dollar Futures', currency_multiplier),
}
futures = {**equities, **interest_rates, **agricultural_commodities,
**non_agricultural_commodities, **currencies}
ten_largest = ['ZN=F', 'ES=F', 'CL=F', 'GC=F', 'ZC=F', 'KC=F', 'CT=F', 'DX=F']
symbols = list(ten_largest)
#symbols = ['ES=F', 'GC=F', 'CL=F']
capital = 100_000
start = datetime.datetime(1900, 1, 1)
end = datetime.datetime.now()
options = {
'use_adj' : False,
'use_cache' : True,
'sell_short' : False,
'force_stock_market_calendar' : True,
'margin' : 2,
'sma_timeperiod_slow': 100,
'sma_timeperiod_fast': 50,
'use_vola_weight' : True
}
s = strategy.Strategy(symbols, capital, start, end, options=options)
s.run()
s.rlog.head()
s.tlog.head()
s.dbal.tail()
pf.print_full(s.stats)
weights = {symbol: 1 / len(symbols) for symbol in symbols}
totals = s.portfolio.performance_per_symbol(weights=weights)
totals
corr_df = s.portfolio.correlation_map(s.ts)
corr_df
benchmark = pf.Benchmark('SPY', s.capital, s.start, s.end, use_adj=True)
benchmark.run()
pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)
df = pf.plot_bar_graph(s.stats, benchmark.stats)
df
kelly = pf.kelly_criterion(s.stats, benchmark.stats)
kelly
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Investment Universe
Step2: Run Strategy
Step3: View log DataFrames
Step4: Generate strategy stats - display all available stats
Step5: View Performance by Symbol
Step6: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
Step7: Plot Equity Curves
Step8: Bar Graph
Step9: Analysis
|
15,439 | <ASSISTANT_TASK:>
Python Code:
%%bash
source activate root # you need to change here to your env name
pip install kaggle-cli
%%bash
source activate root # you need to change here to your env name
rm -rf data
mkdir -p data
pushd data
kg download
unzip -q train.zip
unzip -q test.zip
popd
from glob import glob
import numpy as np
from shutil import move, copyfile
%mkdir -p data/train
%mkdir -p data/valid
%mkdir -p data/sample/train
%mkdir -p data/sample/valid
%pushd data/train
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(200): copyfile(shuf[i], '../sample/train/' + shuf[i])
shuf = np.random.permutation(g)
for i in range(200): copyfile(shuf[i], '../sample/valid/' + shuf[i])
# validation files are moved
shuf = np.random.permutation(g)
for i in range(1000): move(shuf[i], '../valid/' + shuf[i])
%popd
%pushd data/train
% mkdir cat dog
% mv cat*.jpg cat
% mv dog*.jpg dog
%popd
%pushd data/valid
% mkdir cat dog
% mv cat*.jpg cat
% mv dog*.jpg dog
%popd
%pushd data/sample/train
% mkdir cat dog
% mv cat*.jpg cat
% mv dog*.jpg dog
%popd
%pushd data/sample/valid
% mkdir cat dog
% mv cat*.jpg cat
% mv dog*.jpg dog
%popd
%pushd data/test
% mkdir unknown
% mv *.jpg unknown
%popd
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: download dataset
Step2: Copy files to valid and sample
Step3: Arrange files into class subdirectories
|
15,440 | <ASSISTANT_TASK:>
Python Code:
def get_val(line):
"""
Get the value after the key for a RIS formatted line
>>> get_val('AU - Garcia-Pino, Abel')
'Garcia-Pino, Abel'
>>> get_val('AU - Uversky, Vladimir N.')
'Uversky, Vladimir N.'
>>> get_val('SP - 6933')
'6933'
>>> get_val('EP - 6947')
'6947'
"""
#Finish...
import doctest
doctest.testmod()
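# One possible body for get_val, consistent with the doctests above
# (a sketch, not necessarily the intended solution):
# return line.split(' - ', 1)[1].strip()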
filein = open('data/achs_chreay114_6557.ris', 'r')
articles = []
for line in filein:
if line.strip()=='':
articles.append(dict())
if line.startswith('AU -'):
articles[-1].setdefault('authors', []).append(get_val(line).strip())
if line.startswith('SP -'):
#Finish...
if line.startswith('EP -'):
#finish...
filein.close()
articles = #Finish...
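# A possible completion of the exercise lines in this cell (a sketch; it assumes
# 'sp' and 'ep' were stored as ints while parsing the file):
# articles = [a for a in articles if a] # empty dicts are falsy, so this drops them
# page_lengths = [d['ep'] - d['sp'] + 1 for d in articles]
# and finally: .format(np.mean(page_lengths), np.std(page_lengths))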
import numpy as np
page_lengths = [*FINISH* for d in articles]
page_lengths
"The average number of pages is {:.1f} and its standard deviation {:.1f}".format(*FINISH*)
author_dict = {}
for d in articles:
for author in d['authors']:
author_dict[author] = author_dict.setdefault(author, 0) + 1
author_dict.values()
papers_authored = set([p for p in author_dict.values() if p>1]) # Remove repeated elements
papers_authored = # We need to convert to a list to sort
#Sort the list...
# Print the results
def get_pages(name, articles):
"""
Get the total number of pages for a given author.
Return 0 if the author is not in the article authors.
"""
pages = 0
for art in articles:
if name in art['authors']:
pages += art['ep']-art['sp']+1
return pages
#Finish...
from collections import Counter
author_count = Counter()
for d in articles:
author_count.update(d['authors'])
for author, val in author_count.most_common():
if val > 1:
pages = get_pages(author, articles)
print("{} published {} papers adding up to {} pages".format(author, val, pages))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Text Parsing. Counting authors in a journal issue
Step2: Now we scan the whole text file. For each empty line we create a new entry (we'll correct for the blank lines at the beginning of the file later).
Step3: Because there were some blank lines at the beginning of the file, some empty dictionaries were created that we can now remove (using the fact that empty objects evaluate as False)
Step4: Analysing the data
Step5: Now let's count how many papers each author has contributed to this issue. We'll create a dictionary with authors as keys and number of papers as values.
Step6: So one author published 7 papers in that issue, another one 3 papers and several have authored 2 papers! Let's see who authored more than one paper. We will print the results sorted by the number of papers authored. We want to get a list such as
Step8: Now let's see how many pages they have (presumably) written or supervised... Get something like
Step9: As usual, some documentation search would have shown us a module that could have eased the coding. The collections module has a Counter type that is useful for counting things. When fed with a list, Counter counts its elements and stores something similar to a dictionary.
Step10: author_dict and author_count are similar objects. But Counter has some useful counting methods, so we do not need the papers_authored list. Using the Counter object the code is simpler. This prints our final list in a single loop
|
15,441 | <ASSISTANT_TASK:>
Python Code:
import random as rnd
class Supplier():
def __init__(self):
self.wta = []
# the supplier has n quantities that they can sell
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self,n,l,u):
for i in range(n):
p = rnd.uniform(l,u)
self.wta.append(p)
# return the dictionary of willingness to ask
def get_ask(self):
return self.wta
class Buyer():
def __init__(self):
self.wtp = []
# the supplier has n quantities that they can buy
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self,n,l,u):
for i in range(n):
p = rnd.uniform(l,u)
self.wtp.append(p)
# return list of willingness to pay
def get_bid(self):
return self.wtp
class Market():
count = 0
last_price = ''
b = []
s = []
def __init__(self,b,s):
# buyer list sorted in descending order
self.b = sorted(b, reverse=True)
# seller list sorted in ascending order
self.s = sorted(s, reverse=False)
# return the price at which the market clears
# assume equal numbers of sincere buyers and sellers
def get_clearing_price(self):
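# Walk the sorted bid/ask lists in parallel: every position where the bid
# exceeds the ask clears one unit, and the last matched bid is reported as the price.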
# buyer makes a bid, starting with the buyer which wants it most
for i in range(len(self.b)):
if (self.b[i] > self.s[i]):
self.count +=1
self.last_price = self.b[i]
return self.last_price
def get_units_cleared(self):
return self.count
# make a supplier and get the asks
supplier = Supplier()
supplier.set_quantity(60,10,30)
ask = supplier.get_ask()
# make a buyer and get the bids (n,l,u)
buyer = Buyer()
buyer.set_quantity(60,10,30)
bid = buyer.get_bid()
# make a market where the buyers and suppliers can meet
# the bids and asks are a list of prices
market = Market(bid,ask)
price = market.get_clearing_price()
quantity = market.get_units_cleared()
# output the results of the market
print("Goods cleared for a price of ",price)
print("Units sold are ", quantity)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example Market
|
15,442 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn
import warnings
warnings.filterwarnings('ignore')
data = pd.read_csv("data.csv", header=None, skiprows=1, names=['age', 'workclass', 'fnlwgt',
'education-categorical', 'educ',
'marital-status', 'occupation',
'relationship', 'race', 'sex',
'captial-gain', 'capital-loss',
'hours', 'native-country',
'income'])
data
data = data[~pd.isnull(data['income'])]
data[data['native-country']==" United-States"]
income = 1 * (data['income'] == " >50K")
age2 = np.square(data['age'])
data = data[['age', 'educ', 'hours']]
data['age2'] = age2
data['income'] = income
income.value_counts()
with pm.Model() as model:
pm.glm.glm('income ~ age + age2 + educ + hours', data)
trace = pm.sample(2000, pm.NUTS(), progressbar=True)
trace = trace[1000:]
plt.figure(figsize=(9,7))
plt.hist2d(trace['age'], trace['educ'], alpha=.5, cmap="Reds", bins=25)
plt.colorbar()
seaborn.kdeplot(trace['age'], trace['educ'])
plt.xlabel("beta_age")
plt.ylabel("beta_educ")
plt.show()
# Linear model with hours == 50 and educ == 12
lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*12 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 16
lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*16 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 19
lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*19 +
samples['hours']*50)))
# Plot the posterior predictive distributions of P(income > $50K) vs. age
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15)
plt.ylim([.4,.8])
plt.ylabel("P(Income > $50K)")
plt.xlabel("Age")
plt.show()
b = trace['educ']
plt.hist(np.exp(b), bins=25, normed=True)
plt.xlabel("Odds Ratio")
plt.show()
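# 95% credible interval for the education coefficient; exponentiating 3*beta
# converts it to an odds ratio for a three-year increase in education.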
lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.
Step2: The model
Step3: Some results
Step4: So how do age and education affect the probability of making more than \$50K? To answer this question, we can show how the probability of making more than \$50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models
Step5: Each curve shows how the probability of earning more than \$50K changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than \$50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.
Step6: Finally, we can find a confidence interval for this quantity. This may be the best part about Bayesian statistics
|
15,443 | <ASSISTANT_TASK:>
Python Code:
%load exercises/3.1-colors.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, , t, t**2, , t, t**3, )
plt.show()
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, , t, t**2, , t, t**3, )
plt.show()
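# One possible way to fill in the colors above (a sketch; any valid color
# specifications work, and np/plt are assumed imported as earlier in this notebook):
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, 'r', t, t**2, 'g', t, t**3, 'b')
plt.show()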
xs, ys = np.mgrid[:4, 9:0:-1]
markers = [".", "+", ",", "x", "o", "D", "d", "", "8", "s", "p", "*", "|", "_", "h", "H", 0, 4, "<", "3",
1, 5, ">", "4", 2, 6, "^", "2", 3, 7, "v", "1", "None", None, " ", ""]
descripts = ["point", "plus", "pixel", "cross", "circle", "diamond", "thin diamond", "",
"octagon", "square", "pentagon", "star", "vertical bar", "horizontal bar", "hexagon 1", "hexagon 2",
"tick left", "caret left", "triangle left", "tri left", "tick right", "caret right", "triangle right", "tri right",
"tick up", "caret up", "triangle up", "tri up", "tick down", "caret down", "triangle down", "tri down",
"Nothing", "Nothing", "Nothing", "Nothing"]
fig, ax = plt.subplots(1, 1, figsize=(14, 4))
for x, y, m, d in zip(xs.T.flat, ys.T.flat, markers, descripts):
ax.scatter(x, y, marker=m, s=100)
ax.text(x + 0.1, y - 0.1, d, size=14)
ax.set_axis_off()
plt.show()
%load exercises/3.2-markers.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, , t, t**2, , t, t**3, )
plt.show()
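# One possible way to fill in the markers above (a sketch; any valid marker
# specifications work):
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, 'o', t, t**2, 's', t, t**3, '^')
plt.show()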
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, '-', t, t**2, '--', t, t**3, '-.', t, -t, ':')
plt.show()
fig, ax = plt.subplots(1, 1)
ax.bar([1, 2, 3, 4], [10, 20, 15, 13], ls='dashed', ec='r', lw=5)
plt.show()
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
%load exercises/3.3-properties.py
t = np.arange(0.0, 5.0, 0.1)
a = np.exp(-t) * np.cos(2*np.pi*t)
plt.plot(t, a, )
plt.show()
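# One possible set of line properties for the exercise above (a sketch;
# the particular values are just an example choice):
t = np.arange(0.0, 5.0, 0.1)
a = np.exp(-t) * np.cos(2*np.pi*t)
plt.plot(t, a, color='purple', marker='o', markersize=4, linestyle='--', linewidth=2)
plt.show()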
%load http://matplotlib.org/mpl_examples/color/colormaps_reference.py # For those with v1.2 or higher
"""
Reference for colormaps included with Matplotlib.
This reference example shows all colormaps included with Matplotlib. Note that
any colormap listed here can be reversed by appending "_r" (e.g., "pink_r").
These colormaps are divided into the following categories:
Sequential:
These colormaps are approximately monochromatic colormaps varying smoothly
between two color tones---usually from low saturation (e.g. white) to high
saturation (e.g. a bright blue). Sequential colormaps are ideal for
representing most scientific data since they show a clear progression from
low-to-high values.
Diverging:
These colormaps have a median value (usually light in color) and vary
smoothly to two different color tones at high and low values. Diverging
colormaps are ideal when your data has a median value that is significant
(e.g. 0, such that positive and negative values are represented by
different colors of the colormap).
Qualitative:
These colormaps vary rapidly in color. Qualitative colormaps are useful for
choosing a set of discrete colors. For example::
color_list = plt.cm.Set3(np.linspace(0, 1, 12))
gives a list of RGB colors that are good for plotting a series of lines on
a dark background.
Miscellaneous:
Colormaps that don't fit into the categories above.
"""
import numpy as np
import matplotlib.pyplot as plt
cmaps = [('Sequential', ['binary', 'Blues', 'BuGn', 'BuPu', 'gist_yarg',
'GnBu', 'Greens', 'Greys', 'Oranges', 'OrRd',
'PuBu', 'PuBuGn', 'PuRd', 'Purples', 'RdPu',
'Reds', 'YlGn', 'YlGnBu', 'YlOrBr', 'YlOrRd']),
('Sequential (2)', ['afmhot', 'autumn', 'bone', 'cool', 'copper',
'gist_gray', 'gist_heat', 'gray', 'hot', 'pink',
'spring', 'summer', 'winter']),
('Diverging', ['BrBG', 'bwr', 'coolwarm', 'PiYG', 'PRGn', 'PuOr',
'RdBu', 'RdGy', 'RdYlBu', 'RdYlGn', 'seismic']),
('Qualitative', ['Accent', 'Dark2', 'hsv', 'Paired', 'Pastel1',
'Pastel2', 'Set1', 'Set2', 'Set3', 'spectral']),
('Miscellaneous', ['gist_earth', 'gist_ncar', 'gist_rainbow',
'gist_stern', 'jet', 'brg', 'CMRmap', 'cubehelix',
'gnuplot', 'gnuplot2', 'ocean', 'rainbow',
'terrain', 'flag', 'prism'])]
nrows = max(len(cmap_list) for cmap_category, cmap_list in cmaps)
gradient = np.linspace(0, 1, 256)
gradient = np.vstack((gradient, gradient))
def plot_color_gradients(cmap_category, cmap_list):
fig, axes = plt.subplots(nrows=nrows)
fig.subplots_adjust(top=0.95, bottom=0.01, left=0.2, right=0.99)
axes[0].set_title(cmap_category + ' colormaps', fontsize=14)
for ax, name in zip(axes, cmap_list):
ax.imshow(gradient, aspect='auto', cmap=plt.get_cmap(name))
pos = list(ax.get_position().bounds)
x_text = pos[0] - 0.01
y_text = pos[1] + pos[3]/2.
fig.text(x_text, y_text, name, va='center', ha='right', fontsize=10)
# Turn off *all* ticks & spines, not just the ones with colormaps.
for ax in axes:
ax.set_axis_off()
for cmap_category, cmap_list in cmaps:
plot_color_gradients(cmap_category, cmap_list)
plt.show()
fig, (ax1, ax2) = plt.subplots(1, 2)
z = np.random.random((10, 10))
ax1.imshow(z, interpolation='none', cmap='gray')
ax2.imshow(z, interpolation='none', cmap='coolwarm')
plt.show()
plt.scatter([1, 2, 3, 4], [4, 3, 2, 1])
plt.title(r'$\sigma_i=15$', fontsize=20)
plt.show()
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
plt.plot(t, s, lw=2)
plt.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict(facecolor='black', shrink=0.05))
plt.ylim(-2, 2)
plt.show()
import matplotlib.patches as mpatches
styles = mpatches.ArrowStyle.get_styles()
ncol = 2
nrow = (len(styles)+1) // ncol
figheight = (nrow+0.5)
fig = plt.figure(figsize=(4.0*ncol/0.85, figheight/0.85))
fontsize = 0.4 * 70
ax = fig.add_axes([0, 0, 1, 1])
ax.set_xlim(0, 4*ncol)
ax.set_ylim(0, figheight)
def to_texstring(s):
s = s.replace("<", r"$<$")
s = s.replace(">", r"$>$")
s = s.replace("|", r"$|$")
return s
for i, (stylename, styleclass) in enumerate(sorted(styles.items())):
x = 3.2 + (i//nrow)*4
y = (figheight - 0.7 - i%nrow)
p = mpatches.Circle((x, y), 0.2, fc="w")
ax.add_patch(p)
ax.annotate(to_texstring(stylename), (x, y),
(x-1.2, y),
ha="right", va="center",
size=fontsize,
arrowprops=dict(arrowstyle=stylename,
patchB=p,
shrinkA=50,
shrinkB=5,
fc="w", ec="k",
connectionstyle="arc3,rad=-0.25",
),
bbox=dict(boxstyle="square", fc="w"))
ax.set_axis_off()
plt.show()
%load exercises/3.4-arrows.py
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
plt.plot(t, s, lw=2)
plt.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict())
plt.ylim(-2, 2)
plt.show()
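# One possible arrow style for the exercise above (a sketch; any entry from the
# arrow-style table works):
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
plt.plot(t, s, lw=2)
plt.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=-0.3'))
plt.ylim(-2, 2)
plt.show()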
bars = plt.bar([1, 2, 3, 4], [10, 12, 15, 17])
plt.setp(bars[0], hatch='x', facecolor='w')
plt.setp(bars[1], hatch='xx-', facecolor='orange')
plt.setp(bars[2], hatch='+O.', facecolor='c')
plt.setp(bars[3], hatch='*', facecolor='y')
plt.show()
import matplotlib
print(matplotlib.matplotlib_fname())
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcdefaults() # for when re-running this cell
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([1, 2, 3, 4])
mpl.rc('lines', linewidth=2, linestyle='-.')
# Equivalent older, but still valid syntax
#mpl.rcParams['lines.linewidth'] = 2
#mpl.rcParams['lines.linestyle'] = '-.'
ax2.plot([1, 2, 3, 4])
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Markers
Step2: Exercise 3.2
Step3: Linestyles
Step4: It is a bit confusing, but the line styles mentioned above are only valid for lines. Whenever you are dealing with the linestyles of the edges of "Patch" objects, you will need to use words instead of the symbols. So "solid" instead of "-", and "dashdot" instead of "-.". This issue will be fixed for the v2.1 release and allow these specifications to be used interchangably.
Step5: Plot attributes
Step6: | Property | Value Type
Step8: Colormaps
Step9: When colormaps are created in mpl, they get "registered" with a name. This allows one to specify a colormap to use by name.
Step10: Mathtext
Step11: Annotations and Arrows
Step12: There are all sorts of boxes for your text, and arrows you can use, and there are many different ways to connect the text to the point that you want to annotate. For a complete tutorial on this topic, go to the Annotation Guide. In the meantime, here is a table of the kinds of arrows that can be drawn
Step13: Exercise 3.4
Step14: Hatches
Step15: Transforms
Step16: You can also change the rc settings during runtime within a python script or interactively from the python shell. All of the rc settings are stored in a dictionary-like variable called matplotlib.rcParams, which is global to the matplotlib package. rcParams can be modified directly. Newer versions of matplotlib can use rc(), for example
|
15,444 | <ASSISTANT_TASK:>
Python Code:
!pip install ott-jax
!pip install POT
# import JAX and OTT
import jax
import jax.numpy as jnp
import ott
from ott.geometry import pointcloud
from ott.core import sinkhorn
# import OT, from POT
import numpy as np
import ot
# misc
import matplotlib.pyplot as plt
plt.rc('font', size = 20)
import mpl_toolkits.axes_grid1
import timeit
def solve_ot(a, b, x, y, 𝜀, threshold):
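# Wrapper around POT's stabilized Sinkhorn solver: returns centered dual
# potentials (f, g) and the regularized OT objective, or NaN for the objective
# when the stopping threshold was not reached.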
_, log = ot.sinkhorn(a, b, ot.dist(x,y), 𝜀, stopThr=threshold,
method='sinkhorn_stabilized', log=True,
numItermax=1000)
f, g = 𝜀 * log['logu'], 𝜀 * log['logv']
f, g = f - np.mean(f), g + np.mean(f) # center variables, useful if one wants to compare them
reg_ot = np.sum(f * a) + np.sum(g * b) if log['err'][-1] < threshold else np.nan
return f, g, reg_ot
@jax.jit
def solve_ott(a, b, x, y, 𝜀, threshold):
out = sinkhorn.sinkhorn(pointcloud.PointCloud(x, y, epsilon=𝜀),
a, b, threshold=threshold, lse_mode=True, jit=False,
max_iterations=1000)
f, g = out.f, out.g
f, g = f - np.mean(f), g + np.mean(f) # center variables, useful if one wants to compare them
reg_ot = jnp.where(out.converged, jnp.sum(f * a) + jnp.sum(g * b), jnp.nan)
return f, g, reg_ot
dim = 3
def run_simulation(rng, n, 𝜀, threshold, solver_spec):
# setting global variables helps avoir a timeit bug.
global solver_
global a, b, x , y
# extract specificities of solver.
solver_, env, name = solver_spec
# draw data at random using JAX
rng, *rngs = jax.random.split(rng, 5)
x = jax.random.uniform(rngs[0], (n, dim))
y = jax.random.uniform(rngs[1], (n, dim)) + 0.1
a = jax.random.uniform(rngs[2], (n,))
b = jax.random.uniform(rngs[3], (n,))
a = a / jnp.sum(a)
b = b / jnp.sum(b)
# map to numpy if needed
if env == 'np':
a, b, x, y = map(np.array,(a, b, x, y))
timeit_res = %timeit -o solver_(a, b, x, y, 𝜀, threshold)
out = solver_(a, b, x, y, 𝜀, threshold)
exec_time = np.nan if np.isnan(out[-1]) else timeit_res.best
return exec_time, out
POT = (solve_ot, 'np', 'POT')
OTT = (solve_ott, 'jax', 'OTT')
rng = jax.random.PRNGKey(0)
solvers = (POT, OTT)
n_range = 2 ** np.arange(8, 13)
𝜀_range = 10 ** np.arange(-2.0, 0.0)
threshold = 1e-2
exec_time = {}
reg_ot = {}
for solver_spec in solvers:
solver, env, name = solver_spec
print('----- ', name)
exec_time[name] = np.ones((len(n_range), len(𝜀_range))) * np.nan
reg_ot[name] = np.ones((len(n_range), len(𝜀_range))) * np.nan
for i, n in enumerate(n_range):
for j, 𝜀 in enumerate(𝜀_range):
exec, out = run_simulation(rng, n, 𝜀, threshold, solver_spec)
exec_time[name][i, j] = exec
reg_ot[name][i, j] = out[-1]
list_legend = []
fig = plt.figure(figsize=(14,8))
for solver_spec, marker, col in zip(solvers,('p','o'), ('blue','red')):
solver, env, name = solver_spec
p = plt.plot(exec_time[name], marker=marker, color=col,
markersize=16, markeredgecolor='k', lw=3)
p[0].set_linestyle('dotted')
p[1].set_linestyle('solid')
list_legend += [name + r' $\varepsilon $=' + "{:.2g}".format(𝜀) for 𝜀 in 𝜀_range]
plt.xticks(ticks=np.arange(len(n_range)), labels=n_range)
plt.legend(list_legend)
plt.yscale('log')
plt.xlabel('dimension $n$')
plt.ylabel('time (s)')
plt.title(r'Execution Time vs Dimension for OTT and POT for two $\varepsilon$ values')
plt.show()
fig = plt.figure(figsize=(12,8))
ax = plt.gca()
im = ax.imshow(reg_ot['OTT'].T - reg_ot['POT'].T)
plt.xticks(ticks=np.arange(len(n_range)), labels=n_range)
plt.yticks(ticks=np.arange(len(𝜀_range)), labels=𝜀_range)
plt.xlabel('dimension $n$')
plt.ylabel(r'regularization $\varepsilon$')
plt.title('Gap in objective, >0 when OTT is better')
divider = mpl_toolkits.axes_grid1.make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
plt.colorbar(im, cax=cax)
plt.show()
for name in ('POT','OTT'):
print('----', name)
print('Objective')
print(reg_ot[name])
print('Execution Time')
print(exec_time[name])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ... and import them, along with their numerical environments, jax and numpy.
Step2: Regularized OT in a nutshell
Step3: To test both solvers, we run simulations using a random seed to generate random point clouds of size $n$. Random generation is carried out using jax.random, to ensure reproducibility. A solver provides three pieces of info
Step4: Defines the two solvers used in this experiment
Step5: Runs simulations with varying $n$ and $\varepsilon$
Step6: Plots results in terms of time and difference in objective.
Step7: For good measure, we also show the differences in objectives between the two solvers. We subtract the objective returned by POT from that returned by OTT.
|
15,445 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@test {"skip": true}
!pip install tensorflow-lattice pydot
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
logging.disable(sys.maxsize)
# UCI Statlog (Heart) dataset.
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
training_data_df = pd.read_csv(csv_file).sample(
frac=1.0, random_state=41).reset_index(drop=True)
training_data_df.head()
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 100
# Lattice layer expects input[i] to be within [0, lattice_sizes[i] - 1.0], so the
# calibrators below are configured to output values in that range.
lattice_sizes = [3, 2, 2, 2, 2, 2, 2]
combined_calibrators = tfl.layers.ParallelCombination()
# ############### age ###############
calibrator = tfl.layers.PWLCalibration(
# Every PWLCalibration layer must have keypoints of piecewise linear
# function specified. Easiest way to specify them is to uniformly cover
# entire input range by using numpy.linspace().
input_keypoints=np.linspace(
training_data_df['age'].min(), training_data_df['age'].max(), num=5),
# You need to ensure that input keypoints have same dtype as layer input.
# You can do it by setting dtype here or by providing keypoints in such
# format which will be converted to desired tf.dtype by default.
dtype=tf.float32,
# Output range must correspond to expected lattice input range.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
)
combined_calibrators.append(calibrator)
# ############### sex ###############
# For boolean features simply specify CategoricalCalibration layer with 2
# buckets.
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
# Initializes all outputs to (output_min + output_max) / 2.0.
kernel_initializer='constant')
combined_calibrators.append(calibrator)
# ############### cp ###############
calibrator = tfl.layers.PWLCalibration(
# Here instead of specifying dtype of layer we convert keypoints into
# np.float32.
input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32),
output_min=0.0,
output_max=lattice_sizes[2] - 1.0,
monotonicity='increasing',
# You can specify TFL regularizers as a tuple ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4))
combined_calibrators.append(calibrator)
# ############### trestbps ###############
calibrator = tfl.layers.PWLCalibration(
# Alternatively, you might want to use quantiles as keypoints instead of
# uniform keypoints
input_keypoints=np.quantile(training_data_df['trestbps'],
np.linspace(0.0, 1.0, num=5)),
dtype=tf.float32,
# Together with quantile keypoints you might want to initialize piecewise
# linear function to have 'equal_slopes' in order for output of layer
# after initialization to preserve original distribution.
kernel_initializer='equal_slopes',
output_min=0.0,
output_max=lattice_sizes[3] - 1.0,
# You might consider clamping extreme inputs of the calibrator to output
# bounds.
clamp_min=True,
clamp_max=True,
monotonicity='increasing')
combined_calibrators.append(calibrator)
# ############### chol ###############
calibrator = tfl.layers.PWLCalibration(
# Explicit input keypoint initialization.
input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
dtype=tf.float32,
output_min=0.0,
output_max=lattice_sizes[4] - 1.0,
# Monotonicity of calibrator can be decreasing. Note that corresponding
# lattice dimension must have INCREASING monotonicity regardless of
# monotonicity direction of calibrator.
monotonicity='decreasing',
# Convexity together with decreasing monotonicity result in diminishing
# return constraint.
convexity='convex',
# You can specify list of regularizers. You are not limited to TFL
# regularizrs. Feel free to use any :)
kernel_regularizer=[('laplacian', 0.0, 1e-4),
tf.keras.regularizers.l1_l2(l1=0.001)])
combined_calibrators.append(calibrator)
# ############### fbs ###############
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[5] - 1.0,
# For categorical calibration layer monotonicity is specified for pairs
# of indices of categories. Output for first category in pair will be
# smaller than output for second category.
#
# Don't forget to set monotonicity of corresponding dimension of Lattice
# layer to '1'.
monotonicities=[(0, 1)],
# This initializer is identical to default one('uniform'), but has fixed
# seed in order to simplify experimentation.
kernel_initializer=tf.keras.initializers.RandomUniform(
minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1))
combined_calibrators.append(calibrator)
# ############### restecg ###############
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[6] - 1.0,
# Categorical monotonicity can be partial order.
monotonicities=[(0, 1), (0, 2)],
# Categorical calibration layer supports standard Keras regularizers.
kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001),
kernel_initializer='constant')
combined_calibrators.append(calibrator)
lattice = tfl.layers.Lattice(
lattice_sizes=lattice_sizes,
monotonicities=[
'increasing', 'none', 'increasing', 'increasing', 'increasing',
'increasing', 'increasing'
],
output_min=0.0,
output_max=1.0)
model = tf.keras.models.Sequential()
model.add(combined_calibrators)
model.add(lattice)
features = training_data_df[[
'age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg'
]].values.astype(np.float32)
target = training_data_df[['target']].values.astype(np.float32)
model.compile(
loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.Adagrad(learning_rate=LEARNING_RATE))
model.fit(
features,
target,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_split=0.2,
shuffle=False,
verbose=0)
model.evaluate(features, target)
# We are going to have 2-d embedding as one of lattice inputs.
lattice_sizes = [3, 2, 2, 3, 3, 2, 2]
model_inputs = []
lattice_inputs = []
# ############### age ###############
age_input = tf.keras.layers.Input(shape=[1], name='age')
model_inputs.append(age_input)
age_calibrator = tfl.layers.PWLCalibration(
# Every PWLCalibration layer must have keypoints of piecewise linear
# function specified. Easiest way to specify them is to uniformly cover
# entire input range by using numpy.linspace().
input_keypoints=np.linspace(
training_data_df['age'].min(), training_data_df['age'].max(), num=5),
# You need to ensure that input keypoints have same dtype as layer input.
# You can do it by setting dtype here or by providing keypoints in such
# format which will be converted to desired tf.dtype by default.
dtype=tf.float32,
# Output range must correspond to expected lattice input range.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
monotonicity='increasing',
name='age_calib',
)(
age_input)
lattice_inputs.append(age_calibrator)
# ############### sex ###############
# For boolean features simply specify CategoricalCalibration layer with 2
# buckets.
sex_input = tf.keras.layers.Input(shape=[1], name='sex')
model_inputs.append(sex_input)
sex_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
# Initializes all outputs to (output_min + output_max) / 2.0.
kernel_initializer='constant',
name='sex_calib',
)(
sex_input)
lattice_inputs.append(sex_calibrator)
# ############### cp ###############
cp_input = tf.keras.layers.Input(shape=[1], name='cp')
model_inputs.append(cp_input)
cp_calibrator = tfl.layers.PWLCalibration(
# Here instead of specifying dtype of layer we convert keypoints into
# np.float32.
input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32),
output_min=0.0,
output_max=lattice_sizes[2] - 1.0,
monotonicity='increasing',
# You can specify TFL regularizers as tuple ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4),
name='cp_calib',
)(
cp_input)
lattice_inputs.append(cp_calibrator)
# ############### trestbps ###############
trestbps_input = tf.keras.layers.Input(shape=[1], name='trestbps')
model_inputs.append(trestbps_input)
trestbps_calibrator = tfl.layers.PWLCalibration(
# Alternatively, you might want to use quantiles as keypoints instead of
# uniform keypoints
input_keypoints=np.quantile(training_data_df['trestbps'],
np.linspace(0.0, 1.0, num=5)),
dtype=tf.float32,
# Together with quantile keypoints you might want to initialize piecewise
# linear function to have 'equal_slopes' in order for output of layer
# after initialization to preserve original distribution.
kernel_initializer='equal_slopes',
output_min=0.0,
output_max=lattice_sizes[3] - 1.0,
# You might consider clamping extreme inputs of the calibrator to output
# bounds.
clamp_min=True,
clamp_max=True,
monotonicity='increasing',
name='trestbps_calib',
)(
trestbps_input)
lattice_inputs.append(trestbps_calibrator)
# ############### chol ###############
chol_input = tf.keras.layers.Input(shape=[1], name='chol')
model_inputs.append(chol_input)
chol_calibrator = tfl.layers.PWLCalibration(
# Explicit input keypoint initialization.
input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
output_min=0.0,
output_max=lattice_sizes[4] - 1.0,
# Monotonicity of calibrator can be decreasing. Note that corresponding
# lattice dimension must have INCREASING monotonicity regardless of
# monotonicity direction of calibrator.
monotonicity='decreasing',
# Convexity together with decreasing monotonicity result in diminishing
# return constraint.
convexity='convex',
# You can specify list of regularizers. You are not limited to TFL
# regularizrs. Feel free to use any :)
kernel_regularizer=[('laplacian', 0.0, 1e-4),
tf.keras.regularizers.l1_l2(l1=0.001)],
name='chol_calib',
)(
chol_input)
lattice_inputs.append(chol_calibrator)
# ############### fbs ###############
fbs_input = tf.keras.layers.Input(shape=[1], name='fbs')
model_inputs.append(fbs_input)
fbs_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[5] - 1.0,
# For categorical calibration layer monotonicity is specified for pairs
# of indices of categories. Output for first category in pair will be
# smaller than output for second category.
#
# Don't forget to set monotonicity of corresponding dimension of Lattice
# layer to '1'.
monotonicities=[(0, 1)],
# This initializer is identical to default one ('uniform'), but has fixed
# seed in order to simplify experimentation.
kernel_initializer=tf.keras.initializers.RandomUniform(
minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1),
name='fbs_calib',
)(
fbs_input)
lattice_inputs.append(fbs_calibrator)
# ############### restecg ###############
restecg_input = tf.keras.layers.Input(shape=[1], name='restecg')
model_inputs.append(restecg_input)
restecg_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[6] - 1.0,
# Categorical monotonicity can be partial order.
monotonicities=[(0, 1), (0, 2)],
# Categorical calibration layer supports standard Keras regularizers.
kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001),
kernel_initializer='constant',
name='restecg_calib',
)(
restecg_input)
lattice_inputs.append(restecg_calibrator)
lattice = tfl.layers.Lattice(
lattice_sizes=lattice_sizes,
monotonicities=[
'increasing', 'none', 'increasing', 'increasing', 'increasing',
'increasing', 'increasing'
],
output_min=0.0,
output_max=1.0,
name='lattice',
)(
lattice_inputs)
model_output = tfl.layers.PWLCalibration(
input_keypoints=np.linspace(0.0, 1.0, 5),
name='output_calib',
)(
lattice)
model = tf.keras.models.Model(
inputs=model_inputs,
outputs=model_output)
tf.keras.utils.plot_model(model, rankdir='LR')
feature_names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg']
features = np.split(
training_data_df[feature_names].values.astype(np.float32),
indices_or_sections=len(feature_names),
axis=1)
target = training_data_df[['target']].values.astype(np.float32)
model.compile(
loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.Adagrad(LEARNING_RATE))
model.fit(
features,
target,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_split=0.2,
shuffle=False,
verbose=0)
model.evaluate(features, target)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating Keras models with TFL layers
Step2: Import the required packages.
Step3: Download the UCI Statlog (Heart) dataset.
Step4: Set the default values used for training in this guide.
Step5: Sequential Keras model
Step6: Use a <code>tfl.layers.ParallelCombination</code> layer to group together the calibration layers that have to run in parallel in order to build a Sequential model.
Step7: Create a calibration layer for each feature and add it to the parallel combination layer. Use tfl.layers.PWLCalibration for numeric features and tfl.layers.CategoricalCalibration for categorical features.
Step8: Next, create a lattice layer that nonlinearly fuses the outputs of the calibrators.
Step9: Combine the calibrators and the lattice layer to create the Sequential model.
Step10: Training works just like for any other Keras model.
Step11: Functional Keras model
Step12: You need to create an input layer for each feature and then a calibration layer. Use tfl.layers.PWLCalibration for numeric features and tfl.layers.CategoricalCalibration for categorical features.
Step13: Next, create a lattice layer that nonlinearly fuses the outputs of the calibrators.
Step14: Add an output calibration layer to give the model more flexibility.
Step15: The model can now be created from the inputs and outputs.
Step16: Training works just like for any other Keras model. Note that with this setup, the input features are passed as separate tensors.
|
15,446 | <ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
def read_data(filename):
"""Extract the first file enclosed in a zip file as a list of words"""
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filename)
print('Data size %d' % len(words))
vocabulary_size = 50000
def build_dataset(words):
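# Build the dataset: keep the vocabulary_size-1 most frequent words, assign each
# an integer id, and map every remaining word to id 0 ('UNK').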
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index_into_count = dictionary[word]
else:
index_into_count = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index_into_count)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words # Hint to reduce memory.
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
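# Skip-gram batch generator: for each center word emit `num_skips`
# (center, context) pairs, with context words drawn at random from a window of
# `skip_window` words on either side; `data_index` tracks the position in the corpus.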
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]])
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 0
batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
# Note: The optimizer will optimize the softmax_weights AND the embeddings.
# This is because the embeddings are defined as a variable quantity and the
# optimizer's `minimize` method will by default modify all variable quantities
# that contribute to the tensor it is passed.
# See docs on `tf.train.Optimizer.minimize()` for more details.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
# why euclidean distance here, and not cosine?
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
data_index = 0
def generate_batch(batch_size, skip_window):
assert skip_window == 1 # Handling of this value is hard-coded here.
global data_index
batch = np.ndarray(shape=(batch_size, 2), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2*skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size):
target = skip_window # target label at the center of the buffer
batch[i, 0] = buffer[skip_window-1]
batch[i, 1] = buffer[skip_window+1]
labels[i, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]])
for skip_window in [1]:
data_index = 0
batch, labels = generate_batch(batch_size=8, skip_window=skip_window)
print('\nwith skip_window = %d:' % skip_window)
print(' batch:', [[reverse_dictionary[m] for m in bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
span = 2*skip_window + 1 # [ skip_window target skip_window ]
train_dataset = tf.placeholder(tf.int32, shape=[batch_size, (span-1)])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
assert skip_window == 1 # Handling of this value is hard-coded here.
embed0 = tf.nn.embedding_lookup(embeddings, train_dataset[:,0])
embed1 = tf.nn.embedding_lookup(embeddings, train_dataset[:,1])
embed = (embed0 + embed1)/(span-1)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Download the data from the source website if necessary.
Step4: Read the data into a string.
Step5: Build the dictionary and replace rare words with UNK token.
Step6: Function to generate a training batch for the skip-gram model.
Step7: Train a skip-gram model.
Step8: Problem
|
15,447 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
csvframe=pd.read_csv('myCSV_01.csv')
csvframe
# The data can also be read with read_table
pd.read_table('myCSV_01.csv',sep=',')
pd.read_csv('myCSV_02.csv',header=None)
pd.read_csv('myCSV_02.csv',names=['white','red','blue','green','animal'])
pd.read_csv('myCSV_03.csv',index_col=['colors','status'])
pd.read_csv('myCSV_04.csv',sep='\s+')
pd.read_csv('myCSV_05.csv',sep='\D*',header=None,engine='python')
pd.read_table('myCSV_06.csv',sep=',',skiprows=[0,1,3,6])
pd.read_csv('myCSV_02.csv',skiprows=[2],nrows=3,header=None)
out = pd.Series()
i=0
pieces = pd.read_csv('myCSV_01.csv',chunksize=3)
for piece in pieces:
print piece
out.set_value(i,piece['white'].sum())
i += 1
out
frame = pd.DataFrame(np.arange(4).reshape((2,2)))
print frame.to_html()
frame = pd.DataFrame(np.random.random((4,4)),
index=['white','black','red','blue'],
columns=['up','down','left','right'])
frame
s = ['<HTML>']
s.append('<HEAD><TITLE>MY DATAFRAME</TITLE></HEAD>')
s.append('<BODY>')
s.append(frame.to_html())
s.append('</BODY></HTML>')
html=''.join(s)
with open('myFrame.html','w') as html_file:
html_file.write(html)
web_frames = pd.read_html('myFrame.html')
web_frames[0]
# 以网址作为参数
ranking = pd.read_html('http://www.meccanismocomplesso.org/en/meccanismo-complesso-sito-2/classifica-punteggio/')
ranking[0]
from lxml import objectify
xml = objectify.parse('books.xml')
xml
root =xml.getroot()
root.Book.Author
root.Book.PublishDate
root.getchildren()
[child.tag for child in root.Book.getchildren()]
[child.text for child in root.Book.getchildren()]
def etree2df(root):
column_names=[]
for i in range(0,len(root.getchildren()[0].getchildren())):
column_names.append(root.getchildren()[0].getchildren()[i].tag)
xml_frame = pd.DataFrame(columns=column_names)
for j in range(0,len(root.getchildren())):
obj = root.getchildren()[j].getchildren()
texts = []
for k in range(0,len(column_names)):
texts.append(obj[k].text)
row = dict(zip(column_names,texts))
row_s=pd.Series(row)
row_s.name=j
xml_frame = xml_frame.append(row_s)
return xml_frame
etree2df(root)
pd.read_excel('data.xlsx')
pd.read_excel('data.xlsx','Sheet2')
frame = pd.DataFrame(np.random.random((4,4)),
index=['exp1','exp2','exp3','exp4'],
columns=['Jan2015','Feb2015','Mar2015','Apr2015'])
frame
frame.to_excel('data2.xlsx')
frame = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['white','black','red','blue'],
columns=['up','down','right','left'])
frame.to_json('frame.json')
# Read the JSON file
pd.read_json('frame.json')
from pandas.io.pytables import HDFStore
store = HDFStore('mydata.h5')
store['obj1']=frame
store['obj1']
frame.to_pickle('frame.pkl')
pd.read_pickle('frame.pkl')
frame=pd.DataFrame(np.arange(20).reshape((4,5)),
columns=['white','red','blue','black','green'])
frame
from sqlalchemy import create_engine
enegine=create_engine('sqlite:///foo.db')
frame.to_sql('colors',enegine)
pd.read_sql('colors',enegine)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading data that has no header
Step2: You can specify the header
Step3: Create a DataFrame object with a hierarchical structure by adding the index_col option; data file format
Step4: Parsing a TXT file with a regexp
Step5: Reading data separated by letters
Step6: Reading a text file while skipping some unnecessary lines
Step7: Reading part of the data from a TXT file
Step8: Example:
Step9: Writing to a file
Step10: Creating a more complex DataFrame
Step11: Reading HTML tables
Step12: Reading and writing XML files
Step13: Reading and writing Excel files
Step14: JSON data
Step15: HDF5 data
Step16: pickle data
Step17: Database connection
|
15,448 | <ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
data_dir = "data/"
training_file = data_dir + "train.p"
validation_file = data_dir + "valid.p"
testing_file = data_dir + "test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).size
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import mpl_toolkits.axes_grid1.inset_locator as insetLoc
from mpl_toolkits.axes_grid1 import ImageGrid
import scipy.stats as stats
import pandas as pd
import csv
# Visualizations will be shown in the notebook.
%matplotlib inline
y_all = np.concatenate([y_train, y_valid, y_test])
# Prepare a figure to display data summary
fig, axes = plt.subplots(1, 5, figsize=(15, 20), sharey = 'all')
plot_data = [X_test, y_all, y_train, y_valid, y_test]
titles = ['Examples', 'All', 'Training', 'Validation', 'Test']
classes, class_indices = np.unique(y_train, return_inverse = True)
# Prepare grid for display of class examples in first subplot
class_labels = [];
with open('signnames.csv', newline='\n') as csvfile:
nameReader = csv.reader(csvfile, delimiter=',')
for row in nameReader:
class_labels.append(row[1])
class_labels = class_labels[1:]
axes[0].set_xticks([])
axes[0].set_title(titles[0])
n_examples = 5
grid = ImageGrid(fig, 151,
nrows_ncols=(n_classes, n_examples),
axes_pad=0.025)
# Get indices of class examples
class_examples = []
for i in range(n_classes):
example_indices = np.where(y_test == classes[i])
class_examples.extend(example_indices[0][0:n_examples])
class_examples.reverse()
# Display class examples
for i in range(n_classes*n_examples):
grid[i].imshow(plot_data[0][class_examples[i]])
grid[i].axis('off')
grid[i].set_xticks([])
grid[i].set_yticks([])
# Display histogram for each data set and compare distributions
for i in range(1, len(axes)):
arr = axes[i].hist(plot_data[i], bins = range(n_classes+1), normed = 1,
orientation = 'horizontal', rwidth = 0.95,
)
axes[i].set_title(titles[i])
for j in range(len(arr[0])):
axes[i].text(arr[0][j],arr[1][j]+0.5,"{:.0f}".format(arr[0][j]*len(plot_data[i])))
if i > 1:
observed = np.bincount(plot_data[i])/len(plot_data[i])
expected = np.bincount(y_all)/len(y_all)
chisq, p = stats.chisquare(f_obs= observed, f_exp= expected)
axes[i].set_xlabel('Proportion of Set\n(chisq = ' + str(chisq) + ',\np = ' + str(p) + ')')
axes[i].set_ylim((0, 42))
axes[i].set_yticks([])
axes[1].set_xlabel('Proportion of Samples')
axes[0].set_ylim(0, 43)
axes[0].set_yticks(np.arange(0, n_classes)+0.5)
axes[0].set_yticklabels(class_labels)
print(' ') # for some reason this suppresses output from somewhere else
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import cv2
def preprocess_images(images):
# Convert image color to Lab
images = [cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) for image in images]
# Image correction
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
#for i in range(len(images)):
# l, a, b = cv2.split(images[i])
# l = clahe.apply(l)
# images[i] = cv2.merge((l, a, b))
images = [clahe.apply(image) for image in images]
# Convert image color back to RGB
#images = [cv2.cvtColor(image, cv2.COLOR_LAB2RGB) for image in images]
# Add third (singleton) dimension
images = [np.expand_dims(image, axis = 2) for image in images]
# Normalize values for input to range [-1, 1]
images = np.divide(np.subtract(np.array(images, dtype = 'float'), 128), 128)
return images
def show_examples(images, labels, n_examples):
# Select n_examples examples from each class
classes = np.unique(labels)
class_examples_indices = []
for i in range(len(classes)):
example_indices = np.where(labels == classes[i])
random_select = np.random.randint(0, len(example_indices[0]))
class_examples_indices.append(example_indices[0][random_select])
class_examples = [images[index] for index in class_examples_indices]
# Display class examples
fig, axes = plt.subplots(1, n_examples, figsize=(4, 4*n_examples))
for i in range(0, n_examples):
axes[i].imshow(class_examples[i])
axes[i].axis('off')
axes[i].set_xticks([])
axes[i].set_yticks([])
# Apply image preprocessing but reverse final normalization for viewing images
class_examples = preprocess_images(class_examples)
class_examples = np.multiply(np.add(np.array(class_examples, dtype = 'float'), 128), 128)
# Display preprocessed class examples
fig, axes = plt.subplots(1, n_examples, figsize=(4, 4*n_examples))
for i in range(0, n_examples):
axes[i].imshow(np.squeeze(class_examples[i], axis = 2), cmap = "gray")
axes[i].axis('off')
axes[i].set_xticks([])
axes[i].set_yticks([])
return
# Preprocess data
X_train_preprocessed = preprocess_images(X_train)
X_valid_preprocessed = preprocess_images(X_valid)
X_test_preprocessed = preprocess_images(X_test)
# Display some examples
show_examples(X_test, y_test, 5)
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def CNN(x, keep_prob):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x10.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 10), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(10))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation.
conv1 = tf.nn.relu(conv1)
# Dropout
conv1 = tf.nn.dropout(conv1, keep_prob)
# Pooling. Input = 28x28x10. Output = 14x14x10.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 10, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Activation.
conv2 = tf.nn.relu(conv2)
# Dropout
conv2 = tf.nn.dropout(conv2, keep_prob)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 84. Output = n_classes.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from sklearn.utils import shuffle
EPOCHS = 10
BATCH_SIZE = 128
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32, (None))
one_hot_y = tf.one_hot(y, n_classes)
rate = 0.001
logits = CNN(x, keep_prob)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train_preprocessed)
print("Training...")
print()
for i in range(EPOCHS):
print("EPOCH {} ...".format(i+1))
X_train_preprocessed, y_train = shuffle(X_train_preprocessed, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_preprocessed[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.6})
validation_accuracy = evaluate(X_valid_preprocessed, y_valid)
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './cnn')
print("Model saved")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test_preprocessed, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import glob
import math
# Read images
web_images = []
for file in glob.glob("web_images\*.jpg"):
image = cv2.cvtColor(cv2.imread(file), cv2.COLOR_BGR2RGB)
web_images.append(np.array(image))
# Show raw images
fig, axes = plt.subplots(1, len(web_images), figsize=(8, 48), dpi = 80)
for i in range(len(web_images)):
axes[i].imshow(web_images[i])
axes[i].set_xlabel(str(web_images[i].shape))
axes[i].set_xticks([])
axes[i].set_yticks([])
# Resize/pad images for input
desired_size = (32, 32)
for i in range(len(web_images)):
image = web_images[i]
scale = min(desired_size[0]/image.shape[0], desired_size[1]/image.shape[1])
image = cv2.resize(image, None, image, scale, scale, interpolation = cv2.INTER_AREA)
im_shape = image.shape
x_pad = (desired_size[0] - im_shape[0])/2
y_pad = (desired_size[1] - im_shape[1])/2
left_pad = math.floor(x_pad)
right_pad = math.ceil(x_pad)
top_pad = math.floor(y_pad)
bottom_pad = math.ceil(y_pad)
image = image.transpose(2, 0, 1)
new_image = np.zeros((3,)+desired_size, dtype = "uint8")
for j in range(3):
new_image[j] = np.pad(image[j], ((left_pad,right_pad),(top_pad,bottom_pad)), mode = "edge")
web_images[i] = new_image.transpose(1, 2, 0)
# Show images after resizing/padding
fig, axes = plt.subplots(1, len(web_images), figsize=(8, 48), dpi = 80)
for i in range(len(web_images)):
axes[i].imshow(web_images[i])
axes[i].set_xlabel(str(web_images[i].shape))
axes[i].set_xticks([])
axes[i].set_yticks([])
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
# Create input images (preprocessed) and expected labels
X_web = preprocess_images(web_images)
labels_web = ["General caution", "No entry", "Road work", "Stop", "Yield"]
y_web = []
for i in range(len(labels_web)):
y_web.append(class_labels.index(labels_web[i]))
expected = [class_labels[index] for index in y_web]
print("Input classes: ", y_web)
print("Expected labels: ", expected)
# Predict image labels
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
output = sess.run(tf.argmax(logits, 1), feed_dict={x: X_web, y: y_web, keep_prob: 1.0})
predictions = [class_labels[index] for index in output]
print("Output classes: ", output)
print("Predicted labels: ", predictions)
sess.close()
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
accuracy = sess.run(accuracy_operation, feed_dict={x: X_web, y: y_web, keep_prob: 1.0})
print("Web test accuracy = {:.3f}".format(accuracy))
sess.close()
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
top_probs = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=5), feed_dict={x: X_web, y: y_web, keep_prob: 1.0})
print("Top softmax probabilities:\n", top_probs.values)
print("Top classes: ", top_probs.indices)
sess.close()
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
print(activation.shape)
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
outputFeatureMap(X_web, conv2)
sess.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Step3: Step 2
Step4: Model Architecture
Step5: Train, Validate and Test the Model
Step6: Step 3
Step7: Predict the Sign Type for Each Image
Step8: Analyze Performance
Step9: Output Top 5 Softmax Probabilities For Each Image Found on the Web
Step10: Project Writeup
|
15,449 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# Import the pandas and numpy modules with the conventional aliases
from pandas import Series, DataFrame
# Create a Series from an ndarray
s = pd.Series(np.arange(0,5), index=['a', 'b', 'c', 'd', 'e'])
s
s.index
type(s)
s0 = Series(np.random.random(10)) # creates a Series with 10 random values
s0
# Create a Series from a dictionary
d = {'a':100,'b':124,'c':700,'d':430,'e':400}
s1 = pd.Series(d)
s1
# Create a Series from a scalar value
s2 = pd.Series(23., s1.index)
s2
# Access the values of the Series
print(s2.index)
print(s2.values)
lpaises=['Venezuela','República Dominicana','India','Guatemala','Alemania','Arabia Saudita','Argentina','Francia','Filipinas','Indonesia','Haití','México','Nicaragua']
lpoblacion=[31108.08,10528.39,1311050.53,16342.90,81413.15,31540.37,43416.75,66808.38,100699.40,257563.82,10711.07,127017.22,6082.03]
seriepp=Series(lpoblacion,lpaises)
seriepp['India'] # select the index India
# See which of the selected countries exceed 50 million people
seriepp[seriepp>50000]
seried = seriepp.to_dict() # convert into a dictionary
seried
seriepp.name = 'Población algunos paises'
seriepp['India']
seriepp[['India','Aruba','Venezuela']] # Select from a list; if an element is not present it becomes NaN
'México' in seriepp # Check whether México is in the Series
'Estados Unidos' in seriepp # Check whether Estados Unidos is in the Series
seriepp.append(Series({'Ginea':12608.59}))
seriepp=seriepp.append(Series({'Ginea':12608.59})) # Add an element to the Series
seriepp
seriepp=seriepp.append(Series({'Otro Nación':np.NAN}))
seriepp
seriepp.notnull() # returns a boolean Series identifying the values that are not null
# Create a DataFrame from a dictionary
AntillaMayores = {'ciudad':['Santo Domingo','La Habana','San Juan','Kington','Puerto Principe'],
'pais':['República Dominicana','Cuba','Puerto Rico','Jaimaica','Haití'],
'poblacion':[10528.39,11389.56,3474.18,2725.94,10711.07]}
dfAntillasMay = DataFrame(AntillaMayores)
dfAntillasMay
# Create a DataFrame from multiple lists
# Creating several lists with data on the independent territories of the Lesser Antilles
# data source: https://es.wikipedia.org/wiki/Antillas_Menores
pais = ['Antigua y Barbuda', 'Barbados', 'Dominica', 'Granada', 'San Cristóbal y Nieve', 'San Vicente y las Granadinas', 'Santa Lucía', 'Trinidad y Tobago']
idiomaOficial = ['Inglés','Inglés','Inglés','Inglés','Inglés','Inglés','Inglés','Inglés']
superficieTerritorial = [443,431,754, 344, 261,389, 616, 5128 ]
poblacion = [68722 , 279912 , 69278 , 89502 , 38958, 117534, 160145, 1075066]
etiquetas = ['pais','idioma','superficie','población']
listacolumnas = [pais,idiomaOficial,superficieTerritorial,poblacion]
datazipped = list(zip(etiquetas,listacolumnas))
dataantillas = dict(datazipped)
dfantillasmenores = DataFrame(dataantillas)
dfantillasmenores
# Changing the column names
# make it a little more involved
dfantillasmenores.rename(columns={'idioma': 'Idioma Oficial','pais':'País','población':'Población','superficie':'Superficie Territorial'},inplace=True)
dfantillasmenores
# Broadcasting, i.e. an in-memory change, lets us add new data or new columns
# as needed, e.g. a new column to hold the tourist visits in 2017
dfantillasmenores['Turismo 2017(MM)']=1
# We automatically assign 1
dfantillasmenores
dfantillasmenores['Turismo 2017(MM)']=dfantillasmenores['Población']/dfantillasmenores['Superficie Territorial']**3
dfantillasmenores
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Series is similar to a numpy array (ndarray), except that it has axis labels.
Step2: Exercise 1
Step3: DataFrames are defined as labeled two-dimensional arrays; each column is a Series.
|
15,450 | <ASSISTANT_TASK:>
Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
sys.path.append('..')
#sys.path.append('../../spystats')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Use the ggplot style
plt.style.use('ggplot')
from external_plugins.spystats.spystats import tools as sptools
import scipy
vm = sptools.ExponentialVariogram(sill=0.3,range_a=0.4)
%time xx,yy,z = sptools.simulatedGaussianFieldAsPcolorMesh(vm)
plt.imshow(z)
x = xx[1,:]
y = yy[:,1]
import scipy.fftpack as fft
c_delta = lambda d : np.hstack(((2 + d),-1,np.zeros(128 - 3),-1))
c_base = c_delta(0.1)
c_base
C = scipy.linalg.circulant(c_base)
n = C.shape[1]
C_inv = np.linalg.inv(C)
plt.imshow(C_inv, interpolation = 'None')
C_inv.max()
plop = fft.ifft(fft.fft(c_base) ** -1)
#plop = fft.rifft(fft.rfft(c_base) ** -1) / n
plop.real.max()
C_inv2 = scipy.linalg.circulant(plop.real)
plt.imshow(C_inv2, interpolation = 'None')
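# Quick sanity check (added sketch, not in the original notebook): compare the
# inverse assembled from the FFT of the circulant base with the dense inverse
# computed above.
print(np.allclose(C_inv, C_inv2))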
C_chol = np.linalg.cholesky(C_inv)
z = np.random.normal(0, 1, [n])
mmm = np.matmul(C_chol, z)
plt.plot(mmm);
lambda_vec = fft.fft(c_base)
Lambda_aux = np.power(lambda_vec, -0.5)
z_re = np.random.normal(0, 1, [n])
z_im = np.random.normal(0, 1, [n])
z = z_re + 1j * z_im
# z = np.random.normal(0, 1, [n])
x = fft.fft(Lambda_aux.real * z).real / np.sqrt(n)
plt.plot(x);
#c_delta = lambda d : np.hstack(((4 + d),-1,np.zeros(128 - 3),-1))
#c_delta = lambda d : np.hstack(((0),-1,np.zeros(128 - 3),-1))
n = 64
N = 64
delta = 0.1
c_base = np.zeros([n, N])
c_base[0, 0] = 4 + delta
c_base[0, 1] = -1
c_base[0, 2] = -1
c_base[1, 0] = -1
c_base[2, 0] = -1
c_base[0, N-1] = -1
c_base[0, N-2] = -1
c_base[N-1, 0] = -1
c_base[N-2, 0] = -1
c_base
%%time
lambda_mat = fft.fft2(c_base)
z_re = np.random.normal(0, 1, [n, N])
z_im = np.random.normal(0, 1, [n, N])
z = z_re + 1j * z_im
x = fft.fft2((lambda_mat ** -0.5) * z).real / np.sqrt(n *N)
plt.imshow(x, interpolation = 'None')
%%time
n = 64
N = 64
delta = 0.0001
c_base = np.zeros([n, N])
c_base[0, 0] = 4 + delta
c_base[0, 1] = -1
#c_base[0, 2] = -1
c_base[1, 0] = -1
#c_base[2, 0] = -1
c_base[0, N-1] = -1
#c_base[0, N-2] = -1
c_base[N-1, 0] = -1
#c_base[N-2, 0] = -1
lambda_mat = fft.fft2(c_base)
z_re = np.random.normal(0, 1, [n, N])
z_im = np.random.normal(0, 1, [n, N])
z = z_re + 1j * z_im
x = fft.fft2((lambda_mat ** -0.5) * z).real / np.sqrt(n *N)
plt.imshow(x, interpolation = 'none')
## Simulate random noise (Normal distributed)
zr = scipy.stats.norm.rvs(size=(C.size,2),loc=0,scale=1)
zr.dtype=np.complex_
#plt.hist(zr.real)
from scipy.fftpack import ifft2, fft2
Lm = scipy.sqrt(C.shape[0]*C.shape[0]) * fft2(C)
Lm.shape
zr.shape
Lm.size
%time v = fft2(scipy.sqrt(Lm) * zr.reshape(Lm.shape))
x = v.real
x.shape
plt.imshow(x,interpolation='None')
cc = scipy.linalg.inv(C)
plt.plot(cc[:,0])
n = x.shape[0]
mm = scipy.stats.multivariate_normal(np.zeros(n),cc)
mmm = mm.rvs()
plt.imshow(mmm.reshape(100,100))
scipy.stats.multivariate_normal?
nn = mm.rvs()
from scipy.fftpack import ifftn
import matplotlib.pyplot as plt
import matplotlib.cm as cm
N = 30
f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, sharex='col', sharey='row')
xf = np.zeros((N,N))
xf[0, 5] = 1
xf[0, N-5] = 1
Z = ifftn(xf)
ax1.imshow(xf, cmap=cm.Reds)
ax4.imshow(np.real(Z), cmap=cm.gray)
xf = np.zeros((N, N))
xf[5, 0] = 1
xf[N-5, 0] = 1
Z = ifftn(xf)
ax2.imshow(xf, cmap=cm.Reds)
ax5.imshow(np.real(Z), cmap=cm.gray)
xf = np.zeros((N, N))
xf[5, 10] = 1
xf[N-5, N-10] = 1
Z = ifftn(xf)
ax3.imshow(xf, cmap=cm.Reds)
ax6.imshow(np.real(Z), cmap=cm.gray)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For benchmarking we will perform a GF simulation.
Step2: Simulation of a temporal GMRF with DFT
Step3: Now let's build the circulant matrix for the torus
Step4: Algorithm to simulate a GMRF with a block-circulant matrix
Step5: Example of performing an FFT in two dimensions
|
15,451 | <ASSISTANT_TASK:>
Python Code:
!ls -ltr /data
spark
df = spark.read.format("csv").option("header","true")\
.option("inferSchema","true").load("/data/Combined_Cycle_Power_Plant.csv")
df.show()
df.cache()
df.limit(10).toPandas().head()
from pyspark.ml.feature import *
vectorizer = VectorAssembler()
vectorizer.setInputCols(["AT", "V", "AP", "RH"])
vectorizer.setOutputCol("features")
df_vect = vectorizer.transform(df)
df_vect.show(10, False)
print(vectorizer.explainParams())
from pyspark.ml.regression import LinearRegression
lr = LinearRegression()
print(lr.explainParams())
lr.setLabelCol("EP")
lr.setFeaturesCol("features")
model = lr.fit(df_vect)
type(model)
print("R2:", model.summary.r2)
print("Intercept: ", model.intercept, "Coefficients", model.coefficients)
df_pred = model.transform(df_vect)
df_pred.show()
from pyspark.ml.evaluation import RegressionEvaluator
evaluator = RegressionEvaluator()
print(evaluator.explainParams())
evaluator = RegressionEvaluator(labelCol = "EP",
predictionCol = "prediction",
metricName = "rmse")
evaluator.evaluate(df_pred)
from pyspark.ml.pipeline import Pipeline, PipelineModel
pipeline = Pipeline()
print(pipeline.explainParams())
pipeline.setStages([vectorizer, lr])
pipelineModel = pipeline.fit(df)
pipeline.getStages()
lr_model = pipelineModel.stages[1]
lr_model .coefficients
pipelineModel.transform(df).show()
evaluator.evaluate(pipelineModel.transform(df))
pipelineModel.save("/tmp/lr-pipeline")
!tree /tmp/lr-pipeline
saved_model = PipelineModel.load("/tmp/lr-pipeline")
saved_model.stages[1].coefficients
saved_model.transform(df).show()
df_train, df_test = df.randomSplit(weights=[0.7, 0.3], seed = 200)
pipelineModel = pipeline.fit(df_train)
evaluator.evaluate(pipelineModel.transform(df_test))
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit
paramGrid = ParamGridBuilder()\
.addGrid(lr.regParam, [0.1, 0.01]) \
.addGrid(lr.fitIntercept, [False, True])\
.addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])\
.build()
# In this case the estimator is simply the linear regression.
# A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
tvs = TrainValidationSplit(estimator=lr,
estimatorParamMaps=paramGrid,
evaluator=evaluator,
trainRatio=0.8)
tuned_model = tvs.fit(vectorizer.transform(df_train))
tuned_model.bestModel, tuned_model.validationMetrics
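# Follow-up sketch (added, not in the original notebook): since the estimator
# handed to TrainValidationSplit was the plain LinearRegression, bestModel is a
# fitted LinearRegressionModel whose learned parameters can be inspected directly.
print("Best model intercept:", tuned_model.bestModel.intercept)
print("Best model coefficients:", tuned_model.bestModel.coefficients)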
df_test_pred = tuned_model.transform(vectorizer.transform(df_test))
df_test_pred.show()
evaluator.evaluate(df_test_pred)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Convert Spark Dataframe to Pandas Dataframe
Step3: Verctorize the features
Step4: Fit Linear Regression Model
Step5: View model summary
Step6: Predict
Step7: Evaluate
Step8: Build a pipeline
Step9: Save the pipeline to disk to persist the model
Step10: Load the persisted model from the disk
Step11: Tune the model
|
15,452 | <ASSISTANT_TASK:>
Python Code:
from taxii2client import Collection
from stix2 import CompositeDataSource, FileSystemSource, TAXIICollectionSource
# create FileSystemStore
fs = FileSystemSource("/tmp/stix2_source")
# create TAXIICollectionSource
colxn = Collection('http://127.0.0.1:5000/trustgroup1/collections/91a7b528-80eb-42ed-a74d-c6fbd5a26116/', user="user1", password="Password1")
ts = TAXIICollectionSource(colxn)
# add them both to the CompositeDataSource
cs = CompositeDataSource()
cs.add_data_sources([fs,ts])
# get an object that is only in the filesystem
intrusion_set = cs.get('intrusion-set--f3bdec95-3d62-42d9-a840-29630f6cdc1a')
print(intrusion_set.serialize(pretty=True))
# get an object that is only in the TAXII collection
ind = cs.get('indicator--a740531e-63ff-4e49-a9e1-a0a3eed0e3e7')
print(ind.serialize(pretty=True))
import sys
from stix2 import Filter
# create filter for STIX objects that have external references to MITRE ATT&CK framework
f = Filter("external_references.source_name", "=", "mitre-attack")
# create filter for STIX objects that are not of SDO type Attack-Pattnern
f1 = Filter("type", "!=", "attack-pattern")
# create filter for STIX objects that have the "threat-report" label
f2 = Filter("labels", "in", "threat-report")
# create filter for STIX objects that have been modified past the timestamp
f3 = Filter("modified", ">=", "2017-01-28T21:33:10.772474Z")
# create filter for STIX objects that have been revoked
f4 = Filter("revoked", "=", True)
from stix2 import MemoryStore, FileSystemStore, FileSystemSource
fs = FileSystemStore("/tmp/stix2_store")
fs_source = FileSystemSource("/tmp/stix2_source")
# attach filter to FileSystemStore
fs.source.filters.add(f)
# attach multiple filters to FileSystemStore
fs.source.filters.add([f1,f2])
# can also attach filters to a Source
# attach multiple filters to FileSystemSource
fs_source.filters.add([f3, f4])
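# Sketch of the other option (added, not in the original notebook): a filter
# list can also be supplied directly with a query call instead of being attached
# to the source; it is combined with any filters already attached to fs_source.
results = fs_source.query([f])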
mem = MemoryStore()
# As it is impractical to only use MemorySink or MemorySource,
# attach a filter to a MemoryStore
mem.source.filters.add(f)
# attach multiple filters to a MemoryStore
mem.source.filters.add([f1,f2])
from stix2 import Campaign, Identity, Indicator, Malware, Relationship
mem = MemoryStore()
cam = Campaign(name='Charge', description='Attack!')
idy = Identity(name='John Doe', identity_class="individual")
ind = Indicator(pattern_type='stix', pattern="[file:hashes.MD5 = 'd41d8cd98f00b204e9800998ecf8427e']")
mal = Malware(name="Cryptolocker", is_family=False, created_by_ref=idy)
rel1 = Relationship(ind, 'indicates', mal,)
rel2 = Relationship(mal, 'targets', idy)
rel3 = Relationship(cam, 'uses', mal)
mem.add([cam, idy, ind, mal, rel1, rel2, rel3])
print(mem.creator_of(mal).serialize(pretty=True))
rels = mem.relationships(mal)
len(rels)
mem.relationships(mal, relationship_type='indicates')
mem.relationships(mal, source_only=True)
mem.relationships(mal, target_only=True)
mem.related_to(mal, target_only=True, relationship_type='uses')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filters
Step2: For Filters to be applied to a query, they must be either supplied with the query call or attached to a DataStore, more specifically to a DataSource whether that DataSource is a part of a DataStore or stands by itself.
Step3: Note
Step4: If a STIX object has a created_by_ref property, you can use the creator_of() method to retrieve the Identity object that created it.
Step5: Use the relationships() method to retrieve all the relationship objects that reference a STIX object.
Step6: You can limit it to only specific relationship types
Step7: You can limit it to only relationships where the given object is the source
Step8: And you can limit it to only relationships where the given object is the target
Step9: Finally, you can retrieve all STIX objects related to a given STIX object using related_to(). This calls relationships() but then performs the extra step of getting the objects that these Relationships point to. related_to() takes all the same arguments that relationships() does.
|
15,453 | <ASSISTANT_TASK:>
Python Code:
# Run this cell, but please don't change it.
# These lines import the Numpy and Datascience modules.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('lab07.ok')
# For the curious: this is how to display a YouTube video in a
# Jupyter notebook. The argument to YouTubeVideo is the part
# of the URL (called a "query parameter") that identifies the
# video. For example, the full URL for this video is:
# https://www.youtube.com/watch?v=wE8NDuzt8eg
from IPython.display import YouTubeVideo
YouTubeVideo("wE8NDuzt8eg")
faithful = Table.read_table("faithful.csv")
faithful
...
duration_mean = ...
duration_std = ...
wait_mean = ...
wait_std = ...
faithful_standard = Table().with_columns(
"duration (standard units)", ...,
"wait (standard units)", ...)
faithful_standard
_ = tests.grade('q1_3')
...
r = ...
r
_ = tests.grade('q1_6')
def plot_data_and_line(dataset, x, y, point_0, point_1):
"""Makes a scatter plot of the dataset, along with a line passing through two points."""
dataset.scatter(x, y, label="data")
plt.plot(make_array(point_0.item(0), point_1.item(0)), make_array(point_0.item(1), point_1.item(1)), label="regression line")
plt.legend(bbox_to_anchor=(1.5,.8))
plot_data_and_line(faithful_standard, "duration (standard units)", "wait (standard units)", make_array(-2, -2*r), make_array(2, 2*r))
slope = ...
slope
intercept = slope*(-duration_mean) + wait_mean
intercept
_ = tests.grade('q2_1')
two_minute_predicted_waiting_time = ...
five_minute_predicted_waiting_time = ...
# Here is a helper function to print out your predictions
# (you don't need to modify it):
def print_prediction(duration, predicted_waiting_time):
print("After an eruption lasting", duration,
"minutes, we predict you'll wait", predicted_waiting_time,
"minutes until the next eruption.")
print_prediction(2, two_minute_predicted_waiting_time)
print_prediction(5, five_minute_predicted_waiting_time)
plot_data_and_line(faithful, "duration", "wait", make_array(2, two_minute_predicted_waiting_time), make_array(5, five_minute_predicted_waiting_time))
faithful_predictions = ...
faithful_predictions
_ = tests.grade("q3_2")
faithful_residuals = ...
faithful_residuals
_ = tests.grade("q3_3")
faithful_residuals.scatter("duration", "residual", color="r")
faithful_residuals.scatter("duration", "wait", label="actual waiting time", color="blue")
plt.scatter(faithful_residuals.column("duration"), faithful_residuals.column("residual"), label="residual", color="r")
plt.plot(make_array(2, 5), make_array(two_minute_predicted_waiting_time, five_minute_predicted_waiting_time), label="regression line")
plt.legend(bbox_to_anchor=(1.7,.8));
zero_minute_predicted_waiting_time = ...
two_point_five_minute_predicted_waiting_time = ...
hour_predicted_waiting_time = ...
print_prediction(0, zero_minute_predicted_waiting_time)
print_prediction(2.5, two_point_five_minute_predicted_waiting_time)
print_prediction(60, hour_predicted_waiting_time)
_ = tests.grade('q4_1')
# For your convenience, you can run this cell to run all the tests at once!
import os
print("Running all tests...")
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
print("Finished running all tests.")
# Run this cell to submit your work *after* you have passed all of the test cells.
# It's ok to run this cell multiple times. Only your final submission will be scored.
!TZ=America/Los_Angeles jupyter nbconvert --output=".lab07_$(date +%m%d_%H%M)_submission.html" lab07.ipynb && echo "Submitted successfully."
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. How Faithful is Old Faithful?
Step2: Some of Old Faithful's eruptions last longer than others. When it has a long eruption, there's generally a longer wait until the next eruption.
Step3: We would like to use linear regression to make predictions, but that won't work well if the data aren't roughly linearly related. To check that, we should look at the data.
Step4: Question 2
Step5: Question 4
Step6: You'll notice that this plot looks exactly the same as the last one! The data really are different, but the axes are scaled differently. (The method scatter scales the axes so the data fill up the available space.) So it's important to read the ticks on the axes.
Step8: 2. The regression line
Step9: How would you take a point in standard units and convert it back to original units? We'd have to "stretch" its horizontal position by duration_std and its vertical position by wait_std.
Step10: We know that the regression line passes through the point (duration_mean, wait_mean). You might recall from high-school algebra that the equation for the line is therefore
Step11: 3. Investigating the regression line
Step12: The next cell plots the line that goes between those two points, which is (a segment of) the regression line.
Step13: Question 2
Step14: Question 3
Step15: Here is a plot of the residuals you computed. Each point corresponds to one eruption. It shows how much our prediction over- or under-estimated the waiting time.
Step16: There isn't really a pattern in the residuals, which confirms that it was reasonable to try linear regression. It's true that there are two separate clouds; the eruption durations seemed to fall into two distinct clusters. But that's just a pattern in the eruption durations, not a pattern in the relationship between eruption durations and waiting times.
Step17: However, unless you have a strong reason to believe that the linear regression model is true, you should be wary of applying your prediction model to data that are very different from the training data.
Step18: Question 2. Do you believe any of these values are reliable predictions? If you don't believe some of them, say why.
|
15,454 | <ASSISTANT_TASK:>
Python Code:
# Load libraries
# Math
import numpy as np
# Visualization
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.max_open_warning': 0})
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import ndimage
# Print output of LFR code
import subprocess
# Sparse matrix
import scipy.sparse
import scipy.sparse.linalg
# 3D visualization
import pylab
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot
# Import data
import scipy.io
# Import functions in lib folder
import sys
sys.path.insert(1, 'lib')
# Import helper functions
%load_ext autoreload
%autoreload 2
from lib.utils import construct_kernel
from lib.utils import compute_kernel_kmeans_EM
from lib.utils import compute_purity
# Import distance function
import sklearn.metrics.pairwise
# Remove warnings
import warnings
warnings.filterwarnings("ignore")
# Load dataset: W is the Adjacency Matrix and Cgt is the ground truth clusters
mat = scipy.io.loadmat('datasets/mnist_2000_graph.mat')
W = mat['W']
n = W.shape[0]
Cgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze()
nc = len(np.unique(Cgt))
print('Number of nodes =',n)
print('Number of classes =',nc);
# Degree Matrix
d = scipy.sparse.csr_matrix.sum(W,axis=-1)
# Compute D^(-0.5)
d_sqrt = np.sqrt(d)
d_sqrt_inv = 1./d_sqrt
D_sqrt_inv = scipy.sparse.diags(d_sqrt_inv.A.squeeze(), 0)
# Create Identity matrix
I = scipy.sparse.identity(d.size, dtype=W.dtype)
# Construct A
A = I - D_sqrt_inv*W*D_sqrt_inv
# Perform EVD on A
U = scipy.sparse.linalg.eigsh(A, k=4, which='SM')
fig = plt.figure(1)
ax = fig.gca(projection='3d')
ax.scatter(U[1][:,1], U[1][:,2], U[1][:,3], c=Cgt)
plt.title('$Y^*$')
# Your code here
#lamb, Y_star = scipy.sparse.linalg.eigsh(A, k=4, which='SM')
# Normalize the rows of Y* with the L2 norm, i.e. ||y_i||_2 = 1
#Y_star = Y_star/np.sqrt(np.sum((Y_star)**2))
Y_star = U[1]
Y_star = ( Y_star.T / np.sqrt(np.sum(Y_star**2,axis=1)+1e-10) ).T
# Your code here
# Run standard K-Means
Ker=construct_kernel(Y_star,'linear')
n = Y_star.shape[0]
Theta= np.ones(n)
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
accuracy = compute_purity(C_kmeans,Cgt,nc)
print('accuracy = ',accuracy,'%')
fig = plt.figure(2)
ax = fig.gca(projection='3d')
plt.scatter(Y_star[:,1], U[1][:,2], U[1][:,3], c=Cgt)
plt.title('$Y^*$')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: Question 6
|
15,455 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
from Geant4 import *
from IPython.display import Image
class MyDetectorConstruction(G4VUserDetectorConstruction):
"My Detector Construction"
def __init__(self):
G4VUserDetectorConstruction.__init__(self)
self.solid = {}
self.logical = {}
self.physical = {}
self.create_world(side = 4000,
material = "G4_AIR")
self.create_cylinder(name = "vacuum",
radius = 200,
length = 320,
translation = [0,0,900],
material = "G4_Galactic",
colour = [1.,1.,1.,0.1],
mother = 'world')
self.create_cylinder(name = "upper_scatter",
radius = 10,
length = 0.01,
translation = [0,0,60],
material = "G4_Ta",
colour = [1.,1.,1.,0.7],
mother = 'vacuum')
self.create_cylinder(name = "lower_scatter",
radius = 30,
length = 0.01,
translation = [0,0,20],
material = "G4_Al",
colour = [1.,1.,1.,0.7],
mother = 'vacuum')
self.create_applicator_aperture(name = "apature_1",
inner_side = 142,
outer_side = 182,
thickness = 6,
translation = [0,0,449],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "apature_2",
inner_side = 130,
outer_side = 220,
thickness = 12,
translation = [0,0,269],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "apature_3",
inner_side = 110,
outer_side = 180,
thickness = 12,
translation = [0,0,140],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "apature_4",
inner_side = 100,
outer_side = 140,
thickness = 12,
translation = [0,0,59],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "cutout",
inner_side = 100,
outer_side = 120,
thickness = 6,
translation = [0,0,50],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_cube(name = "phantom",
side = 500,
translation = [0,0,-250],
material = "G4_WATER",
colour = [0,0,1,0.4],
mother = 'world')
def create_world(self, **kwargs):
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
side = kwargs['side']
self.solid['world'] = G4Box("world", side/2., side/2., side/2.)
self.logical['world'] = G4LogicalVolume(self.solid['world'],
material,
"world")
self.physical['world'] = G4PVPlacement(G4Transform3D(),
self.logical['world'],
"world", None, False, 0)
visual = G4VisAttributes()
visual.SetVisibility(False)
self.logical['world'].SetVisAttributes(visual)
def create_cylinder(self, **kwargs):
name = kwargs['name']
radius = kwargs['radius']
length = kwargs['length']
translation = G4ThreeVector(*kwargs['translation'])
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
visual = G4VisAttributes(G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
self.solid[name] = G4Tubs(name, 0., radius, length/2., 0., 2*pi)
self.logical[name] = G4LogicalVolume(self.solid[name],
material,
name)
self.physical[name] = G4PVPlacement(None, translation,
name,
self.logical[name],
mother, False, 0)
self.logical[name].SetVisAttributes(visual)
def create_cube(self, **kwargs):
name = kwargs['name']
side = kwargs['side']
translation = G4ThreeVector(*kwargs['translation'])
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
visual = G4VisAttributes(G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
self.solid[name] = G4Box(name, side/2., side/2., side/2.)
self.logical[name] = G4LogicalVolume(self.solid[name],
material,
name)
self.physical[name] = G4PVPlacement(None, translation,
name,
self.logical[name],
mother, False, 0)
self.logical[name].SetVisAttributes(visual)
def create_applicator_aperture(self, **kwargs):
name = kwargs['name']
inner_side = kwargs['inner_side']
outer_side = kwargs['outer_side']
thickness = kwargs['thickness']
translation = G4ThreeVector(*kwargs['translation'])
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
visual = G4VisAttributes(G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
inner_box = G4Box("inner", inner_side/2., inner_side/2., thickness/2. + 1)
outer_box = G4Box("outer", outer_side/2., outer_side/2., thickness/2.)
self.solid[name] = G4SubtractionSolid(name,
outer_box,
inner_box)
self.logical[name] = G4LogicalVolume(self.solid[name],
material,
name)
self.physical[name] = G4PVPlacement(None,
translation,
name,
self.logical[name],
mother, False, 0)
self.logical[name].SetVisAttributes(visual)
# -----------------------------------------------------------------
def Construct(self): # return the world volume
return self.physical['world']
# set geometry
detector = MyDetectorConstruction()
gRunManager.SetUserInitialization(detector)
# set physics list
physics_list = FTFP_BERT()
gRunManager.SetUserInitialization(physics_list)
class MyPrimaryGeneratorAction(G4VUserPrimaryGeneratorAction):
"My Primary Generator Action"
def __init__(self):
G4VUserPrimaryGeneratorAction.__init__(self)
particle_table = G4ParticleTable.GetParticleTable()
electron = particle_table.FindParticle(G4String("e-"))
positron = particle_table.FindParticle(G4String("e+"))
gamma = particle_table.FindParticle(G4String("gamma"))
beam = G4ParticleGun()
beam.SetParticleEnergy(6*MeV)
beam.SetParticleMomentumDirection(G4ThreeVector(0,0,-1))
beam.SetParticleDefinition(electron)
beam.SetParticlePosition(G4ThreeVector(0,0,1005))
self.particleGun = beam
def GeneratePrimaries(self, event):
self.particleGun.GeneratePrimaryVertex(event)
primary_generator_action = MyPrimaryGeneratorAction()
gRunManager.SetUserAction(primary_generator_action)
# Initialise
gRunManager.Initialize()
%%file macros/raytrace.mac
/vis/open RayTracer
/vis/rayTracer/headAngle 340.
/vis/rayTracer/eyePosition 200 200 250 cm
/vis/rayTracer/trace images/world.jpg
gUImanager.ExecuteMacroFile('macros/raytrace.mac')
# Show image
Image(filename="images/world.jpg")
%%file macros/dawn.mac
/vis/open DAWNFILE
/vis/scene/create
/vis/scene/add/volume
/vis/scene/add/trajectories smooth
/vis/modeling/trajectories/create/drawByCharge
/vis/modeling/trajectories/drawByCharge-0/default/setDrawStepPts true
/vis/modeling/trajectories/drawByCharge-0/default/setStepPtsSize 2
/vis/scene/endOfEventAction accumulate 1000
/vis/scene/add/hits
/vis/sceneHandler/attach
#/vis/scene/add/axes 0. 0. 0. 10. cm
/vis/viewer/set/targetPoint 0.0 0.0 300.0 mm
/vis/viewer/set/viewpointThetaPhi 90 0
/vis/viewer/zoom 1
gUImanager.ExecuteMacroFile('macros/dawn.mac')
gRunManager.BeamOn(50)
!mv g4_00.prim images/world.prim
!dawn -d images/world.prim
!convert images/world.eps images/world.png
Image("images/world.png")
!G4VRML_DEST_DIR=.
!G4VRMLFILE_MAX_FILE_NUM=1
!G4VRMLFILE_VIEWER=echo
gApplyUICommand("/vis/open VRML2FILE")
gRunManager.BeamOn(1)
!mv g4_00.wrl images/world.wrl
%load_ext version_information
%version_information matplotlib, numpy
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting the requirements for a simulation
Step2: Now that you have made your geometry class, time to load it up
Step3: The physics list
Step4: Generating the beam
Step5: And this is now loading up the generator we have just made
Step6: Initialise the simulation
Step7: Seeing the geometry
Step8: Seeing the particle tracks
Step9: Once we have defined how we want to see the tracks we can beam on our pretend linac with 50 electrons.
Step10: The beam-on created a .prim file, which needs to be converted to PNG for viewing.
Step11: And here is our wonderful simulation
Step12: Versions
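For reference, the steps above follow the standard run sequence; a minimal sketch of that skeleton, reusing only the classes and calls already shown in the code above, is:
# minimal run skeleton (sketch): geometry -> physics -> beam -> initialise -> run
detector = MyDetectorConstruction()                     # Steps 1-2: geometry
gRunManager.SetUserInitialization(detector)
gRunManager.SetUserInitialization(FTFP_BERT())          # Step 3: physics list
gRunManager.SetUserAction(MyPrimaryGeneratorAction())   # Steps 4-5: primary generator
gRunManager.Initialize()                                # Step 6
gRunManager.BeamOn(50)                                  # Step 9: fire 50 primary electrons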
|
15,456 | <ASSISTANT_TASK:>
Python Code:
import matplotlib as mpl
mpl.use('TkAgg')
import matplotlib.pyplot as plt
%matplotlib inline
import bacteriopop_utils
import feature_selection_utils
import load_data
loaded_data = data = load_data.load_data()
loaded_data.shape
loaded_data[loaded_data['phylum'].isnull()].head(3)
loaded_data.head()
bacteriopop_utils.filter_by_abundance(dataframe=loaded_data, low= 0.6).head()
bacteriopop_utils.reduce_data(dataframe=loaded_data, min_abundance= 0.6,
phylo_column='genus', oxygen='high').head()
raw_dmd_data = bacteriopop_utils.reduce_data(
dataframe=loaded_data, min_abundance= 0.01,
phylo_column='genus', oxygen='Low')
data_dict = bacteriopop_utils.break_apart_experiments(raw_dmd_data)
data_dict.keys()
# Can't view generators very easily!!!
data_dict.itervalues()
# But we can make a list from them and grab the 0th item
first_df = list(data_dict.itervalues())[0]
first_df.head(3)
first_df[first_df['genus'] == 'other'].head()
first_df[first_df['genus'] != ''].pivot(index='genus', columns='week', values='abundance')
raw_dmd_data.columns
DMD_input_dict = \
bacteriopop_utils.prepare_DMD_matrices(raw_dmd_data,
groupby_level = "genus")
type(DMD_input_dict)
DMD_input_dict[('Low', 1)]
DMD_input_dict[('Low', 1)].shape
DMD_input_dict[('Low', 1)].groupby('week')['abundance'].sum()
DMD_test_matrix = DMD_input_dict[('Low', 1)]
# Who is in there?
DMD_test_matrix.reset_index()['genus'].unique()
# following example 1: https://pythonhosted.org/modred/tutorial_modaldecomp.html
import modred as MR
num_modes = 1
modes, eig_vals = MR.compute_POD_matrices_snaps_method(DMD_test_matrix, range(num_modes))
modes
eig_vals
extracted_features = bacteriopop_utils.extract_features(
dataframe = loaded_data,
column_list = ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'oxygen', 'abundance']
# default list was: ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'length', 'abundance', 'project']
)
extracted_features.head()
extracted_features.shape
pca_results = feature_selection_utils.pca_bacteria(
data = extracted_features.head(100), n_components = 10)
pca_results.components_
feature_selection_utils.calculate_features_target_correlation(
data = extracted_features.head(100),
features = extracted_features.columns.tolist(),
target='abundance',
method="Pearson")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure none of the phyla are NA (checking the 160304 update to load_data.py)
Step2: Test filter and reduce functions using a high threshold, which selects for genus==Methylobacter
Step3: Demo of DMD data prep
Step4: Errors are thrown by the functions below if you drop min_abundance any lower. I think it is hanging up on multiple "other" rows.
Step5: We can get each dataframe out like this
Step6: DMD
Step7: I'm stuck at the installation of modred
Step8: Feature extraction and PCA
Step9: Just do PCA on a tiny bit of the data as a demo
Step10: Do correlations for a tiny subset of the data.
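For Step 7, modred is one option; a minimal NumPy-only sketch of the DMD idea (fit a linear propagator A with x_{t+1} ≈ A x_t on a weekly snapshot matrix) is shown below. The toy array and its layout are illustrative assumptions, not part of the original utilities.
import numpy as np

snapshots = np.random.rand(8, 11)   # stand-in for the genus-by-week pivot above (.values), shape (n_taxa, n_weeks)
X, Y = snapshots[:, :-1], snapshots[:, 1:]
A = Y.dot(np.linalg.pinv(X))        # least-squares propagator
eigvals, modes = np.linalg.eig(A)   # DMD eigenvalues / modes of A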
|
15,457 | <ASSISTANT_TASK:>
Python Code:
import inflect # for string manipulation
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats as st
import matplotlib.pyplot as plt
%matplotlib inline
filename = '/Users/excalibur/py/nanodegree/intro_ds/final_project/improved-dataset/turnstile_weather_v2.csv'
# import data
data = pd.read_csv(filename)
entries_hourly_by_row = data['ENTRIESn_hourly'].values
def map_column_to_entries_hourly(column):
instances = column.values # e.g., longitude_instances = data['longitude'].values
# reduce
entries_hourly = {} # e.g., longitude_entries_hourly = {}
for i in np.arange(len(instances)):
if instances[i] in entries_hourly:
entries_hourly[instances[i]] += float(entries_hourly_by_row[i])
else:
entries_hourly[instances[i]] = float(entries_hourly_by_row[i])
return entries_hourly # e.g., longitudes, entries
def create_df(entries_hourly_dict, column1name):
# e.g, longitude_df = pd.DataFrame(data=longitude_entries_hourly.items(), columns=['longitude','entries'])
df = pd.DataFrame(data=entries_hourly_dict.items(), columns=[column1name,'entries'])
return df # e.g, longitude_df
rain_entries_hourly = map_column_to_entries_hourly(data['rain'])
rain_df = create_df(rain_entries_hourly, 'rain')
rain_days = data[data['rain'] == 1]
no_rain_days = data[data['rain'] == 0]
def plot_box(sample1, sample2):
plt.boxplot([sample2, sample1], vert=False)
plt.title('NUMBER OF ENTRIES PER SAMPLE')
plt.xlabel('ENTRIESn_hourly')
plt.yticks([1, 2], ['Sample 2', 'Sample 1'])
plt.show()
def describe_samples(sample1, sample2):
size1, min_max1, mean1, var1, skew1, kurt1 = st.describe(sample1)
size2, min_max2, mean2, var2, skew2, kurt2 = st.describe(sample2)
med1 = np.median(sample1)
med2 = np.median(sample2)
std1 = np.std(sample1)
std2 = np.std(sample2)
print "Sample 1 (rainy days):\n min = {0}, max = {1},\n mean = {2:.2f}, median = {3}, var = {4:.2f}, std = {5:.2f}".format(min_max1[0], min_max1[1], mean1, med1, var1, std1)
print "Sample 2 (non-rainy days):\n min = {0}, max = {1},\n mean = {2:.2f}, median = {3}, var = {4:.2f}, std = {5:.2f}".format(min_max2[0], min_max2[1], mean2, med2, var2, std2)
class MannWhitneyU:
def __init__(self,n):
self.n = n
self.num_of_tests = 1000
self.sample1 = 0
self.sample2 = 0
def sample_and_test(self, plot, describe):
self.sample1 = np.random.choice(rain_days['ENTRIESn_hourly'], size=self.n, replace=False)
self.sample2 = np.random.choice(no_rain_days['ENTRIESn_hourly'], size=self.n, replace=False)
### the following two self.sample2 assignments are for testing purposes ###
#self.sample2 = self.sample1 # test when samples are same
#self.sample2 = np.random.choice(np.random.randn(self.n),self.n) # test for when samples are very different
if plot == True:
plot_box(self.sample1,self.sample2)
if describe == True:
describe_samples(self.sample1,self.sample2)
return st.mannwhitneyu(self.sample1, self.sample2)
def effect_sizes(self, U):
# Wendt's rank-biserial correlation
r = (1 - np.true_divide((2*U),(self.n*self.n)))
# Cohen's d
s = np.sqrt(np.true_divide((((self.n-1)*np.std(self.sample1)**2) + ((self.n-1)*np.std(self.sample2)**2)), (self.n+self.n-2)))
d = np.true_divide((np.mean(self.sample1) - np.mean(self.sample2)), s)
return r,d
def trial_series(self):
success = 0
U_values = []
p_values = []
d_values = []
r_values = []
for i in np.arange(self.num_of_tests):
U, p = self.sample_and_test(False, False)
r, d = self.effect_sizes(U)
U_values.append(U)
# scipy.stats.mannwhitneyu returns p for a one-sided hypothesis,
# so multiply by 2 for two-sided
p_values.append(p*2)
d_values.append(d)
r_values.append(r)
if p <= 0.05:
success += 1
print "n = {0}".format(self.n)
print "average U value: {0:.2f}".format(np.mean(U_values))
print "number of times p <= 0.05: {0}/{1} ({2}%)".format(success, self.num_of_tests, (np.true_divide(success,self.num_of_tests)*100))
print "average p value: {0:.2f}".format(np.mean(p_values))
print "average rank-biserial r value: {0:.2f}".format(np.mean(r_values))
print "average Cohen's d value: {0:.2f}".format(np.mean(d_values))
plt.hist(p_values, color='green', alpha=0.3)
plt.show()
sample_sizes = [30, 100, 500, 1500, 3000, 5000, 9585]
for n in sample_sizes:
MannWhitneyU(n).trial_series()
print "Shape of rainy-days data:" +str(rain_days.shape)
N = rain_days.shape[0]
print "N = " + str(N)
print "0.05 * N = " + str(0.05 * N)
n = 450
mwu = MannWhitneyU(n)
U, p = mwu.sample_and_test(True,True)
r, d = mwu.effect_sizes(U)
print "\nMann-Whitney U test results:"
print "n = {0}".format(n)
print "U = {0}".format(U)
print "p = {0:.2f}".format(np.mean(p))
print "rank-biserial r value: {0:.2f}".format(np.mean(r))
print "Cohen's d value: {0:.2f}".format(np.mean(d))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions for Getting, Mapping, and Plotting Data
Step2: Function for Basic Statistics
Step3: Formulas Implemented
Step4: Section 1. Statistical Test
Step5: As witnessed above, when rainy and non-rainy days from the data set are considered populations (as opposed to samples themselves), it takes significantly large sample sizes from each population (e.g., $n = 3000$, which is more than $30\%$ of the total number of rainy days in the data set) to attain low $p$-values<sup>1</sup> frequently enough to reject the null hypothesis of the Mann-Whitney $U$ test<sup>2</sup> with the critical values proposed below.
Step6: The Mann-Whitney $U$ test is a nonparametric test of the null hypothesis that the distributions of two populations are the same.
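As a self-contained illustration of the test itself (toy data, independent of the subway dataset); note that, as the class above points out, this scipy.stats.mannwhitneyu returns a one-sided p-value, which is doubled for a two-sided test.
import numpy as np
import scipy.stats as st

a = np.random.exponential(scale=2.0, size=200)   # toy "rainy" sample
b = np.random.exponential(scale=2.2, size=200)   # toy "non-rainy" sample
U, p_one_sided = st.mannwhitneyu(a, b)
print("U = {0}, two-sided p = {1:.4f}".format(U, 2 * p_one_sided))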
|
15,458 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize
plt.plot([1, 2, 3], [10, 30, 20], "o-")
plt.xlabel("Unit of time (t)")
plt.ylabel("Price of one unit of energy (c)")
plt.title("Cost of energy on the market")
plt.show();
# Price of energy on the market
price = [10, 30, 20]
plt.plot(price);
stock_max = 100 # battery capacity
# Coefficients of the linear objective function to be minimized
p = -np.array(price)
# 2-D array which, when matrix-multiplied by x, gives the values of the upper-bound inequality constraints at x.
A = [[-1, 0, 0],
[ 1, 0, 0],
[-1, -1, 0],
[ 1, 1, 0],
[-1, -1, -1],
[ 1, 1, 1]]
# 1-D array of values representing the upper-bound of each inequality constraint (row) in A.
b = [stock_max, 0, stock_max, 0, stock_max, 0]
# Sequence of (min, max) pairs for each element in x, defining the bounds on that parameter.
# Use None for one of min or max when there is no bound in that direction.
# By default bounds are (0, None) (non-negative).
# If a sequence containing a single tuple is provided, then min and max will be applied to all variables in the problem.
x0_bounds = (None, None)
x1_bounds = (None, None)
x2_bounds = (None, None)
bounds = (x0_bounds, x1_bounds, x2_bounds)
scipy.optimize.linprog(p, A_ub=A, b_ub=b, bounds=bounds)
# Cost of energy on the market
#price = [10, 30, 20] # -> -100, 100, 0
#price = [10, 30, 10, 30] # -> [-100., 100., -100., 100.]
#price = [10, 30, 10, 30, 30] # -> [-100., 100., -100., 100., 0.]
#price = [10, 20, 30, 40] # -> [-100., 0., 0., 100.]
price = [10, 30, 20, 50]
price = [10, 30, 20, 50]
plt.plot(price);
p = -np.array(price)
A = np.repeat(np.tril(np.ones(len(price))), 2, axis=0)
A[::2, :] *= -1
A
b = np.zeros(A.shape[0])
b[::2] = stock_max
b
bounds = tuple((None, None) for _p in price)
bounds
%%time
res = scipy.optimize.linprog(p, A_ub=A, b_ub=b, bounds=bounds) # , method='revised simplex'
res
res.x.round(decimals=2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scipy's syntax
Step2: TODO
Step3: Hand write the model
Step4: Automatically make the model
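Before wiring up the battery model, a tiny self-contained example of the same linprog call pattern (minimize c·x subject to A_ub·x <= b_ub and simple bounds) may help; the numbers here are arbitrary.
import scipy.optimize

# minimize -x0 - 2*x1  subject to  x0 + x1 <= 4,  x0 <= 3,  x0, x1 >= 0
res = scipy.optimize.linprog(c=[-1, -2],
                             A_ub=[[1, 1], [1, 0]],
                             b_ub=[4, 3],
                             bounds=[(0, None), (0, None)])
print(res.x)  # expected optimum near [0, 4]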
|
15,459 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
# crear un arreglo
arr = np.arange(0,11)
# desplegar el arreglo
arr
#obtener el valor del indice 8
arr[8]
#obtener los valores de un rango
arr[1:5]
#obtener los valores de otro rango
arr[0:5]
# reemplazar valores en un rango determinado
arr[0:5]=100
# desplegar el arreglo
arr
# Generar nuevamente el arreglo
arr = np.arange(0,11)
# desplegar
arr
# corte de un arreglo
slice_of_arr = arr[0:6]
# desplegar el corte
slice_of_arr
# cambiar valores del corte
slice_of_arr[:]=99
# desplegar los valores del corte
slice_of_arr
# desplegar arreglo
arr
# para obtener una copia se debe hacer explicitamente
arr_copy = arr.copy()
# desplegar el arreglo copia
arr_copy
# generar un arreglo 2D
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))
#Show
arr_2d
# indices de filas
arr_2d[1]
# Formato es arr_2d[row][col] o arr_2d[row,col]
# Seleccionar un solo elemento
arr_2d[1][0]
# Seleccionar un solo elemento
arr_2d[1,0]
# Cortes en 2D
# forma (2,2) desde la esquina superior derecha
arr_2d[:2,1:]
#forma desde la ultima fila
arr_2d[2]
# forma desde la ultima fila
arr_2d[2,:]
# number of columns in the 2D array
arr_length = arr_2d.shape[1]
arr = np.arange(1,11)
arr
arr > 4
bool_arr = arr>4
bool_arr
arr[bool_arr]
arr[arr>2]
x = 2
arr[arr>x]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selection using brackets
Step2: Replacing values
Step3: Note that the changes were also applied to the original array
Step4: The data is not copied, in order to avoid memory problems
Step5: Indexing a 2D array (matrices)
Step6: Selection
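The view-versus-copy behaviour in Steps 3-4 is the key point of this notebook; a compact check (a sketch with throwaway variable names) is:
import numpy as np

arr = np.arange(10)
view = arr[0:5]          # a slice is a view: it shares memory with arr
view[:] = 99
print(arr[:5])           # [99 99 99 99 99] -> the original changed

arr = np.arange(10)
copy = arr[0:5].copy()   # an explicit copy
copy[:] = 99
print(arr[:5])           # [0 1 2 3 4] -> the original is untouched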
|
15,460 | <ASSISTANT_TASK:>
Python Code:
import os
import os.path as op
# Step 2
itasser_download_link = 'my_download_link'
# Step 3
itasser_version_number = '5.1'
# Step 4
itasser_archive = itasser_download_link.split('/')[-1]
os.mkdir(op.expanduser('~/software/itasser/'))
os.chdir(op.expanduser('~/software/itasser/'))
!wget $itasser_download_link
!tar -jxf $itasser_archive
# Step 5
os.mkdir(op.expanduser('~/software/itasser/ITLIB'))
!./I-TASSER5.1/download_lib.pl -libdir ITLIB
# Step 2
tmhmm_download_link = 'my_download_link'
# Step 3
os.mkdir(op.expanduser('~/software/tmhmm/'))
os.chdir(op.expanduser('~/software/tmhmm/'))
!wget $tmhmm_download_link
!tar -zxf tmhmm-2.0c.Linux.tar.gz
# Replace perl path
os.chdir(op.expanduser('~/software/tmhmm/tmhmm-2.0c/bin'))
!perl -i -pe 's{^#!/usr/local/bin/perl}{#!/usr/bin/perl}' tmhmm
!perl -i -pe 's{^#!/usr/local/bin/perl -w}{#!/usr/bin/perl -w}' tmhmmformat.pl
# Create symbolic links
!ln -s $HOME/software/tmhmm/tmhmm-2.0c/bin/* /srv/venv/bin/
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I-TASSER
Step2: TMHMM
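A quick sanity check after both installs (an illustrative sketch; the paths are the ones used above) is to confirm the expected files exist:
import os.path as op

for p in ['~/software/itasser/ITLIB',
          '~/software/tmhmm/tmhmm-2.0c/bin/tmhmm',
          '/srv/venv/bin/tmhmm']:
    print(p, 'exists:', op.exists(op.expanduser(p)))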
|
15,461 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
NN = 6
NF = 5
MM = 5
C = np.zeros((MM, NN))
C[0, 0] = 1; C[0, 1] = -1 # element 1
C[1, 0] = 1; C[1, 2] = -1 # element 2
C[2, 0] = 1; C[2, 3] = -1 # element 3
C[3, 0] = 1; C[3, 4] = -1 # element 4
C[4, 0] = 1; C[4, 5] = -1 # element 4
CL = C[:, 0:(NN - NF)]
CF = C[:, (NN - NF):]
print(C)
q = [5., -1.5, 5., -7.5, 2.5] # force densities in kN/m corresponding to every element
qQ = np.diagflat(q)
gammaF = np.array([-5. + -5.*1j, 3. + -5.*1j, 5. + 3.*1j, -1. + 6.*1j, -5. + 5.*1j ])
fL = np.array([0. + (-5.)*1j])
gammaL = np.zeros(NN - NF)
DL = np.zeros((NN - NF, NN - NF))
DF = np.zeros((NN - NF, NF))
DL = np.dot(np.transpose(CL), np.dot(qQ, CL))
DF = np.dot(np.transpose(CL), np.dot(qQ, CF))
gammaL = np.linalg.solve(DL, fL - np.dot(DF, gammaF))
gammaL
fig = plt.figure(figsize=(9,9))
ax = fig.gca(aspect='equal')
ax.scatter(gammaF.real, gammaF.imag, color='b')
ax.scatter(gammaL.real, gammaL.imag, color='r')
for i in range(MM):
if q[i] > 0.:
col = 'b'
else:
col = 'r'
ax.plot([gammaL.real[0], gammaF.real[i]], [gammaL.imag[0], gammaF.imag[i]], color = col)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
NN = 6
NF = 4
MM = 5
C = np.zeros((MM, NN))
C[0, 0] = 1; C[0, 2] = -1 # element 1
C[1, 0] = 1; C[1, 1] = -1 # element 2
C[2, 1] = 1; C[2, 5] = -1 # element 3
C[3, 0] = 1; C[3, 3] = -1 # element 4
C[4, 1] = 1; C[4, 4] = -1 # element 5
CL = C[:, 0:(NN - NF)]
CF = C[:, (NN - NF):]
print(C)
gammaF = np.array([-5. + 0.*1j, 0. + 2.5*1j, 0 - 2.5*1j, 5. + 0.*1j])
q = [-5. + 1.35*1j, -5. + -5.*1j, -5. + 1.35*1j, 0., 0.] # force densities in kN/m corresponding to every element
qQ = np.diagflat(q)
print(qQ)
fL = np.array([0. + (0.)*1j])
gammaL = np.zeros(NN - NF)
DL = np.zeros((NN - NF, NN - NF))
DF = np.zeros((NN - NF, NF))
DL = np.dot(np.transpose(CL), np.dot(qQ, CL))
DF = np.dot(np.transpose(CL), np.dot(qQ, CF))
gammaL = np.linalg.solve(DL, fL - np.dot(DF, gammaF))
print(gammaL)
fig = plt.figure(figsize=(9,9))
ax = fig.gca(aspect='equal')
ax.scatter(gammaF.real, gammaF.imag, color='b')
ax.scatter(gammaL.real, gammaL.imag, color='r')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
gamma = np.concatenate((gammaL, gammaF), axis=0)
d = np.absolute((np.dot(C, gamma)))
f = q*d
V = np.imag(f)
DM = -V*d
print(DM)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 Classical 2D Force density equations
Step2: We consider now the equilibrium of the node $i$ joining nodes $j, k, l$ through members $m, n, r$, respectively
Step3: 1.3 Restrained nodes
Step4: 1.4 External forces
Step5: 1.5 Solving for the free nodes
Step6: 2 Force density for active bending members
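In matrix form, the system solved in the code above (with $Q = \mathrm{diag}(q)$ and the connectivity matrix split as $C = [C_L \;|\; C_F]$) is the standard force-density system:
$$D_L = C_L^{T} Q\, C_L, \qquad D_F = C_L^{T} Q\, C_F, \qquad D_L\,\gamma_L = f_L - D_F\,\gamma_F,$$
so the free-node coordinates follow as $\gamma_L = D_L^{-1}\left(f_L - D_F\,\gamma_F\right)$, which is exactly what np.linalg.solve computes in both examples.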
|
15,462 | <ASSISTANT_TASK:>
Python Code:
!pip install -q scann
import tensorflow as tf
import numpy as np
from datetime import datetime
PROJECT_ID = 'yourProject' # Change to your project.
BUCKET = 'yourBucketName' # Change to the bucket you created.
REGION = 'yourTrainingRegion' # Change to your AI Platform Training region.
EMBEDDING_FILES_PREFIX = f'gs://{BUCKET}/bqml/item_embeddings/embeddings-*'
OUTPUT_INDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
from index_builder.builder import indexer
indexer.build(EMBEDDING_FILES_PREFIX, OUTPUT_INDEX_DIR)
if tf.io.gfile.exists(OUTPUT_INDEX_DIR):
print("Removing {} contents...".format(OUTPUT_INDEX_DIR))
tf.io.gfile.rmtree(OUTPUT_INDEX_DIR)
print("Creating output: {}".format(OUTPUT_INDEX_DIR))
tf.io.gfile.makedirs(OUTPUT_INDEX_DIR)
timestamp = datetime.utcnow().strftime('%y%m%d%H%M%S')
job_name = f'ks_bqml_build_scann_index_{timestamp}'
!gcloud ai-platform jobs submit training {job_name} \
--project={PROJECT_ID} \
--region={REGION} \
--job-dir={OUTPUT_INDEX_DIR}/jobs/ \
--package-path=index_builder/builder \
--module-name=builder.task \
--config='index_builder/config.yaml' \
--runtime-version=2.2 \
--python-version=3.7 \
--\
--embedding-files-path={EMBEDDING_FILES_PREFIX} \
--output-dir={OUTPUT_INDEX_DIR} \
--num-leaves=500
!gsutil ls {OUTPUT_INDEX_DIR}
from index_server.matching import ScaNNMatcher
scann_matcher = ScaNNMatcher(OUTPUT_INDEX_DIR)
vector = np.random.rand(50)
scann_matcher.match(vector, 5)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import libraries
Step2: Configure GCP environment settings
Step3: Authenticate your GCP account
Step4: Build the ANN index
Step5: Build the index using AI Platform Training
Step6: After the AI Platform Training job finishes, check that the scann_index folder has been created in your Cloud Storage bucket
Step7: Test the ANN index
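A slightly larger smoke test of the matcher (an illustrative sketch only, reusing the scann_matcher built above with 50-dimensional embeddings) is:
import time
import numpy as np

queries = np.random.rand(10, 50)
start = time.time()
results = [scann_matcher.match(q, 5) for q in queries]
print('average match latency: {:.4f}s'.format((time.time() - start) / len(queries)))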
|
15,463 | <ASSISTANT_TASK:>
Python Code:
from pymongo import MongoClient
import pandas as pd
from datetime import datetime
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.airbnb
cursor = db.Rawdata.find()
data = pd.DataFrame(list(cursor))
data.head(1)
data.columns
data = data.drop("listing_url",axis=1)
data = data.drop("scrape_id",axis=1)
data = data.drop("name",axis=1)
data = data.drop("notes",axis=1)
data = data.drop("access",axis=1)
data = data.drop("thumbnail_url",axis=1)
data = data.drop("medium_url",axis=1)
data = data.drop("picture_url",axis=1)
data = data.drop("xl_picture_url",axis=1)
data = data.drop("host_url",axis=1)
data = data.drop("host_thumbnail_url",axis=1)
data = data.drop("host_picture_url",axis=1)
data = data.drop("street",axis=1)
data = data.drop("neighbourhood",axis=1)
data = data.drop("neighbourhood_cleansed",axis=1)
data = data.drop("city",axis=1)
data = data.drop("state",axis=1)
data = data.drop("zipcode",axis=1)
data = data.drop("market",axis=1)
data = data.drop("smart_location",axis=1)
data = data.drop("country_code",axis=1)
data = data.drop("country",axis=1)
data = data.drop("is_location_exact",axis=1)
data = data.drop("property_type",axis=1)
data = data.drop("bed_type",axis=1)
data = data.drop("amenities",axis=1)
data = data.drop("square_feet",axis=1)
data = data.drop("weekly_price",axis=1)
data = data.drop("monthly_price",axis=1)
data = data.drop("availability_30",axis=1)
data = data.drop("availability_60",axis=1)
data = data.drop("availability_90",axis=1)
data = data.drop("calendar_last_scraped",axis=1)
data = data.drop("license",axis=1)
data = data.drop("jurisdiction_names",axis=1)
data = data.drop("first_review",axis=1)
data = data.drop("last_review",axis=1)
data = data.drop("Shampoo",axis=1)
data = data.drop("nearest_attr_lat",axis=1)
data = data.drop("nearest_attr_long",axis=1)
data = data.drop("Dryer",axis=1)
data = data.drop("Doorman",axis=1)
data = data.drop("Essentials",axis=1)
#data = data.drop("translation missing: en.hosting_amenity_50",axis=1)
data = data.drop("Washer",axis=1)
data = data.drop("Washer / Dryer",axis=1)
data = data.drop("First aid kit",axis=1)
data = data.drop("Smoke detector",axis=1)
#data = data.drop("translation missing: en.hosting_amenity_49",axis=1)
data = data.drop("Hangers",axis=1)
data = data.drop("Fire extinguisher",axis=1)
data = data.drop("Iron",axis=1)
data = data.drop("Carbon monoxide detector",axis=1)
data = data.drop("Wireless Internet",axis=1)
data = data.drop("Laptop friendly workspace",axis=1)
data = data.drop("Hot tub",axis=1)
data = data.drop("Dog(s)",axis=1)
data = data.drop("Cat(s)",axis=1)
data = data.drop("Buzzer/wireless intercom",axis=1)
data = data.drop("Hair dryer",axis=1)
data = data.drop("Safety card",axis=1)
data = data.drop("last_scraped",axis=1)
data = data.drop("house_rules",axis=1)
data = data.drop("interaction",axis=1)
data = data.drop("transit",axis=1)
data = data.drop("neighborhood_overview",axis=1)
data = data.drop("experiences_offered",axis=1)
data = data.drop("id",axis=1)
data = data.drop("summary",axis=1)
data = data.drop("space",axis=1)
data = data.drop("description",axis=1)
data = data.drop("host_id",axis=1)
data = data.drop("host_name",axis=1)
data = data.drop("host_about",axis=1)
data = data.drop("latitude",axis=1)
data = data.drop("longitude",axis=1)
data = data.drop("host_neighbourhood",axis=1)
data = data.drop("host_location",axis=1)
data = data.drop("calendar_updated",axis=1)
data = data.drop("host_listings_count",axis=1)
#data = data.drop("Unnamed: 0",axis=1)
data = data.drop("calculated_host_listings_count",axis=1)
data = data.drop("host_acceptance_rate",axis=1)
data.head(1)
data = data.drop("",axis=1)
data.dtypes
data.host_response_rate.tail()
x = pd.to_datetime(data.host_since)
x.head()
today = '2016-04-01'
y = datetime.strptime('2017-04-01', '%Y-%m-%d')
x[16]
y-x
data["host_since_days"] = y-x
data = data.drop("host_since",axis=1)
data = data.drop("host_response_time",axis=1)
data.head(1)
test_host_response = data.host_response_rate
a = test_host_response.map(lambda x: str(x)[:-1])
for i in range(0,len(a)):
print(i)
if a[i] == "na":
continue
if a[i] == "":
continue
else:
a[i] = int(a[i])
a[i] = a[i]/100
data["host_response_rate"] = a
test_superhost = data['host_is_superhost']
a = test_superhost.str.replace('t', '1')
a = a.str.replace('f', '0')
test_superhost.isnull().sum()
data['host_is_superhost'] = a
data = data.drop("host_verifications",axis=1)
data.head()
df_dummies1= pd.get_dummies(data, prefix='neighbourhood', columns=['neighbourhood_group_cleansed'])
df_dummies2= pd.get_dummies(df_dummies1, prefix='roomtype', columns=['room_type'])
test_profilepic = df_dummies2['host_has_profile_pic']
a = test_profilepic.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies2["host_has_profile_pic"] = a
test_host_identity_verified = df_dummies2['host_identity_verified']
a = test_host_identity_verified.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies2["host_identity_verified"] = a
df_dummies2 = df_dummies2.drop("has_availability",axis=1)
df_dummies2 = df_dummies2.drop("requires_license",axis=1)
test_instant_bookable = df_dummies2['instant_bookable']
a = test_instant_bookable.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies2['instant_bookable'] = a
df_dummies2.head()
df_dummies3= pd.get_dummies(df_dummies2, prefix='cancellation_policy', columns=['cancellation_policy'])
test_instant_bookable = df_dummies3['require_guest_profile_picture']
a = test_instant_bookable.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies3['require_guest_profile_picture'] = a
df_dummies3["require_guest_phone_verification"].head()
df_dummies3["require_guest_phone_verification"].isnull().sum()
test_phone = df_dummies3['require_guest_phone_verification']
a = test_phone.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies3['require_guest_phone_verification'] = a
df_dummies3.head()
df_dummies3.nearest_attr_rating.head()
df_dummies3.nearest_attr_rating.isnull().sum()
len(df_dummies3)
data.host_since_days[0]
from datetime import timedelta
data.host_since_days[0].days
data.host_since_days.head()
data.host_since_days[1]
data['host_since_days'] = data['host_since_days'].apply(lambda x: x.days if pd.isnull(x) == False else 0)
#pd.DataFrame.to_csv(df_dummies3, "preprocessed_data.csv")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Drop columns that are not important
Step2: Convert the string date to a date type, then convert it to the number of days between when the user became a host and the date of analysis (4th April 2017)
Step3: Convert host_response_rate from a percentage string to a fraction between 0 and 1; "na" entries are skipped (intended to end up as missing values). A vectorized alternative is sketched after this step list.
Step4: for i in range(0,len(a))
Step5: a = test_superhost.astype("str")
Step6: data['host_is_superhost'] = data['host_is_superhost'].astype("str")
Step7: convert columns with string names to dummy variables
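The element-wise loop referenced in Step 3 can also be written as a vectorized sketch (column names as in the data above; this is an alternative, not the original author's code):
rate = data['host_response_rate'].replace('na', np.nan).str.rstrip('%')
data['host_response_rate'] = pd.to_numeric(rate, errors='coerce') / 100.0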
|
15,464 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from numpy import random
from keras.datasets import mnist # helps in loading the MNIST dataset
from keras.models import Sequential
from keras.layers import Input, Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
from keras.models import load_model
from keras.utils.np_utils import probas_to_classes
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import time
import cv2
#to plot inside the notebook itself
%matplotlib inline
# to be able to reproduce the same randomness
random.seed(42)
# No of rows and columns in the image
img_rows = 28
img_cols = 28
#No of output classes (0-9)
nb_classes = 10
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
print("\n")
# Show the number of rows and columns
print("Row pixels in each image : {}.".format(X_train.shape[1]))
print("Column pixels in each image : {}.".format(X_train.shape[2]))
print("\n")
print("Successfully Downloaded and Loaded the dataset")
# Show the handwritten image
plt.imshow(X_train[0], cmap=cm.binary)
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
print("The data is reshaped to the respective format!" , input_shape)
X_train = X_train.astype('float32') #converted to float so that it can hold floating values between 0-1
X_test = X_test.astype('float32') #converted to float so that it can hold floating values between 0-1
X_train /= 255
X_test /= 255
print("In Integer form : ", y_train,y_test)
Y_train = np_utils.to_categorical(y_train, nb_classes) #converted to their binary forms
Y_test = np_utils.to_categorical(y_test, nb_classes) #converted to their binary forms
print("In Binary form : ", Y_train,Y_test)
print("Preprocessing of Data is Done Successfully...")
pool_size = (2, 2)
kernel_size = (3, 3)
model = Sequential()
model.add(Convolution2D(32, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
print("Successfully built the DNN Model!")
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy']
)
print("Model Compilation completed!")
from IPython.display import SVG
from keras.utils.visualize_util import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
batch_size = 128
nb_epoch=10
start = time.time()
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1,validation_data=(X_test, Y_test))
done = time.time()
elapsed = (done - start)/60
print("Model trained Successfully : Took - {} mins!".format(elapsed))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test Loss Value:', score[0])
print('Test Accuracy Value:', score[1])
pool_size = (2, 2)
kernel_size = (3, 3)
rmodel = Sequential()
rmodel.add(Convolution2D(32, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
rmodel.add(Activation('relu'))
rmodel.add(Convolution2D(64, kernel_size[0], kernel_size[1]))
rmodel.add(Activation('relu'))
rmodel.add(MaxPooling2D(pool_size=pool_size))
rmodel.add(Dropout(0.25))
rmodel.add(Flatten())
rmodel.add(Dense(128))
rmodel.add(Activation('relu'))
rmodel.add(Dropout(0.5))
rmodel.add(Dense(nb_classes))
rmodel.add(Activation('softmax'))
print("Successfully built the Refined DNN Model!")
rmodel.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print("Refined Model Compilation completed!")
from IPython.display import SVG
from keras.utils.visualize_util import model_to_dot
SVG(model_to_dot(rmodel).create(prog='dot', format='svg'))
batch_size = 128
nb_epoch=10
start = time.time()
rmodel.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, validation_data=(X_test, Y_test))
done = time.time()
elapsed = (done - start)/60
print("Refined Model trained Successfully : Took - {} mins!".format(elapsed))
score = rmodel.evaluate(X_test, Y_test, verbose=0)
print('Test Loss Value:', score[0])
print('Test Accuracy Value:', score[1])
import os, os.path
imgs = []
path = "/home/joel/PROJECTS/Udacity-MLND/Udacity-MLND-Capstone-Handwritting-Digit-Recognition/images"
count=0
for f in os.listdir(path):
imgs.append(cv2.imread(os.path.join(path,f)))
count+=1
print("Successfully loaded {} images".format(count))
X_pred = []
for img in imgs:
# Convert the color image to rgb
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Invert black and white color(since in opencv white is considered 255 and black 0 but we need vice versa in order to match with the dataset)
invert_gray = (255-gray)
# Resize the image to 28,28 pixels as per the mnist dataset format
resized = cv2.resize(invert_gray, (28, 28))
# Convert the image format from (28,28) to (28,28,1) in order for the model to recognize
resized = np.asarray(resized)
resized.shape+=1,
#scale the color channel from 0-255 to 0-1
resized/=255
X_pred.append(resized)
X_pred = np.asarray(X_pred)
print(X_pred.shape)
# Predict the output
proba = rmodel.predict(X_pred)
# Convert the predicted output to respective integer number
answers = probas_to_classes(proba)
#plot the image and the predicted number
i=0
for img in imgs:
plt.figure()
plt.imshow(img, cmap=cm.binary)
plt.suptitle("The predicted digit is : " + str(answers[i]))
i+=1
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Dataset
Step2: Visualize an Image sample
Step3: Preprocess the Data
Step4: The MNIST dataset contains grayscale images whose pixel values range from 0 to 255. To reduce the computational load and make training easier, we map the values from 0 - 255 to 0 - 1 by dividing each pixel value by 255 (a toy illustration of Steps 4-5 appears after this step list). Run the below code to do this.
Step5: The target labels y_train, y_test are numerical integers (0-9), so we need to convert them to one-hot (binary) vectors in order for the neural network to map inputs to outputs correctly and efficiently. Run the below code to do this.
Step6: Implementing the Deep Neural Network (DNN)
Step7: Compile the Model
Step8: Visualize the Model
Step9: Train the DNN Model
Step10: In the neural network terminology
Step11: Refinement of the Deep Neural Network(DNN)
Step12: Compile the Refined Model
Step13: Visualize the Refined Model
Step14: Train the Refined DNN Model
Step15: Evaluate the Refined DNN Model
Step16: Test the Refined Model with Real Data
Step17: Preprocess the Loaded Image
Step18: Predict the digit in the Images
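The scaling and one-hot encoding described in Steps 4-5 can be illustrated on toy values (not the dataset itself):
import numpy as np
from keras.utils import np_utils

pixels = np.array([0, 128, 255], dtype='float32') / 255   # scale to [0, 1]
print(pixels)                                             # -> [0.0, ~0.502, 1.0]
print(np_utils.to_categorical([3], 10))                   # one-hot: [[0 0 0 1 0 0 0 0 0 0]]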
|
15,465 | <ASSISTANT_TASK:>
Python Code:
import spvcm.api as spvcm #package API
spvcm.both.Generic # abstract customizable class, ignores rho/lambda, equivalent to MVCM
spvcm.both.MVCM # no spatial effect
spvcm.both.SESE # both spatial error (SE)
spvcm.both.SESMA # response-level SE, region-level spatial moving average
spvcm.both.SMASE # response-level SMA, region-level SE
spvcm.both.SMASMA # both levels SMA
spvcm.upper.SE # response-level uncorrelated, region-level SE
spvcm.upper.SMA # response-level uncorrelated, region-level SMA
spvcm.lower.SE # response-level SE, region-level uncorrelated
spvcm.lower.SMA # response-level SMA, region-level uncorrelated
#seaborn is required for the traceplots
import pysal as ps
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import geopandas as gpd
%matplotlib inline
data = ps.pdio.read_files(ps.examples.get_path('south.shp'))
gdf = gpd.read_file(ps.examples.get_path('south.shp'))
data = data[data.STATE_NAME != 'District of Columbia']
X = data[['UE90', 'PS90', 'RD90']].values
N = X.shape[0]
Z = data.groupby('STATE_NAME')[['FP89', 'GI89']].mean().values
J = Z.shape[0]
Y = data.HR90.values.reshape(-1,1)
W2 = ps.queen_from_shapefile(ps.examples.get_path('us48.shp'),
idVariable='STATE_NAME')
W2 = ps.w_subset(W2, ids=data.STATE_NAME.unique().tolist()) #only keep what's in the data
W1 = ps.queen_from_shapefile(ps.examples.get_path('south.shp'),
idVariable='FIPS')
W1 = ps.w_subset(W1, ids=data.FIPS.tolist()) #again, only keep what's in the data
W1.transform = 'r'
W2.transform = 'r'
membership = data.STATE_NAME.apply(lambda x: W2.id_order.index(x)).values
Delta_frame = pd.get_dummies(data.STATE_NAME)
Delta = Delta_frame.values
vcsma = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=5000,
configs=dict(tuning=1000, adapt_step=1.01))
vcsma.trace.varnames
vcsma.trace.varnames
trace_dataframe = vcsma.trace.to_df()
trace_dataframe.head()
trace_dataframe.mean()
fig, ax = vcsma.trace.plot()
plt.show()
vcsma.trace['Lambda',-4:] #last 4 draws of lambda
vcsma.trace[['Tau2', 'Sigma2'], 0:2] #the first 2 variance parameters
vcsma_p = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=5000, n_jobs=3, #run 3 chains
configs=dict(tuning=500, adapt_step=1.01))
vcsma_p.trace[0, 'Betas', -1] #the last draw of Beta on the first chain.
vcsma_p.trace[1, 'Betas', -1] #the last draw of Beta on the second chain
vcsma_p.trace.plot(burn=1000, thin=10)
plt.suptitle('SMA of Homicide Rate in Southern US Counties', y=0, fontsize=20)
#plt.savefig('trace.png') #saves to a file called "trace.png"
plt.show()
vcsma_p.trace.plot(burn=-100, varnames='Lambda') #A negative burn-in works like negative indexing in Python & R
plt.suptitle('First 100 iterations of $\lambda$', fontsize=20, y=.02)
plt.show() #so this plots Lambda in the first 100 iterations.
df = vcsma.trace.to_df()
df.describe()
vcsma.trace.summarize()
from statsmodels.api import tsa
#if you don't have it, try removing the comment and:
#! pip install statsmodels
plt.plot(tsa.pacf(vcsma.trace['Lambda', -2500:]))
tsa.pacf(df.Lambda)[0:3]
betas = [c for c in df.columns if c.startswith('Beta')]
f,ax = plt.subplots(len(betas), 2, figsize=(10,8))
for i, col in enumerate(betas):
ax[i,0].plot(tsa.acf(df[col].values))
ax[i,1].plot(tsa.pacf(df[col].values)) #the pacf plots take a while
ax[i,0].set_title(col +' (ACF)')
ax[i,1].set_title('(PACF)')
f.tight_layout()
plt.show()
gstats = spvcm.diagnostics.geweke(vcsma, varnames='Tau2') #takes a while
print(gstats)
plt.plot(gstats[0]['Tau2'][:-1])
spvcm.diagnostics.mcse(vcsma, varnames=['Tau2', 'Sigma2'])
spvcm.diagnostics.psrf(vcsma_p, varnames=['Tau2', 'Sigma2'])
spvcm.diagnostics.hpd_interval(vcsma, varnames=['Betas', 'Lambda', 'Sigma2'])
vcsma.trace.map(np.percentile,
varnames=['Lambda', 'Tau2', 'Sigma2'],
#arguments to pass to the function go last
q=[25, 50, 75])
vcsma.trace.to_csv('./model_run.csv')
tr = spvcm.Trace.from_csv('./model_run.csv')
print(tr.varnames)
tr.plot(varnames=['Tau2'])
vcsma.draw()
vcsma.sample(10)
vcsma.cycles
vcsma_p.sample(10)
vcsma_p.cycles
print(vcsma.state.keys())
example = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=250,
extra_traced_params = ['DeltaAlphas'],
configs=dict(tuning=500, adapt_step=1.01))
example.trace.varnames
vcsma.configs
vcsma.configs.Lambda.accepted
vcsma.configs.Lambda.accepted / float(vcsma.cycles)
example = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=500,
configs=dict(tuning=250, adapt_step=1.01,
debug=True))
example.configs.Lambda._cache[-1] #let's only look at the last one
from spvcm.steps import Metropolis, Slice
example = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=500,
configs=dict(tuning=250, adapt_step=1.01,
debug=True, ar_low=.1, ar_hi=.4))
example.configs.Lambda.ar_hi, example.configs.Lambda.ar_low
example_slicer = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=500,
configs=dict(Lambda_method='slice'))
example_slicer.trace.plot(varnames='Lambda')
plt.show()
example_slicer.configs.Lambda.adapt, example_slicer.configs.Lambda.width
vcsese = spvcm.both.SESE(Y, X, W=W1, M=W2, Z=Z, membership=membership,
n_samples=0)
vcsese.configs
vcsese.configs.Lambda.max_tuning = 0
vcsese.configs.Lambda.jump = .25
Delta = vcsese.state.Delta
DeltaZ = Delta.dot(Z)
vcsese.state.Betas_mean0 = ps.spreg.OLS(Y, np.hstack((X, DeltaZ))).betas
vcsese.state.Lambda = -.25
vcsese.state.Betas += np.random.uniform(-10, 10, size=(vcsese.state.p,1))
from scipy import stats
def Lambda_prior(val):
if (val < 0) or (val > 1):
return -np.inf
return np.log(stats.beta.pdf(val, 2,1))
def Rho_prior(val):
if (val > .5) or (val < -.5):
return -np.inf
return np.log(stats.truncnorm.pdf(val, -.5, .5, loc=0, scale=.5))
vcsese.state.LogLambda0 = Lambda_prior
vcsese.state.LogRho0 = Rho_prior
%timeit vcsese.draw()
%time vcsese.sample(100)
vcsese.sample(10)
vcsese.state.Psi_1 #lower-level covariance
vcsese.state.Psi_2 #upper-level covariance
vcsma.state.Psi_2 #upper-level covariance
vcsma.state.Psi_2i
vcsma.state.Psi_1
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Depending on the structure of the model, you need at least
Step2: Reading in the data, we'll extract these values we need from the dataframe.
Step3: Then, we'll construct some queen contiguity weights from the files to show how to run a model.
Step4: With the data, upper-level weights, and lower-level weights, we can construct a membership vector or a dummy data matrix. For now, I'll create the membership vector.
Step5: But, we could also build the dummy variable matrix using pandas, if we have a suitable categorical variable
Step6: Every call to the sampler is of the following form
Step7: This models, spvcm.upper.SMA, is a variance components/varying intercept model with a state-level SMA-correlated error.
Step8: The results and state of the sampler are stored within the vcsma object. I'll step through the most important parts of this object.
Step9: In this case, Lambda is the upper-level moving average parameter, Alphas is the vector of correlated group-level random effects, Tau2 is the upper-level variance, Betas are the marginal effects, and Sigma2 is the lower-level error variance.
Step10: the dataframe will have columns containing the elements of the parameters and each row is a single iteration of the sampler
Step11: You can write this out to a csv or analyze it in memory like a typical pandas dataframes
Step12: The second is a method to plot the traces
Step13: The trace object can be sliced by (chain, parameter, index) tuples, or any subset thereof.
Step14: We only ran a single chain, so the first index is assumed to be zero. You can run more than one chain in parallel, using the builtin python multiprocessing library
Step15: and the chain plotting works also for the multi-chain traces. In addition, there are quite a few traceplot options, and all the plots are returned by the methods as matplotlib objects, so they can also be saved using plt.savefig().
Step16: To get stuff like posterior quantiles, you can use the attendant pandas dataframe functionality, like describe.
Step17: There is also a trace.summarize function that will compute various things contained in spvcm.diagnostics on the chain. It takes a while for large chains, because the statsmodels.tsa.AR estimator is much slower than the ar estimator in R. If you have rpy2 installed and CODA installed in your R environment, I attempt to use R directly.
Step18: So, 5000 iterations, but many parameters have an effective sample size that's much less than this. There's debate about whether it's necesasry to thin these samples in accordance with the effective size, and I think you should thin your sample to the effective size and see if it affects your HPD/Standard Errorrs.
Step19: For example, a plot of the partial autocorrelation in $\lambda$, the upper-level spatial moving average parameter, over the last half of the chain is
Step20: So, the chain is close-to-first order
Step21: We could do this for many parameters, too. An Autocorrelation/Partial Autocorrelation plot can be made of the marginal effects by
Step22: As far as the builtin diagnostics for convergence and simulation quality, the diagnostics module exposes a few things
Step23: Typically, this means the chain is converged at the given "bin" count if the line stays within $\pm2$. The geweke statistic is a test of differences in means between the given chunk of the chain and the remaining chain. If it's outside of +/- 2 in the early part of the chain, you should discard observations early in the chain. If you get extreme values of these statistics throughout, you need to keep running the chain.
Step24: We can also compute Monte Carlo Standard Errors like in the mcse R package, which represent the intrinsic error contained in the estimate
Step25: Another handy statistic is the Partial Scale Reduction factor, which measures of how likely a set of chains run in parallel have converged to the same stationary distribution. It provides the difference in variance between between chains vs. within chains.
Step26: Highest posterior density intervals provide a kind of interval estimate for parameters in Bayesian models
Step27: Sometimes, you want to apply arbitrary functions to each parameter trace. To do this, I've written a map function that works like the python builtin map. For example, if you wanted to get arbitrary percentiles from the chain
Step28: In addition, you can pop the trace results pretty simply to a .csv file and analyze it elsewhere, like if you want to use use the coda Bayesian Diagnostics package in R.
Step29: And, you can even load traces from csvs
Step30: Working with models
Step31: And sample steps forward an arbitrary number of times
Step32: At this point, we did 5000 initial samples and 11 extra samples. Thus
Step33: Parallel models can suspend/resume sampling too
Step34: Under the hood, it's the draw method that actually ends up calling one run of model._iteration, which is where the actual statistical code lives. Then, it updates all model.traced_params by adding their current value in model.state to model.trace. In addition, model._finalize is called the first time sampling is run, which computes some of the constants & derived quantities that save computing time.
Step35: If you want to track how something (maybe a hyperparameter) changes over sampling, you can pass extra_traced_params to the model declaration
Step36: configs
Step37: Since vcsma is an upper-level-only model, the Rho config is skipped. But, we can look at the Lambda config. The number of accepted lambda draws is contained in
Step38: so, the acceptance rate is
Step39: Also, if you want to get verbose output from the metropolis sampler, there is a "debug" flag
Step40: Which stores the information about each iteration in a list, accessible from model.configs.<parameter>._cache
Step41: Configuration of the MCMC steps is done using the config options dictionary, like done in spBayes in R. The actual configuration classes exist in spvcm.steps
Step42: Most of the common options are
Step43: Working with models
Step44: This sets up a two-level spatial error model with the default uninformative configuration. This means the prior precisions are all I * .001*, prior means are all 0, spatial parameters are set to -1/(n-1), and prior scale factors are set arbitrarily.
Step45: So, for example, if we wanted to turn off adaptation in the upper-level parameter, and fix the Metrpolis jump variance to .25
Step46: Priors
Step47: Starting Values
Step48: Sometimes, it's suggested that you start the beta vector randomly, rather than at zero. For the parallel sampling, the model starting values are adjusted to induce overdispersion in the start values.
Step49: Spatial Priors
Step50: And then assigning to their symbols, LogLambda0 and LogRho0 in the state
Step51: Performance
Step52: To make it easy to work with the model, you can interrupt and resume sampling using keyboard interrupts (ctrl-c or the stop button in the notebook).
Step53: Under the Hood
|
15,466 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
from pandas import Series, DataFrame
series_1 = Series([-2, -1, 0, 1, 2, 3, 4, 5])
series_1
series_1.values
series_1.index
series_2 = Series([1, 2, 3], index=['a', 'b', 'c'])
series_2
series_2.index
series_2['a']
series_2[['a', 'b']]
series_2[series_2 > 1]
series_2 * 2
numbers_1 = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
series_3 = Series(numbers_1)
series_3
numbers_2_index = ['a', 'b', 'c', 'd']
numbers_2 = {'a': 1, 'b': 2, 'c': 3}
series_4 = Series(numbers_2, index=numbers_2_index)
series_4
pd.isnull(series_4) # same as series_4.isnull()
pd.notnull(series_4)
series_3 + series_4
series_4.name = 'numbers'
series_4.index.name = 'letter'
series_4
series_4.index = ['1', '2', '3', 'x'] # update index
series_4
data_1 = {
'pet': ['Toffee', 'Candy', 'Cake', 'Sussy'],
'age': [3, 1, 2, 4],
}
frame_1 = DataFrame(data_1)
frame_1
frame_1['age'] # dict-like notation
frame_1.pet # attribute
frame_2 = DataFrame(data_1, columns=['pet', 'age', 'toy'], index=['one', 'two', 'three', 'four'])
frame_2
frame_2.ix['four'] # row
frame_2.toy = 'bone'
frame_2
frame_2.toy = Series(['bone', None, 'bone', 'bone'], index=['one', 'two', 'three', 'four'])
frame_2
frame_2['likes_bone_toy'] = frame_2.toy == 'bone'
frame_2
frame_2.T # transpose
frame_2.columns.name = 'Number'
frame_2.index.name = 'Pet'
frame_2
frame_2.values
frame_2.index
'one' in frame_2.index
frame_2.reindex(['four', 'three', 'two', 'one'], fill_value=0)
fill_numbers = Series(['blue', 'yellow', 'green'], index=[0, 4, 8])
fill_numbers.reindex(range(12), method='ffill')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using a dict for Series
Step2: DataFrame
Step3: Interpolation / filling of values when reindex (Series)
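Step 3 refers to the forward-fill shown in the last cell; for contrast, a small sketch of the backward-fill variant on the same kind of Series is:
s = Series(['blue', 'yellow', 'green'], index=[0, 4, 8])
print(s.reindex(range(12), method='ffill'))   # propagate the last valid value forward
print(s.reindex(range(12), method='bfill'))   # or pull the next valid value backward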
|
15,467 | <ASSISTANT_TASK:>
Python Code:
# HTTP Client for Python
import requests
# Cytoscape port number
PORT_NUMBER = 1234
BASE_URL = "https://raw.githubusercontent.com/ls-cwi/eXamine/master/data/"
# The Base path for the CyRest API
BASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'
#Helper command to call a command via HTTP POST
def executeRestCommand(namespace="", command="", args={}):
postString = BASE + "commands/" + namespace + "/" + command
res = requests.post(postString,json=args)
return res
# First we import our demo network
executeRestCommand("network", "import url", {"indexColumnSourceInteraction":"1",
"indexColumnTargetInteraction":"2",
"url": BASE_URL + "edges_karate.gml"})
# Next we import node annotations
executeRestCommand("table", "import url",
{"firstRowAsColumnNames":"true",
"keyColumnIndex" : "1",
"startLoadRow" : "1",
"dataTypeList":"s,sl",
"url": BASE_URL + "nodes_karate.txt"})
executeRestCommand("network", "select", {"nodeList" : "all"})
executeRestCommand("examine", "generate groups",
{"selectedGroupColumns" : "Community"})
# Adjust the visualization settings
executeRestCommand("examine", "update settings",
{"labelColumn" : "label",
"URL" : "label",
"showScore" : "false",
"selectedGroupColumns" : "Community"})
# Select groups for demarcation in the visualization
executeRestCommand("examine", "select groups",
{"selectedGroups":"A,B,C,D,E,F"})
# Launch the interactive eXamine visualization
executeRestCommand("examine", "interact", {})
# Export a graphic instead of interacting with it
# use absolute path; writes in Cytoscape directory if not changed
executeRestCommand("examine", "export", {"path": "test.svg"})
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing network and node-specific annotation
Step2: We then import node-specific annotation directly from the eXamine repository on github. The imported file contains set membership information for each node. Note that it is important to ensure that set-membership information is imported as List of String, as indicated by sl. Additionaly, note that the default list separator is a pipe character.
Step3: Set-based visualization using eXamine
Step4: We then select six groups.
Step5: There are two options
Step6: The command below launches the eXamine window. If this window is blank, simply resize the window to force a redraw of the scene.
|
15,468 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import sys
import warnings
warnings.filterwarnings("ignore")
import torch
import numpy as np
from tqdm import tqdm_notebook, tqdm
sys.path.append('../..')
from batchflow import Notifier, Pipeline, Dataset, I, W, V, L, B
from batchflow.monitor import *
# Set GPU
%env CUDA_VISIBLE_DEVICES=0
DEVICE = torch.device('cuda:0')
torch.ones((1, 1, 1), device=DEVICE)
BAR = 't' # can be changed to 'n' to use Jupyter Notebook progress bar
for item in Notifier(BAR)(range(5)):
print(item)
%time for item in Notifier('t')(range(100000)): pass
%time for item in tqdm(range(100000)): pass
%time for item in Notifier('n')(range(100000)): pass
%time for item in tqdm_notebook(range(100000)): pass
with monitor_cpu(frequency=0.1) as cpu_monitor:
for _ in Notifier(BAR)(range(10)):
_ = np.random.random((1000, 10000))
cpu_monitor.visualize()
with monitor_resource(['uss', 'gpu', 'gpu_memory'], frequency=0.1) as (uss_monitor, gpu_monitor, gpum_monitor):
for _ in Notifier(BAR)(range(42)):
cpu_data = np.random.random((1000, 10000))
gpu_data = torch.ones((256, 512, 2096), device=DEVICE)
gpu_op = torch.mvlgamma(torch.erfinv(gpu_data), 1) # intense operation
torch.cuda.empty_cache()
uss_monitor.visualize()
gpu_monitor.visualize()
gpum_monitor.visualize()
notifier = Notifier(BAR, monitors=['memory', 'cpu'])
for _ in notifier(range(100)):
_ = np.random.random((1000, 100))
notifier.visualize()
pipeline = (
Pipeline()
.init_variable('loss_history', [])
.init_variable('image')
.update(V('loss_history', mode='a'), 100 * 2 ** (-I()))
.update(V('image'), L(np.random.random)((30, 30)))
) << Dataset(10)
pipeline.reset('all')
_ = pipeline.run(1, n_iters=10, notifier=BAR)
pipeline.reset('all')
_ = pipeline.run(1, n_iters=10, notifier=Notifier(BAR, monitors='loss_history'))
pipeline.notifier.visualize()
pipeline.reset('all')
_ = pipeline.run(1, n_iters=50, notifier=Notifier(BAR, monitors=['cpu', 'loss_history'], file='notifications.txt'))
pipeline.notifier.visualize()
!head notifications.txt -n 13
pipeline.reset('all')
_ = pipeline.run(1, n_iters=10, notifier=Notifier('n', graphs=['memory', 'loss_history']))
pipeline.reset('all')
_ = pipeline.run(1, n_iters=100, notifier=Notifier('n', graphs=['memory', 'loss_history', 'image'], frequency=10))
def custom_plotter(ax=None, container=None, **kwargs):
Zero-out center area of the image, change plot parameters.
container['data'][10:20, 10:20] = 0
ax.imshow(container['data'])
ax.set_title(container['name'], fontsize=18)
ax.set_xlabel('axis one', fontsize=18)
ax.set_ylabel('axis two', fontsize=18)
pipeline.reset('all')
_ = pipeline.run(1, n_iters=100,
notifier=Notifier('n',
graphs=[{'source': 'memory',
'name': 'my custom monitor'},
{'source': 'image',
'name': 'amazing plot',
'plot_function': custom_plotter}],
frequency=10)
)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notifier
Step2: As some of the loops run hundreds of iterations per second, we should take special care with the speed of updating the progress bar
Step3: Monitors
Step4: Under the hood, Monitor creates a separate process that checks the state of a resource every frequency seconds and can fetch the collected data on demand.
Step5: This feature is immensely helpful during both research and deployment, so we included it in the Notifier itself
Step6: Pipeline
Step7: Vanilla pipeline
Step8: Track pipeline variables
Step9: Obviously, we can use the same resource monitors as before by passing additional items to monitors. There is also a file argument that allows us to log the progress to external storage
Step10: Live plots
Step11: It also works with images. As rendering plots can take some time, we redraw only once every 10 iterations, which we achieve by using the frequency parameter
Step13: Advanced usage of Notifier
|
15,469 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
import numpy as np
movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
movieProperties.head()
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()
movieDict = {}
with open(r'e:/sundog-consult/udemy/datascience/ml-100k/u.item') as f:
temp = ''
for line in f:
fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
        genres = list(map(int, genres))  # wrap in list() so the genre vector is reusable under Python 3
movieDict[movieID] = (name, genres, movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
movieDict[1]
from scipy import spatial
def ComputeDistance(a, b):
genresA = a[1]
genresB = b[1]
genreDistance = spatial.distance.cosine(genresA, genresB)
popularityA = a[2]
popularityB = b[2]
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
ComputeDistance(movieDict[2], movieDict[4])
print(movieDict[2])
print(movieDict[4])
import operator
def getNeighbors(movieID, K):
distances = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distances.append((movie, dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distances[x][0])
return neighbors
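# Optional sketch (not part of the original lesson): the same distances could be computed
# in one shot with scipy/numpy instead of a Python loop. This assumes each movieDict value
# is laid out as (name, genre list, normalized popularity, mean rating), as built above.
from scipy.spatial.distance import cdist
import numpy as np
movie_ids = list(movieDict.keys())
genre_matrix = np.array([movieDict[m][1] for m in movie_ids])
popularity = np.array([movieDict[m][2] for m in movie_ids])
pairwise_dist = cdist(genre_matrix, genre_matrix, metric='cosine') + np.abs(popularity[:, None] - popularity[None, :])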
K = 10
avgRating = 0
neighbors = getNeighbors(1, K)
for neighbor in neighbors:
avgRating += movieDict[neighbor][3]
    print(movieDict[neighbor][0] + " " + str(movieDict[neighbor][3]))
avgRating /= float(K)
avgRating
movieDict[1]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, we'll group everything by movie ID, and compute the total number of ratings (each movie's popularity) and the average rating for every movie
Step2: The raw number of ratings isn't very useful for computing distances between movies, so we'll create a new DataFrame that contains the normalized number of ratings. So, a value of 0 means nobody rated it, and a value of 1 will mean it's the most popular movie there is.
Step3: Now, let's get the genre information from the u.item file. The way this works is there are 19 fields, each corresponding to a specific genre - a value of '0' means it is not in that genre, and '1' means it is in that genre. A movie may have more than one genre associated with it.
Step4: For example, here's the record we end up with for movie ID 1, "Toy Story"
Step5: Now let's define a function that computes the "distance" between two movies based on how similar their genres are, and how similar their popularity is. Just to make sure it works, we'll compute the distance between movie ID's 2 and 4
Step6: Remember the higher the distance, the less similar the movies are. Let's check what movies 2 and 4 actually are - and confirm they're not really all that similar
Step7: Now, we just need a little code to compute the distance between some given test movie (Toy Story, in this example) and all of the movies in our data set. We then sort those by distance, and print out the K nearest neighbors
Step8: While we were at it, we computed the average rating of the 10 nearest neighbors to Toy Story
Step9: How does this compare to Toy Story's actual average rating?
|
15,470 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for index,review in reviews.iterrows():
for word in review[0].split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {key:idx for (idx,key) in enumerate(vocab)} ## create the word-to-index dictionary
def text_to_vector(text):
word_vector = np.zeros(len(word2idx),dtype=int)
for word in text.split(' '):
if word2idx.get(word,None) != None:
word_vector[word2idx[word]] += 1
return word_vector
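# Optional sketch (not part of the original exercise): scikit-learn can build the same
# bag-of-words counts, restricted to our 10000-word vocabulary (word2idx defined above).
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(vocabulary=word2idx, token_pattern=r"\S+")
demo_counts = vectorizer.transform(['the movie was terrible but the ending was great'])
print(demo_counts.shape)  # (1, 10000) sparse matrix of word counts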
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Start the network graph
#Input
net = tflearn.input_data([None, 10000])
#Hidden layers 250, 10
net = tflearn.fully_connected(net, 400, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='ReLU')
#output
net = tflearn.fully_connected(net, 2, activation='softmax')
#Training specifications
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# End of network graph
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
15,471 | <ASSISTANT_TASK:>
Python Code:
from oemof.solph import EnergySystem
import pandas as pd
# initialize energy system
energysystem = EnergySystem(timeindex=pd.date_range('1/1/2016',
periods=168,
freq='H'))
# import example data with scaled demands and feedin timeseries of renewables
# as dataframe
data = pd.read_csv("data/example_data.csv", sep=",")
#print(data.demand_el[0:10])
#print(data.keys())
from oemof.solph import Bus, Flow, Sink, Source, LinearTransformer
### BUS
# create electricity bus
b_el = Bus(label="b_el")
# add excess sink to help avoid infeasible problems
Sink(label="excess_el",
inputs={b_el: Flow()})
Source(label="shortage_el",
outputs={b_el: Flow(variable_costs=1000)})
### DEMAND
# add electricity demand
Sink(label="demand_el",
inputs={b_el: Flow(nominal_value=85,
actual_value=data['demand_el'],
fixed=True)})
### SUPPLY
# add wind and pv feedin
Source(label="wind",
outputs={b_el: Flow(actual_value=data['wind'],
nominal_value=60,
fixed=True)});
Source(label="pv",
outputs={b_el: Flow(actual_value=data['pv'],
nominal_value=200,
fixed=True)});
from oemof.solph import OperationalModel
import oemof.outputlib
import matplotlib.pyplot as plt
def optimize(energysystem):
### optimize
# create operational model
om = OperationalModel(es=energysystem)
# solve using the cbc solver
om.solve(solver='cbc',
solve_kwargs={'tee': False})
# save LP-file
om.write('sector_coupling.lp', io_options={'symbolic_solver_labels': True})
# write back results from optimization object to energysystem
om.results();
def plot(energysystem, bus_label, bus_type):
# define colors
cdict = {'wind': '#00bfff', 'pv': '#ffd700', 'pp_gas': '#8b1a1a',
'pp_chp_extraction': '#838b8b', 'excess_el': '#8b7355',
'shortage_el': '#000000', 'heater_rod': 'darkblue',
'pp_chp': 'green', 'demand_el': 'lightgreen', 'demand_th': '#ce4aff',
'heat_pump': 'red', 'leaving_bev': 'darkred', 'bev_storage': 'orange'}
# create multiindex dataframe with result values
esplot = oemof.outputlib.DataFramePlot(energy_system=energysystem)
# select input results of electrical bus (i.e. power delivered by plants)
esplot.slice_unstacked(bus_label=bus_label, type=bus_type,
date_from='2016-01-03 00:00:00',
date_to='2016-01-06 00:00:00')
# set colorlist for esplot
colorlist = esplot.color_from_dict(cdict)
# set plot attributes
esplot.plot(color=colorlist, title="January 2016", stacked=True, width=1,
kind='bar')
esplot.ax.set_ylabel('Power')
esplot.ax.set_xlabel('Date')
esplot.set_datetime_ticks(tick_distance=24, date_format='%d-%m')
esplot.outside_legend(reverse=True)
plt.show()
optimize(energysystem)
plot(energysystem, "b_el", "to_bus")
# add gas bus
b_gas = Bus(label="b_gas",
balanced=False)
# add gas power plant
LinearTransformer(label="pp_gas",
inputs={b_gas: Flow(summed_max_flow=200)},
outputs={b_el: Flow(nominal_value=40,
variable_costs=40)},
conversion_factors={b_el: 0.50});
optimize(energysystem)
plot(energysystem, "b_el", "to_bus")
# add heat bus
b_heat = Bus(label="b_heat",
balanced=True)
# add heat demand
Sink(label="demand_th",
inputs={b_heat: Flow(nominal_value=60,
actual_value=data['demand_th'],
fixed=True)})
# add heater rod
LinearTransformer(label="heater_rod",
inputs={b_el: Flow()},
outputs={b_heat: Flow(variable_costs=10)},
conversion_factors={b_heat: 0.98});
optimize(energysystem)
plot(energysystem, "b_heat", "to_bus")
# The COP can be calculated beforehand, assuming an (infinitely large) heat reservoir
# whose temperature stays constant; here we simply use a random timeseries for the COP
import numpy as np
COP = np.random.uniform(low=3.0, high=5.0, size=(168,))
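# Optional sketch (not part of the original example): instead of a purely random series, the
# COP could be estimated from supply/source temperatures via a Carnot-based quality grade.
# The temperatures and the 0.4 quality grade below are illustrative assumptions only.
T_supply = 40.0 + 273.15                                             # supply temperature [K]
T_source = np.random.uniform(low=0.0, high=10.0, size=168) + 273.15  # source temperature [K]
COP_carnot = 0.4 * T_supply / (T_supply - T_source)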
# add heater rod
#LinearTransformer(label="heater_rod",
# inputs={b_el: Flow()},
# outputs={b_heat: Flow(variable_costs=10)},
# conversion_factors={b_heat: 0.98});
# add heat pump
LinearTransformer(label="heat_pump",
inputs={b_el: Flow()},
outputs={b_heat: Flow(nominal_value=20,
variable_costs=10)},
conversion_factors={b_heat: COP});
optimize(energysystem)
plot(energysystem, "b_heat", "to_bus")
# add CHP with fixed ratio of heat and power (back-pressure turbine)
LinearTransformer(label='pp_chp',
inputs={b_gas: Flow()},
outputs={b_el: Flow(nominal_value=30,
variable_costs=42),
b_heat: Flow(nominal_value=40)},
conversion_factors={b_el: 0.3,
b_heat: 0.4});
from oemof.solph import VariableFractionTransformer
# add CHP with variable ratio of heat and power (extraction turbine)
VariableFractionTransformer(label='pp_chp_extraction',
inputs={b_gas: Flow()},
outputs={b_el: Flow(nominal_value=30,
variable_costs=42),
b_heat: Flow(nominal_value=40)},
conversion_factors={b_el: 0.3,
b_heat: 0.4},
conversion_factor_single_flow={b_el: 0.5});
optimize(energysystem)
plot(energysystem, "b_el", "to_bus")
from oemof.solph import Storage
charging_power = 20
bev_battery_cap = 50
# add mobility bus
b_bev = Bus(label="b_bev",
balanced=True)
# add transformer to transport electricity from grid to mobility sector
LinearTransformer(label="transport_el_bev",
inputs={b_el: Flow()},
outputs={b_bev: Flow(variable_costs=10,
nominal_value=charging_power,
max=data['bev_charging_power'])},
conversion_factors={b_bev: 1.0})
# add BEV storage
Storage(label='bev_storage',
inputs={b_bev: Flow()},
outputs={b_bev: Flow()},
nominal_capacity=bev_battery_cap,
capacity_min=data['bev_cap_min'],
capacity_max=data['bev_cap_max'],
capacity_loss=0.00,
initial_capacity=None,
inflow_conversion_factor=1.0,
outflow_conversion_factor=1.0,
nominal_input_capacity_ratio=1.0,
nominal_output_capacity_ratio=1.0,
fixed_costs=35)
# add sink for leaving vehicles
Sink(label="leaving_bev",
inputs={b_bev: Flow(nominal_value=bev_battery_cap,
actual_value=data['bev_sink'],
fixed=True)})
# add source for returning vehicles
Source(label="returning_bev",
outputs={b_bev: Flow(nominal_value=bev_battery_cap,
actual_value=data['bev_source'],
fixed=True)});
optimize(energysystem)
plot(energysystem, "b_bev", "from_bus")
plot(energysystem, "b_el", "to_bus")
plot(energysystem, "b_el", "from_bus")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import input data
Step2: Add entities to energy system
Step3: Optimize energy system and plot results
Step4: Adding the gas sector
Step5: Adding the heat sector
Step6: Adding a heat pump
Step7: Adding a combined heat and power plant
Step8: Adding the mobility sector
|
15,472 | <ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
fig, ax = plt.subplots()
ax.hist( # complete
ax.set_xlabel('chinups', fontsize=14)
ax.set_ylabel('N', fontsize=14)
fig.tight_layout()
fig, ax = plt.subplots()
ax.hist(# complete
ax.hist(# complete
ax.set_xlabel('chinups', fontsize=14)
ax.set_ylabel('N', fontsize=14)
fig.tight_layout()
bins = np.append(# complete
fig, ax = plt.subplots()
ax.hist( # complete
ax.set_xlabel('chinups', fontsize=14)
ax.set_ylabel('N', fontsize=14)
fig.tight_layout()
fig, ax = plt.subplots()
ax.hist(chinups, histtype = 'step')
# this is the code for the rug plot
ax.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4)
ax.set_xlabel('chinups', fontsize=14)
ax.set_ylabel('N', fontsize=14)
fig.tight_layout()
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
grid = # complete
PDFtophat = kde_sklearn( # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_xlabel('chinups', fontsize=14)
ax.set_ylabel('PDF', fontsize=14)
fig.tight_layout()
PDFtophat1 = # complete
PDFtophat5 = # complete
fig, ax = plt.subplots()
ax.plot(# complete
ax.plot(# complete
ax.set_xlabel('chinups', fontsize=14)
ax.set_ylabel('PDF', fontsize=14)
fig.tight_layout()
ax.legend()
PDFgaussian = # complete
PDFepanechnikov = # complete
fig, ax = plt.subplots()
ax.plot(# complete
ax.plot(# complete
ax.legend(loc = 2)
ax.set_xlabel('chinups', fontsize=14)
ax.set_ylabel('PDF', fontsize=14)
fig.tight_layout()
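# Optional sketch (not required for the problem): for roughly Gaussian data, Silverman's
# rule of thumb gives a quick bandwidth estimate, h = 1.06 * sigma * n**(-1/5).
import numpy as np
silverman_bw = 1.06 * np.std(chinups) * len(chinups) ** (-1 / 5)
print("Silverman rule-of-thumb bandwidth: {:.2f}".format(silverman_bw))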
x = np.arange(0, 6*np.pi, 0.1)
y = np.cos(x)
fig, ax=plt.subplots()
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
fig.tight_layout()
import seaborn as sns
fig, ax = plt.subplots()
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
fig.tight_layout()
sns.set_style(# complete
# complete
# complete
# complete
# default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette)
# set palette to colorblind
sns.set_palette("colorblind")
current_palette = sns.color_palette()
sns.palplot(current_palette)
iris = sns.load_dataset("iris")
iris
# note - kde, and rug all set to True, set to False to turn them off
with sns.axes_style("dark"):
sns.displot(iris['petal_length'], bins=20,
kde=True, rug=True)
plt.tight_layout()
fig, ax = plt.subplots()
ax.scatter( # complete
ax.set_xlabel("petal length (cm)")
ax.set_ylabel("petal width (cm)")
fig.tight_layout()
np.random.seed(2016)
xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)
yexample = np.random.normal(loc = -0.2, scale = 0.9, size = 10000)
fig, ax = plt.subplots()
ax.scatter(xexample, yexample)
ax.set_xlabel('X', fontsize=14)
ax.set_ylabel('Y', fontsize=14)
fig.tight_layout()
# hexbin w/ bins = "log" returns the log of counts/bin
# mincnt = 1 displays only hexpix with at least 1 source present
fig, ax = plt.subplots()
cax = ax.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1)
ax.set_xlabel('X', fontsize=14)
ax.set_ylabel('Y', fontsize=14)
fig.tight_layout()
plt.colorbar(cax)
fig, ax = plt.subplots()
sns.kdeplot(x=xexample, y=yexample, shade=False)
ax.set_xlabel('X', fontsize=14)
ax.set_ylabel('Y', fontsize=14)
fig.tight_layout()
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])
plt.tight_layout()
sns.jointplot(# complete
plt.tight_layout()
sns.pairplot(iris[["sepal_length", "sepal_width",
"petal_length", "petal_width"]])
plt.tight_layout()
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde')
g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3)
g.add_legend()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1a
Step2: Something is wrong here - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. This is intuitively incorrect; we will now adjust the bins in the histogram.
Step3: These small changes significantly change the estimator for the PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups.
Step4: Ending the lie
Step5: Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect.
Step6: Problem 1e
Step7: In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the the width (i.e. kernel) of the blocks.
Step8: It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now, we examine the difference (relative to histograms) upon changing the width (i.e. kernel) of the blocks.
Step9: So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be Gaussian (or close to Gaussian), Silverman's rule of thumb can be used
Step10: Seaborn
Step11: These plots look identical, but it is possible to change the style with seaborn.
Step12: The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial.
Step13: which we will now change to colorblind, which is clearer to those who are colorblind.
Step14: Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
Step15: Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the displot function.
Step16: Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important. Fortunately, seaborn makes it very easy to produce handy summary plots.
Step17: Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below
Step18: Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above).
Step19: While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the bin-edge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function.
Step20: This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set.
Step21: But! Histograms and scatter plots can be problematic as we have discussed many times before.
Step22: That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.)
Step23: For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_type = 'kde'.
Step24: Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
|
15,473 | <ASSISTANT_TASK:>
Python Code:
from osgeo import gdal
import numpy as np
betasso_dem_name = '/Users/gtucker/Dev/dem_analysis_with_gdal/czo_1m_bt1.img'
geo = gdal.Open(betasso_dem_name)
zb = geo.ReadAsArray()
zb[np.where(zb < 0.0)] = 0.0  # call np.where once; zero out the invalid (negative) edge cells
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(zb, vmin=1600.0, vmax=2350.0)
np.amax(zb)
def slope_gradient(z):
    Calculate the absolute slope gradient of an elevation array.
x, y = np.gradient(z)
#slope = (np.pi/2. - np.arctan(np.sqrt(x*x + y*y)))
slope = np.sqrt(x*x + y*y)
return slope
sb = slope_gradient(zb)
plt.imshow(sb, vmin=0.0, vmax=1.0, cmap='pink')
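# Optional sketch (not in the original notebook): convert the dimensionless gradient to a
# slope angle in degrees; this assumes the 1 m grid spacing of the Betasso DEM, so no extra
# cell-size scaling is needed.
slope_deg = np.degrees(np.arctan(sb))
plt.figure()
plt.imshow(slope_deg, vmin=0.0, vmax=45.0, cmap='pink')
plt.colorbar(label='slope (degrees)')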
def aspect(z):
Calculate aspect from DEM.
x, y = np.gradient(z)
return np.arctan2(-x, y)
ab = aspect(zb)
plt.imshow(ab)
abdeg = (180./np.pi)*ab # convert to degrees
n, bins, patches = plt.hist(abdeg.flatten(), 50, normed=1, facecolor='green', alpha=0.75)
def hillshade(z, azimuth=315.0, angle_altitude=45.0):
Generate a hillshade image from DEM.
Notes: adapted from example on GeoExamples blog,
published March 24, 2014, by Roger Veciana i Rovira.
x, y = np.gradient(z)
slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))
aspect = np.arctan2(-x, y)
azimuthrad = azimuth*np.pi / 180.
altituderad = angle_altitude*np.pi / 180.
shaded = np.sin(altituderad) * np.sin(slope)\
+ np.cos(altituderad) * np.cos(slope)\
* np.cos(azimuthrad - aspect)
return 255*(shaded + 1)/2
hb = hillshade(zb)
plt.imshow(hb, cmap='gray')
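# Optional sketch (not part of the original notebook): drape a semi-transparent elevation
# colormap over the hillshade for a more readable relief image.
plt.figure()
plt.imshow(hb, cmap='gray')
plt.imshow(zb, cmap='terrain', vmin=1600.0, vmax=2350.0, alpha=0.4)
plt.colorbar(label='elevation (m)')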
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open and read data from the DEM
Step2: If the previous two lines worked, zb should be a 2D numpy array that contains the DEM elevations. There are some cells along the edge of the grid with invalid data. Let's set their elevations to zero, using the numpy where function
Step3: Now let's make a color image of the data. To do this, we'll need Pylab and a little "magic".
Step4: Questions
Step6: Make a slope map
Step7: Let's see what it looks like
Step9: Questions
Step10: We can make a histogram (frequency diagram) of aspect. Here 0 degrees is east-facing, 90 is north-facing, 180 is west-facing, and -90 is south-facing.
Step12: Questions
|
15,474 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def get_ids(input_text, vocab_to_int):
Returns a list of word IDs for each word in the input
:param input_text: Input string
:param vocab_to_int: A mapping of word to wordID.
:return: A list of [IDs] for each sentence in the input.
return [[vocab_to_int[word] for word in sentence.split()] for sentence in input_text]
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = [sentence for sentence in source_text.split('\n')]
target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_id_text = get_ids(source_sentences, source_vocab_to_int)
target_id_text = get_ids(target_sentences, target_vocab_to_int)
return source_id_text, target_id_text
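# Optional sanity check with tiny, hypothetical vocabularies (not from the dataset):
# the returned target sequence should end with the <EOS> id.
_src_vocab = {'new': 0, 'jersey': 1}
_tgt_vocab = {'<EOS>': 0, 'new': 1, 'jersey': 2}
print(text_to_ids('new jersey', 'new jersey', _src_vocab, _tgt_vocab))
# expected: ([[0, 1]], [[1, 2, 0]])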
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_seq_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_seq_length = tf.reduce_max(target_seq_length)
source_seq_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs, targets, learning_rate, keep_prob, target_seq_length, max_target_seq_length, source_seq_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), end], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed_encoder_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# Create an LSTM cell wrapped in a DropOutWrapper
def create_lstm_cell(rnn_size):
encoder_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=42))
return tf.contrib.rnn.DropoutWrapper(encoder_cell, output_keep_prob=keep_prob)
encoder_cell = tf.contrib.rnn.MultiRNNCell([create_lstm_cell(rnn_size) for _ in range(num_layers)])
return tf.nn.dynamic_rnn(encoder_cell,
embed_encoder_input,
sequence_length=source_sequence_length,
dtype=tf.float32)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
# Try if a dropout has to be added here for the Decoder RNN cell.
helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
output = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True,
maximum_iterations=max_summary_length)[0]
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_token = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
[batch_size], name='start_token')
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_token,
end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
decoder_output = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
return decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
decoded_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
decoded_embed_input = tf.nn.embedding_lookup(decoded_embeddings, dec_input)
# Create an LSTM cell wrapped in a DropOutWrapper
def create_lstm_cell(rnn_size):
encoder_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=42))
return encoder_cell
decoded_cell = tf.contrib.rnn.MultiRNNCell([create_lstm_cell(rnn_size) for _ in range(num_layers)])
output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_logits = decoding_layer_train(encoder_state, decoded_cell, decoded_embed_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
inference_logits = decoding_layer_infer(encoder_state, decoded_cell, decoded_embeddings,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
max_target_sequence_length, len(target_vocab_to_int),
output_layer, batch_size, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
_, encoding_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
decoding_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_dec_output, inference_dec_output = decoding_layer(decoding_input, encoding_state,
target_sequence_length,
max_target_sentence_length, rnn_size,
num_layers, target_vocab_to_int,
target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return training_dec_output, inference_dec_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 8
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.6
display_step = 100
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
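# Note (added for clarity): both arrays are zero-padded to a common length before the
# element-wise comparison, so padded positions also count as matches and the reported
# accuracy is slightly optimistic for short sentences.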
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
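# Optional sanity check with a hypothetical mini-vocabulary: unknown words fall back to <UNK>.
_demo_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2}
print(sentence_to_seq('He saw Godzilla', _demo_vocab))  # -> [1, 2, 0]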
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step7: Implement Preprocessing Function
Step9: Preprocess all the data and save it
Step11: Check Point
Step13: Check the Version of TensorFlow and Access to GPU
Step16: Build the Neural Network
Step19: Process Decoder Input
Step22: Encoding
Step25: Decoding - Training
Step28: Decoding - Inference
Step31: Build the Decoding Layer
Step34: Build the Neural Network
Step35: Neural Network Training
Step37: Build the Graph
Step41: Batch and pad the source and target sequences
Step44: Train
Step46: Save Parameters
Step48: Checkpoint
Step51: Sentence to Sequence
Step53: Translate
|
15,475 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sklearn
import scipy.stats as stats
import scipy.optimize
import matplotlib.pyplot as plt
import seaborn as sns
import time
import numpy as np
import os
import pandas as pd
!pip install -U "pymc3>=3.8"  # quoted so the shell does not treat >= as a file redirect
import pymc3 as pm
print(pm.__version__)
import theano.tensor as tt
import theano
#!pip install arviz
import arviz as az
!mkdir ../figures
# https://github.com/probml/pyprobml/blob/master/scripts/schools8_pymc3.py
# Data of the Eight Schools Model
J = 8
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])
print(np.mean(y))
print(np.median(y))
names = []
for t in range(8):
names.append("{}".format(t))
# Plot raw data
fig, ax = plt.subplots()
y_pos = np.arange(8)
ax.errorbar(y, y_pos, xerr=sigma, fmt="o")
ax.set_yticks(y_pos)
ax.set_yticklabels(names)
ax.invert_yaxis() # labels read top-to-bottom
plt.title("8 schools")
plt.savefig("../figures/schools8_data.png")
plt.show()
# Centered model
with pm.Model() as Centered_eight:
mu_alpha = pm.Normal("mu_alpha", mu=0, sigma=5)
sigma_alpha = pm.HalfCauchy("sigma_alpha", beta=5)
alpha = pm.Normal("alpha", mu=mu_alpha, sigma=sigma_alpha, shape=J)
obs = pm.Normal("obs", mu=alpha, sigma=sigma, observed=y)
log_sigma_alpha = pm.Deterministic("log_sigma_alpha", tt.log(sigma_alpha))
np.random.seed(0)
with Centered_eight:
trace_centered = pm.sample(1000, chains=4, return_inferencedata=False)
pm.summary(trace_centered).round(2)
# PyMC3 gives multiple warnings about divergences
# Also, see r_hat ~ 1.01, ESS << nchains*1000, especially for sigma_alpha
# We can solve these problems below by using a non-centered parameterization.
# In practice, for this model, the results are very similar.
# Display the total number and percentage of divergent chains
diverging = trace_centered["diverging"]
print("Number of Divergent Chains: {}".format(diverging.nonzero()[0].size))
diverging_pct = diverging.nonzero()[0].size / len(trace_centered) * 100
print("Percentage of Divergent Chains: {:.1f}".format(diverging_pct))
dir(trace_centered)
trace_centered.varnames
with Centered_eight:
# fig, ax = plt.subplots()
az.plot_autocorr(trace_centered, var_names=["mu_alpha", "sigma_alpha"], combined=True)
plt.savefig("schools8_centered_acf_combined.png", dpi=300)
with Centered_eight:
# fig, ax = plt.subplots()
az.plot_autocorr(trace_centered, var_names=["mu_alpha", "sigma_alpha"])
plt.savefig("schools8_centered_acf.png", dpi=300)
with Centered_eight:
az.plot_forest(trace_centered, var_names="alpha", hdi_prob=0.95, combined=True)
plt.savefig("schools8_centered_forest_combined.png", dpi=300)
with Centered_eight:
az.plot_forest(trace_centered, var_names="alpha", hdi_prob=0.95, combined=False)
plt.savefig("schools8_centered_forest.png", dpi=300)
# Non-centered parameterization
with pm.Model() as NonCentered_eight:
mu_alpha = pm.Normal("mu_alpha", mu=0, sigma=5)
sigma_alpha = pm.HalfCauchy("sigma_alpha", beta=5)
alpha_offset = pm.Normal("alpha_offset", mu=0, sigma=1, shape=J)
alpha = pm.Deterministic("alpha", mu_alpha + sigma_alpha * alpha_offset)
# alpha = pm.Normal('alpha', mu=mu_alpha, sigma=sigma_alpha, shape=J)
obs = pm.Normal("obs", mu=alpha, sigma=sigma, observed=y)
log_sigma_alpha = pm.Deterministic("log_sigma_alpha", tt.log(sigma_alpha))
np.random.seed(0)
with NonCentered_eight:
trace_noncentered = pm.sample(1000, chains=4)
pm.summary(trace_noncentered).round(2)
# Samples look good: r_hat = 1, ESS ~= nchains*1000
with NonCentered_eight:
az.plot_autocorr(trace_noncentered, var_names=["mu_alpha", "sigma_alpha"], combined=True)
plt.savefig("schools8_noncentered_acf_combined.png", dpi=300)
with NonCentered_eight:
az.plot_forest(trace_noncentered, var_names="alpha", combined=True, hdi_prob=0.95)
plt.savefig("schools8_noncentered_forest_combined.png", dpi=300)
az.plot_forest(
[trace_centered, trace_noncentered],
model_names=["centered", "noncentered"],
var_names="alpha",
combined=True,
hdi_prob=0.95,
)
plt.axvline(np.mean(y), color="k", linestyle="--")
az.plot_forest(
[trace_centered, trace_noncentered],
model_names=["centered", "noncentered"],
var_names="alpha",
kind="ridgeplot",
combined=True,
hdi_prob=0.95,
);
# Plot the "funnel of hell"
# Based on
# https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/GLM_hierarchical_non_centered.ipynb
fig, axs = plt.subplots(ncols=2, sharex=True, sharey=True)
x = pd.Series(trace_centered["mu_alpha"], name="mu_alpha")
y = pd.Series(trace_centered["log_sigma_alpha"], name="log_sigma_alpha")
axs[0].plot(x, y, ".")
axs[0].set(title="Centered", xlabel="µ", ylabel="log(sigma)")
# axs[0].axhline(0.01)
x = pd.Series(trace_noncentered["mu_alpha"], name="mu")
y = pd.Series(trace_noncentered["log_sigma_alpha"], name="log_sigma_alpha")
axs[1].plot(x, y, ".")
axs[1].set(title="NonCentered", xlabel="µ", ylabel="log(sigma)")
# axs[1].axhline(0.01)
plt.savefig("schools8_funnel.png", dpi=300)
xlim = axs[0].get_xlim()
ylim = axs[0].get_ylim()
x = pd.Series(trace_centered["mu_alpha"], name="mu")
y = pd.Series(trace_centered["log_sigma_alpha"], name="log sigma_alpha")
sns.jointplot(x, y, xlim=xlim, ylim=ylim)
plt.suptitle("centered")
plt.savefig("schools8_centered_joint.png", dpi=300)
x = pd.Series(trace_noncentered["mu_alpha"], name="mu")
y = pd.Series(trace_noncentered["log_sigma_alpha"], name="log sigma_alpha")
sns.jointplot(x, y, xlim=xlim, ylim=ylim)
plt.suptitle("noncentered")
plt.savefig("schools8_noncentered_joint.png", dpi=300)
group = 0
fig, axs = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(10, 5))
x = pd.Series(trace_centered["alpha"][:, group], name=f"alpha {group}")
y = pd.Series(trace_centered["log_sigma_alpha"], name="log_sigma_alpha")
axs[0].plot(x, y, ".")
axs[0].set(title="Centered", xlabel=r"$\alpha_0$", ylabel=r"$\log(\sigma_\alpha)$")
x = pd.Series(trace_noncentered["alpha"][:, group], name=f"alpha {group}")
y = pd.Series(trace_noncentered["log_sigma_alpha"], name="log_sigma_alpha")
axs[1].plot(x, y, ".")
axs[1].set(title="NonCentered", xlabel=r"$\alpha_0$", ylabel=r"$\log(\sigma_\alpha)$")
xlim = axs[0].get_xlim()
ylim = axs[0].get_ylim()
plt.savefig("schools8_funnel_group0.png", dpi=300)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Centered model
Step3: Non-centered
Step4: Funnel of hell
|
15,476 | <ASSISTANT_TASK:>
Python Code:
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# tempo switch from one to another, you have to select Kernel->Restart
%matplotlib notebook
from modsim import *
condition = Condition(g = 9.8,
m = 75,
area = 1,
rho = 1.2,
v_term = 60,
duration = 30,
length0 = 100,
angle = (270 - 45),
k = 20)
def make_system(condition):
Makes a System object for the given conditions.
condition: Condition with height, g, m, diameter,
rho, v_term, and duration
returns: System with init, g, m, rho, C_d, area, and ts
unpack(condition)
theta = np.deg2rad(angle)
x, y = pol2cart(theta, length0)
P = Vector(x, y)
V = Vector(0, 0)
init = State(x=P.x, y=P.y, vx=V.x, vy=V.y)
C_d = 2 * m * g / (rho * area * v_term**2)
ts = linspace(0, duration, 501)
return System(init=init, g=g, m=m, rho=rho,
C_d=C_d, area=area, length0=length0,
k=k, ts=ts)
system = make_system(condition)
system
system.init
def slope_func(state, t, system):
Computes derivatives of the state variables.
state: State (x, y, x velocity, y velocity)
t: time
system: System object with length0, m, k
returns: sequence (vx, vy, ax, ay)
x, y, vx, vy = state
unpack(system)
ax = x*(g*y - vx**2 - vy**2)/(x**2 + y**2)
ay = -(g*x**2 + y*(vx**2 + vy**2))/(x**2 + y**2)
return vx, vy, ax, ay
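# (Added note) The ax/ay expressions above are the rigid-pendulum equations of motion
# with the rod constraint x**2 + y**2 = length0**2 already eliminated; per the text
# they come from a symbolic (SymPy) derivation, so the algebraic form is taken as
# given here rather than re-derived numerically.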
slope_func(system.init, 0, system)
%time run_odeint(system, slope_func)
xs = system.results.x
ys = system.results.y
newfig()
plot(xs, label='x')
plot(ys, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
vxs = system.results.vx
vys = system.results.vy
newfig()
plot(vxs, label='vx')
plot(vys, label='vy')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
newfig()
plot(xs, ys, label='trajectory')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
newfig()
decorate(xlabel='x position (m)',
ylabel='y position (m)',
xlim=[-100, 100],
ylim=[-200, -50],
legend=False)
for x, y in zip(xs, ys):
plot(x, y, 'bo', update=True)
sleep(0.01)
def animate2d(xs, ys, speedup=1):
    """Animate the results of a projectile simulation.
    xs: x position as a function of time
    ys: y position as a function of time
    speedup: how much to divide `dt` by
    """
# get the time intervals between elements
ts = xs.index
dts = np.diff(ts)
dts = np.append(dts, 0)
# decorate the plot
newfig()
decorate(xlabel='x position (m)',
ylabel='y position (m)',
xlim=[xs.min(), xs.max()],
ylim=[ys.min(), ys.max()],
legend=False)
# loop through the values
for x, y, dt in zip(xs, ys, dts):
plot(x, y, 'bo', update=True)
sleep(dt / speedup)
animate2d(system.results.x, system.results.y)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pendulum
Step3: Now here's a version of make_system that takes a Condition object as a parameter.
Step4: Let's make a System
Step6: To write the slope function, we can get the expressions for ax and ay directly from SymPy and plug them in.
Step7: As always, let's test the slope function with the initial conditions.
Step8: And then run the simulation.
Step9: Visualizing the results
Step10: The simplest way to visualize the results is to plot x and y as functions of time.
Step11: We can plot the velocities the same way.
Step12: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.
Step13: We can also animate the trajectory. If there's an error in the simulation, we can sometimes spot it by looking at animations.
Step15: Here's a function that encapsulates that code and runs the animation in (approximately) real time.
|
15,477 | <ASSISTANT_TASK:>
Python Code:
from pyCHX.chx_packages import *
%matplotlib notebook
plt.rcParams.update({'figure.max_open_warning': 0})
plt.rcParams.update({ 'image.origin': 'lower' })
plt.rcParams.update({ 'image.interpolation': 'none' })
import pickle as cpk
from pyCHX.chx_xpcs_xsvs_jupyter_V1 import *
import itertools
#from pyCHX.XPCS_SAXS import get_QrQw_From_RoiMask
%run /home/yuzhang/pyCHX_link/pyCHX/chx_generic_functions.py
#%matplotlib notebook
%matplotlib inline
scat_geometry = 'saxs' #suport 'saxs', 'gi_saxs', 'ang_saxs' (for anisotropics saxs or flow-xpcs)
#scat_geometry = 'ang_saxs'
#scat_geometry = 'gi_waxs'
#scat_geometry = 'gi_saxs'
analysis_type_auto = True #if True, will take "analysis type" option from data acquisition func series
qphi_analysis = False #if True, will do q-phi (anisotropic analysis for transmission saxs)
isotropic_Q_mask = 'normal' #'wide' # 'normal' # 'wide' ## select which Q-mask to use for rings: 'normal' or 'wide'
phi_Q_mask = 'phi_4x_20deg' ## select which Q-mask to use for phi analysis
q_mask_name = ''
force_compress = False #True #force to compress data
bin_frame = False #generally make bin_frame as False
para_compress = True #parallel compress
run_fit_form = False #run fit form factor
run_waterfall = False #True #run waterfall analysis
run_profile_plot = False #run prolfile plot for gi-saxs
run_t_ROI_Inten = True #run ROI intensity as a function of time
run_get_mass_center = False # Analysis for mass center of reflective beam center
run_invariant_analysis = False
run_one_time = True #run one-time
cal_g2_error = False #True #calculate g2 signal to noise
#run_fit_g2 = True #run fit one-time, the default function is "stretched exponential"
fit_g2_func = 'stretched'
run_two_time = True #run two-time
run_four_time = False #True #True #False #run four-time
run_xsvs= False #False #run visibility analysis
att_pdf_report = True #attach the pdf report to CHX olog
qth_interest = 1 #the single qth of interest
use_sqnorm = True #if True, use sq to normalize intensity
use_SG = True # False #if True, use the Sawitzky-Golay filter for <I(pix)>
use_imgsum_norm= True #if True use imgsum to normalize intensity for one-time calculatoin
pdf_version='_%s'%get_today_date() #for pdf report name
run_dose = True #True # True #False #run dose_depend analysis
if scat_geometry == 'gi_saxs':run_xsvs= False;use_sqnorm=False
if scat_geometry == 'gi_waxs':use_sqnorm = False
if scat_geometry != 'saxs':qphi_analysis = False;scat_geometry_ = scat_geometry
else:scat_geometry_ = ['','ang_'][qphi_analysis]+ scat_geometry
if scat_geometry != 'gi_saxs':run_profile_plot = False
scat_geometry
taus=None;g2=None;tausb=None;g2b=None;g12b=None;taus4=None;g4=None;times_xsv=None;contrast_factorL=None; lag_steps = None
CYCLE= '2019_1' #change clycle here
path = '/XF11ID/analysis/%s/masks/'%CYCLE
username = getpass.getuser()
username = 'commisionning'
username = 'petrash'
data_dir0 = create_user_folder(CYCLE, username)
print( data_dir0 )
uid = 'd099ce48' #(scan num: 3567 (Measurement: 500k, 9kHz 5k CoralPor
uid = '0587b05b' #(scan num: 3570 (Measurement: 4M, 100Hz, 200 testing data processing CoralPor
uid = 'ad658cdf' #(scan num: 3571 (Measurement: 4M, 100Hz, 200 testing data processing CoralPor
uid = '9f849990' #(scan num: 3573 (Measurement: 500k, 9 kHz, 2000 testing data processing CoralPor
uid = '25171c35-ce50-450b-85a0-ba9e116651e3'
uid = uid[:8]
print('The current uid for analysis is: %s...'%uid)
#get_last_uids( -1)
sud = get_sid_filenames(db[uid])
for pa in sud[2]:
if 'master.h5' in pa:
data_fullpath = pa
print ('scan_id, full-uid, data path are: %s--%s--%s'%(sud[0], sud[1], data_fullpath ))
#start_time, stop_time = '2017-2-24 12:23:00', '2017-2-24 13:42:00'
#sids, uids, fuids = find_uids(start_time, stop_time)
data_dir = os.path.join(data_dir0, '%s/'%(sud[1]))
os.makedirs(data_dir, exist_ok=True)
print('Results from this analysis will be stashed in the directory %s' % data_dir)
uidstr = 'uid=%s'%uid
md = get_meta_data( uid )
md_blue = md.copy()
#md_blue
#md_blue['detectors'][0]
#if md_blue['OAV_mode'] != 'none':
# cx , cy = md_blue[md_blue['detectors'][0]+'_beam_center_x'], md_blue[md_blue['detectors'][0]+'_beam_center_x']
#else:
# cx , cy = md_blue['beam_center_x'], md_blue['beam_center_y']
#print(cx,cy)
detectors = sorted(get_detectors(db[uid]))
print('The detectors are:%s'%detectors)
if len(detectors) >1:
md['detector'] = detectors[1]
print( md['detector'])
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
reverse= True
rot90= False
elif md['detector'] =='eiger500K_single_image':
reverse= True
rot90=True
elif md['detector'] =='eiger1m_single_image':
reverse= True
rot90=False
print('Image reverse: %s\nImage rotate 90: %s'%(reverse, rot90))
try:
cx , cy = md_blue['beam_center_x'], md_blue['beam_center_y']
print(cx,cy)
except:
print('Will find cx,cy later.')
if analysis_type_auto:#if True, will take "analysis type" option from data acquisition func series
try:
qphi_analysis_ = md['analysis'] #if True, will do q-phi (anisotropic analysis for transmission saxs)
print(md['analysis'])
if qphi_analysis_ == 'iso':
qphi_analysis = False
elif qphi_analysis_ == '':
qphi_analysis = False
else:
qphi_analysis = True
except:
print('There is no analysis in metadata.')
print('Will %s qphis analysis.'%['NOT DO','DO'][qphi_analysis])
if scat_geometry != 'saxs':qphi_analysis = False;scat_geometry_ = scat_geometry
else:scat_geometry_ = ['','ang_'][qphi_analysis]+ scat_geometry
if scat_geometry != 'gi_saxs':run_profile_plot = False
print(scat_geometry_)
#isotropic_Q_mask
scat_geometry
##For SAXS
roi_path = '/XF11ID/analysis/2019_1/masks/'
roi_date = 'Feb6'
if scat_geometry =='saxs':
if qphi_analysis == False:
if isotropic_Q_mask == 'normal':
#print('Here')
q_mask_name='rings'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image': #for 4M
fp = roi_path + 'roi_mask_%s_4M_norm.pkl'%roi_date
elif md['detector'] =='eiger500K_single_image': #for 500K
fp = roi_path + 'roi_mask_%s_500K_norm.pkl'%roi_date
elif isotropic_Q_mask == 'wide':
q_mask_name='wide_rings'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image': #for 4M
fp = roi_path + 'roi_mask_%s_4M_wide.pkl'%roi_date
elif md['detector'] =='eiger500K_single_image': #for 500K
fp = roi_path + 'roi_mask_%s_500K_wide.pkl'%roi_date
elif qphi_analysis:
if phi_Q_mask =='phi_4x_20deg':
q_mask_name='phi_4x_20deg'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image': #for 4M
fp = roi_path + 'roi_mask_%s_4M_phi_4x_20deg.pkl'%roi_date
elif md['detector'] =='eiger500K_single_image': #for 500K
fp = roi_path + 'roi_mask_%s_500K_phi_4x_20deg.pkl'%roi_date
#fp = 'XXXXXXX.pkl'
roi_mask,qval_dict = cpk.load( open(fp, 'rb' ) ) #for load the saved roi data
#print(fp)
## Gi_SAXS
elif scat_geometry =='gi_saxs':
# dynamics mask
fp = '/XF11ID/analysis/2018_2/masks/uid=460a2a3a_roi_mask.pkl'
roi_mask,qval_dict = cpk.load( open(fp, 'rb' ) ) #for load the saved roi data
print('The dynamic mask is: %s.'%fp)
# static mask
fp = '/XF11ID/analysis/2018_2/masks/uid=460a2a3a_roi_masks.pkl'
roi_masks,qval_dicts = cpk.load( open(fp, 'rb' ) ) #for load the saved roi data
print('The static mask is: %s.'%fp)
# q-map
fp = '/XF11ID/analysis/2018_2/masks/uid=460a2a3a_qmap.pkl'
#print(fp)
qr_map, qz_map, ticks, Qrs, Qzs, Qr, Qz, inc_x0,refl_x0, refl_y0 = cpk.load( open(fp, 'rb' ) )
print('The qmap is: %s.'%fp)
## WAXS
elif scat_geometry =='gi_waxs':
fp = '/XF11ID/analysis/2018_2/masks/uid=db5149a1_roi_mask.pkl'
roi_mask,qval_dict = cpk.load( open(fp, 'rb' ) ) #for load the saved roi data
print(roi_mask.shape)
#qval_dict
#roi_mask = shift_mask(roi_mask, 10,30) #if shift mask to get new mask
show_img(roi_mask, aspect=1.0, image_name = fp)#, center=center[::-1])
#%run /home/yuzhang/pyCHX_link/pyCHX/chx_generic_functions.py
imgs = load_data( uid, md['detector'], reverse= reverse, rot90=rot90 )
md.update( imgs.md );Nimg = len(imgs);
#md['beam_center_x'], md['beam_center_y'] = cx, cy
#if 'number of images' not in list(md.keys()):
md['number of images'] = Nimg
pixel_mask = 1- np.int_( np.array( imgs.md['pixel_mask'], dtype= bool) )
print( 'The data are: %s' %imgs )
#md['acquire period' ] = md['cam_acquire_period']
#md['exposure time'] = md['cam_acquire_time']
mdn = md.copy()
if md['detector'] =='eiger1m_single_image':
Chip_Mask=np.load( '/XF11ID/analysis/2017_1/masks/Eiger1M_Chip_Mask.npy')
elif md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
Chip_Mask= np.array(np.load( '/XF11ID/analysis/2017_1/masks/Eiger4M_chip_mask.npy'), dtype=bool)
BadPix = np.load('/XF11ID/analysis/2018_1/BadPix_4M.npy' )
Chip_Mask.ravel()[BadPix] = 0
elif md['detector'] =='eiger500K_single_image':
#print('here')
Chip_Mask= np.load( '/XF11ID/analysis/2017_1/masks/Eiger500K_Chip_Mask.npy') #to be defined the chip mask
Chip_Mask = np.rot90(Chip_Mask)
pixel_mask = np.rot90( 1- np.int_( np.array( imgs.md['pixel_mask'], dtype= bool)) )
else:
Chip_Mask = 1
#show_img(Chip_Mask)
print(Chip_Mask.shape, pixel_mask.shape)
use_local_disk = True
import shutil,glob
save_oavs = False
if len(detectors)==2:
if '_image' in md['detector']:
pref = md['detector'][:-5]
else:
pref=md['detector']
for k in [ 'beam_center_x', 'beam_center_y','cam_acquire_time','cam_acquire_period','cam_num_images',
'wavelength', 'det_distance', 'photon_energy']:
md[k] = md[ pref + '%s'%k]
if 'OAV_image' in detectors:
try:
#tifs = list( db[uid].data( 'OAV_image') )[0]
#print(len(tifs))
save_oavs_tifs( uid, data_dir )
save_oavs = True
## show all images
#fig, ax = show_tif_series( tifs, Nx = None, vmin=1.0, vmax=20, logs=False,
# cmap= cm.gray, figsize=[4,6] )
##show one image
#show_img(tifs[0],cmap= cm.gray,)
except:
pass
print_dict( md, ['suid', 'number of images', 'uid', 'scan_id', 'start_time', 'stop_time', 'sample', 'Measurement',
'acquire period', 'exposure time',
'det_distance', 'beam_center_x', 'beam_center_y', ] )
if scat_geometry =='gi_saxs':
inc_x0 = md['beam_center_x']
inc_y0 = imgs[0].shape[0] - md['beam_center_y']
refl_x0 = md['beam_center_x']
refl_y0 = 1000 #imgs[0].shape[0] - 1758
print( "inc_x0, inc_y0, ref_x0,ref_y0 are: %s %s %s %s."%(inc_x0, inc_y0, refl_x0, refl_y0) )
else:
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image' or md['detector']=='eiger1m_single_image':
inc_x0 = imgs[0].shape[0] - md['beam_center_y']
inc_y0= md['beam_center_x']
elif md['detector'] =='eiger500K_single_image':
inc_y0 = imgs[0].shape[1] - md['beam_center_y']
inc_x0 = imgs[0].shape[0] - md['beam_center_x']
print(inc_x0, inc_y0)
###for this particular uid, manually give x0/y0
#inc_x0 = 1041
#inc_y0 = 1085
dpix, lambda_, Ldet, exposuretime, timeperframe, center = check_lost_metadata(
    md, Nimg, inc_x0 = inc_x0, inc_y0= inc_y0, pixelsize = 7.5*10**(-5) )
if scat_geometry =='gi_saxs':center=center[::-1]
setup_pargs=dict(uid=uidstr, dpix= dpix, Ldet=Ldet, lambda_= lambda_, exposuretime=exposuretime,
timeperframe=timeperframe, center=center, path= data_dir)
print_dict( setup_pargs )
setup_pargs
if scat_geometry == 'gi_saxs':
mask_path = '/XF11ID/analysis/2018_2/masks/'
mask_name = 'July13_2018_4M.npy'
elif scat_geometry == 'saxs':
mask_path = '/XF11ID/analysis/2019_1/masks/'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
mask_name = 'Feb6_2019_4M_SAXS.npy'
elif md['detector'] =='eiger500K_single_image':
mask_name = 'Feb6_2019_500K_SAXS.npy'
elif scat_geometry == 'gi_waxs':
mask_path = '/XF11ID/analysis/2018_2/masks/'
mask_name = 'July20_2018_1M_WAXS.npy'
mask = load_mask(mask_path, mask_name, plot_ = False, image_name = uidstr + '_mask', reverse= reverse, rot90=rot90 )
mask = mask * pixel_mask * Chip_Mask
show_img(mask,image_name = uidstr + '_mask', save=True, path=data_dir, aspect=1, center=center[::-1])
mask_load=mask.copy()
imgsa = apply_mask( imgs, mask )
img_choice_N = 3
img_samp_index = random.sample( range(len(imgs)), img_choice_N)
avg_img = get_avg_img( imgsa, img_samp_index, plot_ = False, uid =uidstr)
if avg_img.max() == 0:
print('There are no photons recorded for this uid: %s'%uid)
print('The data analysis should be terminated! Please try another uid.')
#show_img( imgsa[1000], vmin=.1, vmax= 1e1, logs=True, aspect=1,
# image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap = cmap_albula )
print(center[::-1])
show_img( imgsa[ 5], vmin = -1, vmax = 20, logs=False, aspect=1, #save_format='tif',
image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap=cmap_albula,center=center[::-1])
# select subregion, hard coded center beam location
#show_img( imgsa[180+40*3/0.05][110:110+840*2, 370:370+840*2], vmin = 0.01, vmax = 20, logs=False, aspect=1, #save_format='tif',
# image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap=cmap_albula,center=[845,839])
compress=True
photon_occ = len( np.where(avg_img)[0] ) / ( imgsa[0].size)
#compress = photon_occ < .4 #if the photon ocupation < 0.5, do compress
print ("The non-zeros photon occupation is %s."%( photon_occ))
print("Will " + 'Always ' + ['NOT', 'DO'][compress] + " apply compress process.")
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
good_start = 5 #make the good_start at least 0
elif md['detector'] =='eiger500K_single_image':
good_start = 100 #5 #make the good_start at least 0
elif md['detector'] =='eiger1m_single_image' or md['detector'] == 'image':
good_start = 5
bin_frame = False # True #generally make bin_frame as False
if bin_frame:
bin_frame_number=4
acquisition_period = md['acquire period']
timeperframe = acquisition_period * bin_frame_number
else:
bin_frame_number =1
force_compress = False
#force_compress = True
import time
t0= time.time()
if not use_local_disk:
cmp_path = '/nsls2/xf11id1/analysis/Compressed_Data'
else:
cmp_path = '/tmp_data/compressed'
cmp_path = '/nsls2/xf11id1/analysis/Compressed_Data'
if bin_frame_number==1:
cmp_file = '/uid_%s.cmp'%md['uid']
else:
cmp_file = '/uid_%s_bined--%s.cmp'%(md['uid'],bin_frame_number)
filename = cmp_path + cmp_file
mask2, avg_img, imgsum, bad_frame_list = compress_eigerdata(imgs, mask, md, filename,
force_compress= force_compress, para_compress= para_compress, bad_pixel_threshold = 1e14,
reverse=reverse, rot90=rot90,
bins=bin_frame_number, num_sub= 100, num_max_para_process= 500, with_pickle=True,
direct_load_data =use_local_disk, data_path = data_fullpath, )
min_inten = 10
good_start = max(good_start, np.where( np.array(imgsum) > min_inten )[0][0] )
print ('The good_start frame number is: %s '%good_start)
FD = Multifile(filename, good_start, len(imgs)//bin_frame_number )
#FD = Multifile(filename, good_start, 100)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
plot1D( y = imgsum[ np.array( [i for i in np.arange(good_start, len(imgsum)) if i not in bad_frame_list])],
title =uidstr + '_imgsum', xlabel='Frame', ylabel='Total_Intensity', legend='imgsum' )
Nimg = Nimg/bin_frame_number
run_time(t0)
mask = mask * pixel_mask * Chip_Mask
mask_copy = mask.copy()
mask_copy2 = mask.copy()
#%run ~/pyCHX_link/pyCHX/chx_generic_functions.py
try:
if md['experiment']=='printing':
#p = md['printing'] #if have this printing key, will do error function fitting to find t_print0
find_tp0 = True
t_print0 = ps( y = imgsum[:400] ) * timeperframe
print( 'The start time of print: %s.' %(t_print0 ) )
else:
find_tp0 = False
print('md[experiment] is not "printing" -> not going to look for t_0')
t_print0 = None
except:
find_tp0 = False
print('md[experiment] is not "printing" -> not going to look for t_0')
t_print0 = None
show_img( avg_img, vmin=1e-3, vmax= 1e1, logs=True, aspect=1, #save_format='tif',
image_name= uidstr + '_img_avg', save=True,
path=data_dir, center=center[::-1], cmap = cmap_albula )
good_end= None # 2000
if good_end is not None:
FD = Multifile(filename, good_start, min( len(imgs)//bin_frame_number, good_end) )
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
re_define_good_start =False
if re_define_good_start:
good_start = 180
#good_end = 19700
good_end = len(imgs)
FD = Multifile(filename, good_start, good_end)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( FD.beg, FD.end)
bad_frame_list = get_bad_frame_list( imgsum, fit='both', plot=True,polyfit_order = 30,
scale= 3.5, good_start = good_start, good_end=good_end, uid= uidstr, path=data_dir)
print( 'The bad frame list length is: %s'%len(bad_frame_list) )
imgsum_y = imgsum[ np.array( [i for i in np.arange( len(imgsum)) if i not in bad_frame_list])]
imgsum_x = np.arange( len( imgsum_y))
save_lists( [imgsum_x, imgsum_y], label=['Frame', 'Total_Intensity'],
filename=uidstr + '_img_sum_t', path= data_dir )
plot1D( y = imgsum_y, title = uidstr + '_img_sum_t', xlabel='Frame', c='b',
ylabel='Total_Intensity', legend='imgsum', save=True, path=data_dir)
#%run /home/yuzhang/pyCHX_link/pyCHX/chx_packages.py
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
pass
elif md['detector'] =='eiger500K_single_image':
#if md['cam_acquire_period'] <= 0.00015: #will check this logic
if imgs[0].dtype == 'uint16':
        print('Create dynamic mask for 500K due to 9K data acquisition!!!')
bdp = find_bad_pixels_FD( bad_frame_list, FD, img_shape = avg_img.shape, threshold=20 )
mask = mask_copy2.copy()
mask *=bdp
mask_copy = mask.copy()
show_img( mask, image_name='New Mask_uid=%s'%uid )
setup_pargs
#%run ~/pyCHX_link/pyCHX/chx_generic_functions.py
%run ~/pyCHX_link/pyCHX/XPCS_SAXS.py
if scat_geometry =='saxs':
## Get circular average| * Do plot and save q~iq
mask = mask_copy.copy()
hmask = create_hot_pixel_mask( avg_img, threshold = 1e8, center=center, center_radius= 10)
qp_saxs, iq_saxs, q_saxs = get_circular_average( avg_img * Chip_Mask , mask * hmask, pargs=setup_pargs )
plot_circular_average( qp_saxs, iq_saxs, q_saxs, pargs=setup_pargs, show_pixel=True,
xlim=[qp_saxs.min(), qp_saxs.max()*1.0], ylim = [iq_saxs.min(), iq_saxs.max()*2] )
mask =np.array( mask * hmask, dtype=bool)
if scat_geometry =='saxs':
if run_fit_form:
form_res = fit_form_factor( q_saxs,iq_saxs, guess_values={'radius': 2500, 'sigma':0.05,
'delta_rho':1E-10 }, fit_range=[0.0001, 0.015], fit_variables={'radius': T, 'sigma':T,
'delta_rho':T}, res_pargs=setup_pargs, xlim=[0.0001, 0.015])
qr = np.array( [qval_dict[k][0] for k in sorted( qval_dict.keys())] )
if qphi_analysis == False:
try:
qr_cal, qr_wid = get_QrQw_From_RoiMask( roi_mask, setup_pargs )
print(len(qr))
if (qr_cal - qr).sum() >=1e-3:
print( 'The loaded ROI mask might not be applicable to this UID: %s.'%uid)
print('Please check the loaded roi mask file.')
except:
print('Something is wrong with the roi-mask. Please check the loaded roi mask file.')
show_ROI_on_image( avg_img*roi_mask, roi_mask, center, label_on = False, rwidth = 840, alpha=.9,
save=True, path=data_dir, uid=uidstr, vmin= 1e-3,
vmax= 1e-1, #np.max(avg_img),
aspect=1,
show_roi_edge=True,
show_ang_cor = True)
plot_qIq_with_ROI( q_saxs, iq_saxs, np.unique(qr), logs=True, uid=uidstr,
xlim=[q_saxs.min(), q_saxs.max()*1.02],#[0.0001,0.08],
ylim = [iq_saxs.min(), iq_saxs.max()*1.02], save=True, path=data_dir)
roi_mask = roi_mask * mask
if scat_geometry =='saxs':
Nimg = FD.end - FD.beg
time_edge = create_time_slice( Nimg, slice_num= 10, slice_width= 1, edges = None )
time_edge = np.array( time_edge ) + good_start
#print( time_edge )
qpt, iqst, qt = get_t_iqc( FD, time_edge, mask*Chip_Mask, pargs=setup_pargs, nx=1500, show_progress= False )
plot_t_iqc( qt, iqst, time_edge, pargs=setup_pargs, xlim=[qt.min(), qt.max()],
ylim = [iqst.min(), iqst.max()], save=True )
if run_invariant_analysis:
if scat_geometry =='saxs':
invariant = get_iq_invariant( qt, iqst )
time_stamp = time_edge[:,0] * timeperframe
if scat_geometry =='saxs':
plot_q2_iq( qt, iqst, time_stamp,pargs=setup_pargs,ylim=[ -0.001, 0.01] ,
xlim=[0.007,0.2],legend_size= 6 )
if scat_geometry =='saxs':
plot_time_iq_invariant( time_stamp, invariant, pargs=setup_pargs, )
if False:
iq_int = np.zeros( len(iqst) )
fig, ax = plt.subplots()
q = qt
for i in range(iqst.shape[0]):
yi = iqst[i] * q**2
iq_int[i] = yi.sum()
time_labeli = 'time_%s s'%( round( time_edge[i][0] * timeperframe, 3) )
plot1D( x = q, y = yi, legend= time_labeli, xlabel='Q (A-1)', ylabel='I(q)*Q^2', title='I(q)*Q^2 ~ time',
m=markers[i], c = colors[i], ax=ax, ylim=[ -0.001, 0.01] , xlim=[0.007,0.2],
legend_size=4)
#print( iq_int )
if scat_geometry =='gi_saxs':
plot_qzr_map( qr_map, qz_map, inc_x0, ticks = ticks, data= avg_img, uid= uidstr, path = data_dir )
if scat_geometry =='gi_saxs':
#roi_masks, qval_dicts = get_gisaxs_roi( Qrs, Qzs, qr_map, qz_map, mask= mask )
show_qzr_roi( avg_img, roi_masks, inc_x0, ticks[:4], alpha=0.5, save=True, path=data_dir, uid=uidstr )
if scat_geometry =='gi_saxs':
Nimg = FD.end - FD.beg
time_edge = create_time_slice( N= Nimg, slice_num= 3, slice_width= 2, edges = None )
time_edge = np.array( time_edge ) + good_start
print( time_edge )
qrt_pds = get_t_qrc( FD, time_edge, Qrs, Qzs, qr_map, qz_map, mask=mask, path=data_dir, uid = uidstr )
plot_qrt_pds( qrt_pds, time_edge, qz_index = 0, uid = uidstr, path = data_dir )
if scat_geometry =='gi_saxs':
if run_profile_plot:
xcorners= [ 1100, 1250, 1250, 1100 ]
ycorners= [ 850, 850, 950, 950 ]
waterfall_roi_size = [ xcorners[1] - xcorners[0], ycorners[2] - ycorners[1] ]
waterfall_roi = create_rectangle_mask( avg_img, xcorners, ycorners )
#show_img( waterfall_roi * avg_img, aspect=1,vmin=.001, vmax=1, logs=True, )
wat = cal_waterfallc( FD, waterfall_roi, qindex= 1, bin_waterfall=True,
waterfall_roi_size = waterfall_roi_size,save =True, path=data_dir, uid=uidstr)
if scat_geometry =='gi_saxs':
if run_profile_plot:
plot_waterfallc( wat, qindex=1, aspect=None, vmin=1, vmax= np.max( wat), uid=uidstr, save =True,
path=data_dir, beg= FD.beg)
if scat_geometry =='gi_saxs':
show_qzr_roi( avg_img, roi_mask, inc_x0, ticks[:4], alpha=0.5, save=True, path=data_dir, uid=uidstr )
## Get 1D Curve (Q||-intensity¶)
qr_1d_pds = cal_1d_qr( avg_img, Qr, Qz, qr_map, qz_map, inc_x0= None, mask=mask, setup_pargs=setup_pargs )
plot_qr_1d_with_ROI( qr_1d_pds, qr_center=np.unique( np.array(list( qval_dict.values() ) )[:,0] ),
loglog=True, save=True, uid=uidstr, path = data_dir)
if scat_geometry =='gi_waxs':
#badpixel = np.where( avg_img[:600,:] >=300 )
#roi_mask[badpixel] = 0
show_ROI_on_image( avg_img, roi_mask, label_on = True, alpha=.5,
save=True, path=data_dir, uid=uidstr, vmin=0.1, vmax=5)
qind, pixelist = roi.extract_label_indices(roi_mask)
noqs = len(np.unique(qind))
print(noqs)
nopr = np.bincount(qind, minlength=(noqs+1))[1:]
nopr
roi_inten = check_ROI_intensity( avg_img, roi_mask, ring_number= 2, uid =uidstr ) #roi starting from 1
qth_interest = 2 #the second ring. #qth_interest starting from 1
if scat_geometry =='saxs' or scat_geometry =='gi_waxs':
if run_waterfall:
wat = cal_waterfallc( FD, roi_mask, qindex= qth_interest, save =True, path=data_dir, uid=uidstr)
plot_waterfallc( wat, qth_interest, aspect= None, vmin=1e-1, vmax= wat.max(), uid=uidstr, save =True,
path=data_dir, beg= FD.beg, cmap = cmap_vge )
q_mask_name
ring_avg = None
if run_t_ROI_Inten:
times_roi, mean_int_sets = cal_each_ring_mean_intensityc(FD, roi_mask, timeperframe = None, multi_cor=True )
plot_each_ring_mean_intensityc( times_roi, mean_int_sets, uid = uidstr, save=True, path=data_dir )
roi_avg = np.average( mean_int_sets, axis=0)
if run_get_mass_center:
cx, cy = get_mass_center_one_roi(FD, roi_mask, roi_ind=25)
if run_get_mass_center:
fig,ax=plt.subplots(2)
plot1D( cx, m='o', c='b',ax=ax[0], legend='mass center-refl_X',
ylim=[940, 960], ylabel='posX (pixel)')
plot1D( cy, m='s', c='r',ax=ax[1], legend='mass center-refl_Y',
ylim=[1540, 1544], xlabel='frames',ylabel='posY (pixel)')
define_good_series = False
#define_good_series = True
if define_good_series:
good_start = 200
FD = Multifile(filename, beg = good_start, end = 600) #end=1000)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
if use_sqnorm:#for transmision SAXS
norm = get_pixelist_interp_iq( qp_saxs, iq_saxs, roi_mask, center)
print('Using circular average in the normalization of G2 for SAXS scattering.')
elif use_SG:#for Gi-SAXS or WAXS
avg_imgf = sgolay2d( avg_img, window_size= 11, order= 5) * mask
norm=np.ravel(avg_imgf)[pixelist]
print('Using smoothed image by SavitzkyGolay filter in the normalization of G2.')
else:
norm= None
print('Using simple (average) normalization of G2.')
if use_imgsum_norm:
imgsum_ = imgsum
print('Using frame total intensity for intensity normalization in g2 calculation.')
else:
imgsum_ = None
import time
if run_one_time:
t0 = time.time()
if cal_g2_error:
g2,lag_steps,g2_err = cal_g2p(FD,roi_mask,bad_frame_list,good_start, num_buf = 8,
num_lev= None,imgsum= imgsum_, norm=norm, cal_error= True )
else:
g2,lag_steps = cal_g2p(FD,roi_mask,bad_frame_list,good_start, num_buf = 8,
num_lev= None,imgsum= imgsum_, norm=norm, cal_error= False )
run_time(t0)
#g2_err.shape, g2.shape
lag_steps = lag_steps[:g2.shape[0]]
g2.shape[1]
if run_one_time:
taus = lag_steps * timeperframe
try:
g2_pds = save_g2_general( g2, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
qz = np.array( list( qval_dict.values() ) )[:g2.shape[1],1],
uid=uid_+'_g2.csv', path= data_dir, return_res=True )
except:
g2_pds = save_g2_general( g2, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
uid=uid_+'_'+q_mask_name+'_g2.csv', path= data_dir, return_res=True )
if cal_g2_error:
try:
g2_err_pds = save_g2_general( g2_err, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
qz = np.array( list( qval_dict.values() ) )[:g2.shape[1],1],
uid=uid_+'_g2_err.csv', path= data_dir, return_res=True )
except:
g2_err_pds = save_g2_general( g2_err, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
uid=uid_+'_'+q_mask_name+'_g2_err.csv', path= data_dir, return_res=True )
#g2.shape
if run_one_time:
g2_fit_result, taus_fit, g2_fit = get_g2_fit_general( g2, taus,
function = fit_g2_func, vlim=[0.95, 1.05], fit_range= None,
fit_variables={'baseline':False, 'beta': True, 'alpha':True,'relaxation_rate':True,},
guess_values={'baseline':1.0,'beta': 0.03,'alpha':1.0,'relaxation_rate':0.0005},
guess_limits = dict( baseline =[.9, 1.3], alpha=[0, 2],
beta = [0, 1], relaxation_rate= [1e-7, 1000]) ,)
g2_fit_paras = save_g2_fit_para_tocsv(g2_fit_result, filename= uid_ +'_'+q_mask_name +'_g2_fit_paras.csv', path=data_dir )
scat_geometry_
if run_one_time:
if cal_g2_error:
g2_fit_err = np.zeros_like(g2_fit)
plot_g2_general( g2_dict={1:g2, 2:g2_fit}, taus_dict={1:taus, 2:taus_fit},
vlim=[0.95, 1.05], g2_err_dict= {1:g2_err, 2: g2_fit_err},
qval_dict = dict(itertools.islice(qval_dict.items(),g2.shape[1])), fit_res= g2_fit_result, geometry= scat_geometry_,filename= uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_fit')
else:
plot_g2_general( g2_dict={1:g2, 2:g2_fit}, taus_dict={1:taus, 2:taus_fit}, vlim=[0.95, 1.05],
qval_dict = dict(itertools.islice(qval_dict.items(),g2.shape[1])), fit_res= g2_fit_result, geometry= scat_geometry_,filename= uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_fit')
if run_one_time:
if True:
fs, fe = 0, 8
#fs,fe=0, 6
qval_dict_ = {k:qval_dict[k] for k in list(qval_dict.keys())[fs:fe] }
D0, qrate_fit_res = get_q_rate_fit_general( qval_dict_, g2_fit_paras['relaxation_rate'][fs:fe],
geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict_, g2_fit_paras['relaxation_rate'][fs:fe], qrate_fit_res,
geometry= scat_geometry_,uid=uid_ , path= data_dir )
else:
D0, qrate_fit_res = get_q_rate_fit_general( qval_dict, g2_fit_paras['relaxation_rate'],
fit_range=[0, 26], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict, g2_fit_paras['relaxation_rate'], qrate_fit_res,
geometry= scat_geometry_,uid=uid_ ,
show_fit=False, path= data_dir, plot_all_range=False)
#plot1D( x= qr, y=g2_fit_paras['beta'], ls='-', m = 'o', c='b', ylabel=r'$\beta$', xlabel=r'$Q( \AA^{-1} ) $' )
define_good_series = False
#define_good_series = True
if define_good_series:
good_start = 5
FD = Multifile(filename, beg = good_start, end = 1000)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
data_pixel = None
if run_two_time:
data_pixel = Get_Pixel_Arrayc( FD, pixelist, norm= norm ).get_data()
import time
t0=time.time()
g12b=None
if run_two_time:
g12b = auto_two_Arrayc( data_pixel, roi_mask, index = None )
if run_dose:
np.save( data_dir + 'uid=%s_g12b'%uid, g12b)
run_time( t0 )
if run_two_time:
show_C12(g12b, q_ind= 2, qlabel=dict(itertools.islice(qval_dict.items(),g2.shape[1])),N1= FD.beg,logs=False, N2=min( FD.end,10000), vmin= 1.0, vmax=1.18,timeperframe=timeperframe,save=True, path= data_dir, uid = uid_ ,cmap=plt.cm.jet)#cmap=cmap_albula)
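# (Added note) g12b holds the two-time correlation, indexed roughly as
# (frame_i, frame_j, roi): cuts parallel to the main diagonal correspond to fixed
# lag times, which is why a one-time g2 can be recovered from it further below.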
multi_tau_steps = True
if run_two_time:
if lag_steps is None:
num_bufs=8
noframes = FD.end - FD.beg
num_levels = int(np.log( noframes/(num_bufs-1))/np.log(2) +1) +1
tot_channels, lag_steps, dict_lag = multi_tau_lags(num_levels, num_bufs)
max_taus= lag_steps.max()
#max_taus= lag_steps.max()
max_taus = Nimg
t0=time.time()
#tausb = np.arange( g2b.shape[0])[:max_taus] *timeperframe
if multi_tau_steps:
lag_steps_ = lag_steps[ lag_steps <= g12b.shape[0] ]
g2b = get_one_time_from_two_time(g12b)[lag_steps_]
tausb = lag_steps_ *timeperframe
else:
tausb = (np.arange( g12b.shape[0]) *timeperframe)[:-200]
g2b = (get_one_time_from_two_time(g12b))[:-200]
run_time(t0)
g2b_pds = save_g2_general( g2b, taus=tausb, qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
qz=None, uid=uid_+'_'+q_mask_name+'_g2b.csv', path= data_dir, return_res=True )
if run_two_time:
g2b_fit_result, tausb_fit, g2b_fit = get_g2_fit_general( g2b, tausb,
function = fit_g2_func, vlim=[0.95, 1.05], fit_range= None,
fit_variables={'baseline':False, 'beta': True, 'alpha':True,'relaxation_rate':True},
guess_values={'baseline':1.0,'beta': 0.15,'alpha':1.0,'relaxation_rate':1e-3,},
guess_limits = dict( baseline =[1, 1.8], alpha=[0, 2],
beta = [0, 1], relaxation_rate= [1e-8, 5000]) )
g2b_fit_paras = save_g2_fit_para_tocsv(g2b_fit_result, filename= uid_ +'_'+q_mask_name+'_g2b_fit_paras.csv', path=data_dir )
#plot1D( x = tausb[1:], y =g2b[1:,0], ylim=[0.95, 1.46], xlim = [0.0001, 10], m='', c='r', ls = '-',
# logx=True, title='one_time_corelation', xlabel = r"$\tau $ $(s)$", )
if run_two_time:
plot_g2_general( g2_dict={1:g2b, 2:g2b_fit}, taus_dict={1:tausb, 2:tausb_fit}, vlim=[0.95, 1.05],
qval_dict=dict(itertools.islice(qval_dict.items(),g2.shape[1])), fit_res= g2b_fit_result, geometry=scat_geometry_,filename=uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_b_fit')
if run_two_time:
D0b, qrate_fit_resb = get_q_rate_fit_general( dict(itertools.islice(qval_dict.items(),g2.shape[1])), g2b_fit_paras['relaxation_rate'],
fit_range=[0, 10], geometry= scat_geometry_ )
#qval_dict, g2b_fit_paras['relaxation_rate']
if run_two_time:
if True:
fs, fe = 0,8
#fs, fe = 0,12
qval_dict_ = {k:qval_dict[k] for k in list(qval_dict.keys())[fs:fe] }
D0b, qrate_fit_resb = get_q_rate_fit_general( qval_dict_, g2b_fit_paras['relaxation_rate'][fs:fe], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict_, g2b_fit_paras['relaxation_rate'][fs:fe], qrate_fit_resb,
geometry= scat_geometry_,uid=uid_ +'_two_time' , path= data_dir )
else:
D0b, qrate_fit_resb = get_q_rate_fit_general( qval_dict, g2b_fit_paras['relaxation_rate'],
fit_range=[0, 10], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict, g2b_fit_paras['relaxation_rate'], qrate_fit_resb,
geometry= scat_geometry_,uid=uid_ +'_two_time', show_fit=False,path= data_dir, plot_all_range= True )
if run_two_time and run_one_time:
plot_g2_general( g2_dict={1:g2, 2:g2b}, taus_dict={1:taus, 2:tausb},vlim=[0.99, 1.007],
qval_dict=dict(itertools.islice(qval_dict.items(),g2.shape[1])), g2_labels=['from_one_time', 'from_two_time'],
geometry=scat_geometry_,filename=uid_+'_g2_two_g2', path= data_dir, ylabel='g2', )
#run_dose = True
if run_dose:
get_two_time_mulit_uids( [uid], roi_mask, norm= norm, bin_frame_number=1,
path= data_dir0, force_generate=False, compress_path = cmp_path + '/' )
try:
print( md['transmission'] )
except:
md['transmission'] =1
exposuretime
if run_dose:
N = len(imgs)
print(N)
#exposure_dose = md['transmission'] * exposuretime* np.int_([ N/16, N/8, N/4 ,N/2, 3*N/4, N*0.99 ])
exposure_dose = md['transmission'] * exposuretime* np.int_([ N/8, N/4 ,N/2, 3*N/4, N*0.99 ])
print( exposure_dose )
if run_dose:
taus_uids, g2_uids = get_series_one_time_mulit_uids( [ uid ], qval_dict, good_start=good_start,
path= data_dir0, exposure_dose = exposure_dose, num_bufs =8, save_g2= False,
dead_time = 0, trans = [ md['transmission'] ] )
if run_dose:
plot_dose_g2( taus_uids, g2_uids, ylim=[1.0, 1.2], vshift= 0.00,
qval_dict = qval_dict, fit_res= None, geometry= scat_geometry_,
filename= '%s_dose_analysis'%uid_,
path= data_dir, function= None, ylabel='g2_Dose', g2_labels= None, append_name= '' )
if run_dose:
qth_interest = 1
plot_dose_g2( taus_uids, g2_uids, qth_interest= qth_interest, ylim=[0.98, 1.2], vshift= 0.00,
qval_dict = qval_dict, fit_res= None, geometry= scat_geometry_,
filename= '%s_dose_analysis'%uidstr,
path= data_dir, function= None, ylabel='g2_Dose', g2_labels= None, append_name= '' )
0.33/0.00134
if run_four_time:
t0=time.time()
g4 = get_four_time_from_two_time(g12b, g2=g2b)[:int(max_taus)]
run_time(t0)
if run_four_time:
taus4 = np.arange( g4.shape[0])*timeperframe
g4_pds = save_g2_general( g4, taus=taus4, qr=np.array( list( qval_dict.values() ) )[:,0],
qz=None, uid=uid_ +'_g4.csv', path= data_dir, return_res=True )
if run_four_time:
plot_g2_general( g2_dict={1:g4}, taus_dict={1:taus4},vlim=[0.95, 1.05], qval_dict=qval_dict, fit_res= None,
geometry=scat_geometry_,filename=uid_+'_g4',path= data_dir, ylabel='g4')
#run_xsvs =True
if run_xsvs:
max_cts = get_max_countc(FD, roi_mask )
#max_cts = 15 #for eiger 500 K
qind, pixelist = roi.extract_label_indices( roi_mask )
noqs = len( np.unique(qind) )
nopr = np.bincount(qind, minlength=(noqs+1))[1:]
#time_steps = np.array( utils.geometric_series(2, len(imgs) ) )
time_steps = [0,1] #only run the first two levels
num_times = len(time_steps)
times_xsvs = exposuretime + (2**( np.arange( len(time_steps) ) ) -1 ) * timeperframe
print( 'The max counts are: %s'%max_cts )
if run_xsvs:
if roi_avg is None:
times_roi, mean_int_sets = cal_each_ring_mean_intensityc(FD, roi_mask, timeperframe = None, )
roi_avg = np.average( mean_int_sets, axis=0)
t0=time.time()
spec_bins, spec_his, spec_std, spec_sum = xsvsp( FD, np.int_(roi_mask), norm=None,
max_cts=int(max_cts+2), bad_images=bad_frame_list, only_two_levels=True )
spec_kmean = np.array( [roi_avg * 2**j for j in range( spec_his.shape[0] )] )
run_time(t0)
spec_pds = save_bin_his_std( spec_bins, spec_his, spec_std, filename=uid_+'_spec_res.csv', path=data_dir )
if run_xsvs:
ML_val, KL_val,K_ = get_xsvs_fit( spec_his, spec_sum, spec_kmean,
spec_std, max_bins=2, fit_range=[1,60], varyK= False )
#print( 'The observed average photon counts are: %s'%np.round(K_mean,4))
#print( 'The fitted average photon counts are: %s'%np.round(K_,4))
print( 'The difference sum of average photon counts between fit and data are: %s'%np.round(
abs(np.sum( spec_kmean[0,:] - K_ )),4))
print( '#'*30)
qth= 0
print( 'The fitted M for Qth= %s are: %s'%(qth, ML_val[qth]) )
print( K_[qth])
print( '#'*30)
if run_xsvs:
qr = [qval_dict[k][0] for k in list(qval_dict.keys()) ]
plot_xsvs_fit( spec_his, ML_val, KL_val, K_mean = spec_kmean, spec_std=spec_std,
xlim = [0,10], vlim =[.9, 1.1],
uid=uid_, qth= qth_interest, logy= True, times= times_xsvs, q_ring_center=qr, path=data_dir)
plot_xsvs_fit( spec_his, ML_val, KL_val, K_mean = spec_kmean, spec_std = spec_std,
xlim = [0,15], vlim =[.9, 1.1],
uid=uid_, qth= None, logy= True, times= times_xsvs, q_ring_center=qr, path=data_dir )
if run_xsvs:
contrast_factorL = get_contrast( ML_val)
spec_km_pds = save_KM( spec_kmean, KL_val, ML_val, qs=qr, level_time=times_xsvs, uid=uid_, path = data_dir )
#spec_km_pds
if run_xsvs:
plot_g2_contrast( contrast_factorL, g2b, times_xsvs, tausb, qr,
vlim=[0.8,1.2], qth = qth_interest, uid=uid_,path = data_dir, legend_size=14)
plot_g2_contrast( contrast_factorL, g2b, times_xsvs, tausb, qr,
vlim=[0.8,1.2], qth = None, uid=uid_,path = data_dir, legend_size=4)
#from chxanalys.chx_libs import cmap_vge, cmap_albula, Javascript
md['mask_file']= mask_path + mask_name
md['roi_mask_file']= fp
md['mask'] = mask
#md['NOTEBOOK_FULL_PATH'] = data_dir + get_current_pipeline_fullpath(NFP).split('/')[-1]
md['good_start'] = good_start
md['bad_frame_list'] = bad_frame_list
md['avg_img'] = avg_img
md['roi_mask'] = roi_mask
md['setup_pargs'] = setup_pargs
if scat_geometry == 'gi_saxs':
md['Qr'] = Qr
md['Qz'] = Qz
md['qval_dict'] = qval_dict
md['beam_center_x'] = inc_x0
md['beam_center_y']= inc_y0
md['beam_refl_center_x'] = refl_x0
md['beam_refl_center_y'] = refl_y0
elif scat_geometry == 'gi_waxs':
md['beam_center_x'] = center[1]
md['beam_center_y']= center[0]
else:
md['qr']= qr
#md['qr_edge'] = qr_edge
md['qval_dict'] = qval_dict
md['beam_center_x'] = center[1]
md['beam_center_y']= center[0]
md['beg'] = FD.beg
md['end'] = FD.end
md['t_print0'] = t_print0
md['qth_interest'] = qth_interest
md['metadata_file'] = data_dir + 'uid=%s_md.pkl'%uid
psave_obj( md, data_dir + 'uid=%s_md.pkl'%uid ) #save the setup parameters
save_dict_csv( md, data_dir + 'uid=%s_md.csv'%uid, 'w')
Exdt = {}
if scat_geometry == 'gi_saxs':
for k,v in zip( ['md', 'roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list', 'qr_1d_pds'],
[md, roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list, qr_1d_pds] ):
Exdt[ k ] = v
elif scat_geometry == 'saxs':
for k,v in zip( ['md', 'q_saxs', 'iq_saxs','iqst','qt','roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list'],
[md, q_saxs, iq_saxs, iqst, qt,roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list] ):
Exdt[ k ] = v
elif scat_geometry == 'gi_waxs':
for k,v in zip( ['md', 'roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list'],
[md, roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list] ):
Exdt[ k ] = v
if run_waterfall:Exdt['wat'] = wat
if run_t_ROI_Inten:Exdt['times_roi'] = times_roi;Exdt['mean_int_sets']=mean_int_sets
if run_one_time:
if run_invariant_analysis:
for k,v in zip( ['taus','g2','g2_fit_paras', 'time_stamp','invariant'], [taus,g2,g2_fit_paras,time_stamp,invariant] ):Exdt[ k ] = v
else:
for k,v in zip( ['taus','g2','g2_fit_paras' ], [taus,g2,g2_fit_paras ] ):Exdt[ k ] = v
if run_two_time:
for k,v in zip( ['tausb','g2b','g2b_fit_paras', 'g12b'], [tausb,g2b,g2b_fit_paras,g12b] ):Exdt[ k ] = v
#for k,v in zip( ['tausb','g2b','g2b_fit_paras', ], [tausb,g2b,g2b_fit_paras] ):Exdt[ k ] = v
if run_dose:
for k,v in zip( [ 'taus_uids', 'g2_uids' ], [taus_uids, g2_uids] ):Exdt[ k ] = v
if run_four_time:
for k,v in zip( ['taus4','g4'], [taus4,g4] ):Exdt[ k ] = v
if run_xsvs:
for k,v in zip( ['spec_kmean','spec_pds','times_xsvs','spec_km_pds','contrast_factorL'],
[ spec_kmean,spec_pds,times_xsvs,spec_km_pds,contrast_factorL] ):Exdt[ k ] = v
#%run chxanalys_link/chxanalys/Create_Report.py
export_xpcs_results_to_h5( 'uid=%s_%s_Res.h5'%(md['uid'],q_mask_name), data_dir, export_dict = Exdt )
#extract_dict = extract_xpcs_results_from_h5( filename = 'uid=%s_Res.h5'%md['uid'], import_dir = data_dir )
#g2npy_filename = data_dir + '/' + 'uid=%s_g12b.npy'%uid
#print(g2npy_filename)
#if os.path.exists( g2npy_filename):
# print('Will delete this file=%s.'%g2npy_filename)
# os.remove( g2npy_filename )
#extract_dict = extract_xpcs_results_from_h5( filename = 'uid=%s_Res.h5'%md['uid'], import_dir = data_dir )
#extract_dict = extract_xpcs_results_from_h5( filename = 'uid=%s_Res.h5'%md['uid'], import_dir = data_dir )
pdf_out_dir = os.path.join('/XF11ID/analysis/', CYCLE, username, 'Results/')
pdf_filename = "XPCS_Analysis_Report2_for_uid=%s%s%s.pdf"%(uid,pdf_version,q_mask_name)
if run_xsvs:
pdf_filename = "XPCS_XSVS_Analysis_Report_for_uid=%s%s%s.pdf"%(uid,pdf_version,q_mask_name)
#%run /home/yuzhang/chxanalys_link/chxanalys/Create_Report.py
data_dir
make_pdf_report( data_dir, uid, pdf_out_dir, pdf_filename, username,
run_fit_form,run_one_time, run_two_time, run_four_time, run_xsvs, run_dose,
report_type= scat_geometry, report_invariant= run_invariant_analysis,
md = md )
#%run /home/yuzhang/chxanalys_link/chxanalys/chx_olog.py
if att_pdf_report:
os.environ['HTTPS_PROXY'] = 'https://proxy:8888'
os.environ['no_proxy'] = 'cs.nsls2.local,localhost,127.0.0.1'
update_olog_uid_with_file( uid[:6], text='Add XPCS Analysis PDF Report',
filename=pdf_out_dir + pdf_filename, append_name='_R1' )
if save_oavs:
os.environ['HTTPS_PROXY'] = 'https://proxy:8888'
os.environ['no_proxy'] = 'cs.nsls2.local,localhost,127.0.0.1'
update_olog_uid_with_file( uid[:6], text='Add OVA images',
filename= data_dir + 'uid=%s_OVA_images.png'%uid, append_name='_img' )
# except:
uid
#save_current_pipeline( NFP, data_dir)
#get_current_pipeline_fullpath(NFP)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Control Runs Here
Step2: Make a directory for saving results
Step3: Load Metadata & Image Data
Step4: Don't Change the lines below here
Step5: Load ROI defined by "XPCS_Setup" Pipeline
Step6: Load ROI mask depending on data analysis type
Step7: get data
Step8: Load Chip mask depending on detector
Step9: Overwrite Some Metadata if Wrong Input
Step10: Apply Mask
Step11: Check several frames average intensity
Step12: Compress Data
Step13: Get bad frame list by a polynomial fit
Step14: Create a new mask by masking the bad pixels and get a new avg_img
Step15: Plot time~ total intensity of each frame
Step16: Get Dynamic Mask (currently designed for 500K)
Step17: Static Analysis
Step18: Time Dependent I(q) Analysis
Step19: GiSAXS Scattering Geometry
Step20: Static Analysis for gisaxs
Step21: Make a Profile Plot
Step22: Dynamic Analysis for gi_saxs
Step23: GiWAXS Scattering Geometry
Step24: Extract the labeled array
Step25: Number of pixels in each q box
Step26: Check one ROI intensity
Step27: Do a waterfall analysis
Step28: Analysis for mass center of reflective beam center
Step29: One time Correlation
Step30: Fit g2
Step31: For two-time
Step32: Run Dose dependent analysis
Step33: Four Time Correlation
Step34: Speckle Visibility
Step35: Do histogram
Step36: Do histogram fit by negative binomial function with maximum likelihood method
Step37: Plot fit results
Step38: Get contrast
Step39: Plot contrast with g2 results
Step40: Export Results to a HDF5 File
Step41: Create PDF Report
Step42: Attach the PDF report to Olog
Step43: Save the OVA image
Step44: The End!
Step45: Save the current pipeline in Results folder
|
15,478 | <ASSISTANT_TASK:>
Python Code:
import pypsa
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(rc={"figure.figsize": (9, 5)})
network = pypsa.Network()
for i in range(3):
network.add("Bus", "electric bus {}".format(i), v_nom=20.0)
network.add("Bus", "heat bus {}".format(i), carrier="heat")
network.buses
network.buses["carrier"].value_counts()
for i in range(3):
network.add(
"Line",
"line {}".format(i),
bus0="electric bus {}".format(i),
bus1="electric bus {}".format((i + 1) % 3),
x=0.1,
s_nom=1000,
)
network.lines
for i in range(3):
network.add(
"Link",
"heat pump {}".format(i),
bus0="electric bus {}".format(i),
bus1="heat bus {}".format(i),
p_nom=100,
efficiency=3.0,
)
network.links
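# (Added note) For a Link, "efficiency" scales the power drawn at bus0 into the power
# delivered at bus1 (p1 = -efficiency * p0), so efficiency=3.0 models a heat pump
# with COP 3: 1 MW of electricity taken from the electric bus delivers 3 MW of heat.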
network.add("Carrier", "gas", co2_emissions=0.27)
network.add("Carrier", "biomass", co2_emissions=0.0)
network.carriers
network.add(
"Generator",
"gas generator",
bus="electric bus 0",
p_nom=100,
marginal_cost=50,
carrier="gas",
efficiency=0.3,
)
network.add(
"Generator",
"biomass generator",
bus="electric bus 1",
p_nom=100,
marginal_cost=100,
efficiency=0.3,
carrier="biomass",
)
for i in range(3):
network.add(
"Generator",
"boiler {}".format(i),
bus="heat bus {}".format(i),
p_nom=1000,
efficiency=0.9,
marginal_cost=20.0,
carrier="gas",
)
network.generators
for i in range(3):
network.add(
"Load",
"electric load {}".format(i),
bus="electric bus {}".format(i),
p_set=i * 10,
)
for i in range(3):
network.add(
"Load",
"heat load {}".format(i),
bus="heat bus {}".format(i),
p_set=(3 - i) * 10,
)
network.loads
def run_lopf():
network.lopf()
df = pd.concat(
[
network.generators_t.p.loc["now"],
network.links_t.p0.loc["now"],
network.loads_t.p.loc["now"],
],
keys=["Generators", "Links", "Line"],
names=["Component", "index"],
).reset_index(name="Production")
sns.barplot(data=df, x="index", y="Production", hue="Component")
plt.title(f"Objective: {network.objective}")
plt.xticks(rotation=90)
plt.tight_layout()
run_lopf()
network.links.marginal_cost = 10
run_lopf()
network.add("GlobalConstraint", "co2_limit", sense="<=", constant=0.0)
run_lopf()
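# (Added note) The zero-tonne "co2_limit" GlobalConstraint applies to all carriers
# with a co2_emissions attribute, so in this last run the gas generator and gas
# boilers are forced out and the loads are served by the biomass generator plus the
# heat pumps it powers. The constraint's shadow price (an implied CO2 price) should
# be available after solving, e.g. network.global_constraints.mu in recent PyPSA
# versions.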
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add three buses of AC and heat carrier each
Step2: Add three lines in a ring
Step3: Connect the electric to the heat buses with heat pumps with COP 3
Step4: Add carriers
Step5: Add a gas generator at bus 0, a biomass generator at bus 1 and a boiler at all heat buses
Step6: Add electric loads and heat loads.
Step7: We define a function for the LOPF
Step8: Now, rerun with marginal costs for the heat pump operation.
Step9: Finally, rerun with no CO2 emissions.
|
15,479 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
X_train = np.linspace(0, 1, 100)
X_test = np.linspace(0, 1, 1000)
@np.vectorize
def target(x):
return x > 0.5
Y_train = target(X_train) + np.random.randn(*X_train.shape) * 0.1
Y_test = target(X_test) + np.random.randn(*X_test.shape) * 0.1
plt.figure(figsize = (16, 9));
plt.scatter(X_train, Y_train, s=50);
plt.title('Train dataset');
plt.xlabel('X');
plt.ylabel('Y');
def loss_mse(predict, true):
return np.mean((predict - true) ** 2)
def stamp_fit(x, y):
root_prediction = np.mean(y)
root_loss = loss_mse(root_prediction, y)
gain = []
_, thresholds = np.histogram(x)
thresholds = thresholds[1:-1]
for i in thresholds:
left_predict = np.mean(y[x < i])
left_weight = np.sum(x < i) / x.shape[0]
right_predict = np.mean(y[x >= i])
right_weight = np.sum(x >= i) / x.shape[0]
loss = left_weight * loss_mse(left_predict, y[x < i]) + right_weight * loss_mse(right_predict, y[x >= i])
gain.append(root_loss - loss)
threshold = thresholds[np.argmax(gain)]
left_predict = np.mean(y[x < threshold])
right_predict = np.mean(y[x >= threshold])
return threshold, left_predict, right_predict
@np.vectorize
def stamp_predict(x, threshold, predict_l, predict_r):
prediction = predict_l if x < threshold else predict_r
return prediction
predict_params = stamp_fit(X_train, Y_train)
prediction = stamp_predict(X_test, *predict_params)
loss_mse(prediction, Y_test)
plt.figure(figsize = (16, 9));
plt.scatter(X_test, Y_test, s=50);
plt.plot(X_test, prediction, 'r');
plt.title('Test dataset');
plt.xlabel('X');
plt.ylabel('Y');
from sklearn.tree import DecisionTreeRegressor
def get_grid(data):
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1
y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1
return np.meshgrid(np.arange(x_min, x_max, 0.01),
np.arange(y_min, y_max, 0.01))
data_x = np.random.normal(size=(100, 2))
data_y = (data_x[:, 0] ** 2 + data_x[:, 1] ** 2) ** 0.5
plt.figure(figsize=(8, 8));
plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=100, cmap='spring');
plt.figure(figsize=(20, 6))
for i in range(3):
clf = DecisionTreeRegressor(random_state=42)
indecies = np.random.randint(data_x.shape[0], size=int(data_x.shape[0] * 0.9))
clf.fit(data_x[indecies], data_y[indecies])
xx, yy = get_grid(data_x)
predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.subplot2grid((1, 3), (0, i))
plt.pcolormesh(xx, yy, predicted, cmap='winter')
plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=30, cmap='winter', edgecolor='k')
plt.figure(figsize=(14, 14))
for i, max_depth in enumerate([2, 4, None]):
for j, min_samples_leaf in enumerate([15, 5, 1]):
clf = DecisionTreeRegressor(max_depth=max_depth, min_samples_leaf=min_samples_leaf)
clf.fit(data_x, data_y)
xx, yy = get_grid(data_x)
predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.subplot2grid((3, 3), (i, j))
plt.pcolormesh(xx, yy, predicted, cmap='spring')
plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=30, cmap='spring', edgecolor='k')
plt.title('max_depth=' + str(max_depth) + ', min_samples_leaf: ' + str(min_samples_leaf))
def median(X):
return np.median(X)
def make_sample_cauchy(n_samples):
sample = np.random.standard_cauchy(size=n_samples)
return sample
X = make_sample_cauchy(int(1e2))
plt.hist(X, bins=int(1e1));
med = median(X)
med
def make_sample_bootstrap(X):
size = X.shape[0]
idx_range = range(size)
new_idx = np.random.choice(idx_range, size, replace=True)
return X[new_idx]
K = 500
median_boot_samples = []
for i in range(K):
boot_sample = make_sample_bootstrap(X)
meadian_boot_sample = median(boot_sample)
median_boot_samples.append(meadian_boot_sample)
median_boot_samples = np.array(median_boot_samples)
mean = np.mean(median_boot_samples)
std = np.std(median_boot_samples)
print(mean, std)
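# (Added check, not in the original notebook) The asymptotic std of the sample median
# of n standard-Cauchy draws is pi / (2 * sqrt(n)) (from Var ~ 1 / (4 n f(0)^2) with
# density f(0) = 1/pi), i.e. about 0.157 for n = 100, so the bootstrap std printed
# above should be of a similar magnitude.
asymptotic_std = np.pi / (2 * np.sqrt(X.shape[0]))
print(asymptotic_std)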
plt.hist(median_boot_samples, bins=int(50));
from sklearn.ensemble import RandomForestRegressor
clf = RandomForestRegressor(n_estimators=100)
clf.fit(data_x, data_y)
xx, yy = get_grid(data_x)
predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(8, 8));
plt.pcolormesh(xx, yy, predicted, cmap='spring');
plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=100, cmap='spring', edgecolor='k');
from sklearn.datasets import load_boston
data = load_boston()
X = data.data
y = data.target
from sklearn.model_selection import KFold, cross_val_score
cv = KFold(shuffle=True, random_state=1011)
regr = DecisionTreeRegressor()
print(cross_val_score(regr, X, y, cv=cv,
scoring='r2').mean())
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import RandomForestRegressor
# usual CV code
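# (Added sketch, not part of the original notebook) A minimal version of the usual CV
# code hinted at above: score bagged trees and a random forest with the same KFold
# splitter used for the single tree, with default hyperparameters as a starting point.
print(cross_val_score(BaggingRegressor(DecisionTreeRegressor()), X, y,
                      cv=cv, scoring='r2').mean())
print(cross_val_score(RandomForestRegressor(n_estimators=100), X, y,
                      cv=cv, scoring='r2').mean())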
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='tree'></a>
Step2: <a id='stamp'></a>
Step3: <a id='lim'></a>
Step4: Sensitivity with respect to the subsample
Step5: Sensitivity with respect to the hyper parameters
Step6: To overcome these disadvantages, we will consider bagging, or bootstrap aggregation
Step7: So, our model median will be
Step8: The exact variance formula for the sample Cauchy median is as follows
Step9: Second, for each of the $K$ bootstrap samples you should estimate its median.
Step10: Now we can obtain the mean and variance from median_boot_samples, as is usually done in statistics
Step11: Please put your estimate of the std, rounded to 3 decimals, into the form
Step12: <a id='rf'></a>
Step13: Note that all the boundaries become much smoother. Now we will compare methods on the Boston Dataset
Step14: Task 1
Step15: Find best parameter with CV. Please put score at the https
|
15,480 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import spatial
def sim1(n):
v1 = np.random.randint(0, 100, n)
v2 = np.random.randint(0, 100, n)
return 1 - spatial.distance.cosine(v1, v2)
def sim2(n):
v1 = np.random.randint(0, 100, n)
v2 = np.random.randint(0, 100, n)
return np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2)
import math
def sim3(n):
v1 = np.random.randint(0, 100, n)
v2 = np.random.randint(0, 100, n)
return sum(v1 * v2) / math.sqrt(sum(v1 ** 2)) / math.sqrt(sum(v2 ** 2))
from itertools import izip
def dot_product(v1, v2):
return sum(map(lambda x: x[0] * x[1], izip(v1, v2)))
def sim4(n):
v1 = np.random.randint(0, 100, n)
v2 = np.random.randint(0, 100, n)
prod = dot_product(v1, v2)
len1 = math.sqrt(dot_product(v1, v1))
len2 = math.sqrt(dot_product(v2, v2))
return prod / (len1 * len2)
%timeit sim1(400)
%timeit sim2(400)
%timeit sim3(400)
%timeit sim4(400)
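# (Added note) The point of the four timings above: sim1/sim2 stay inside
# scipy/numpy vectorized code, while sim3 and sim4 drop into Python-level loops
# (builtin sum over array elements, itertools.izip plus a lambda), so the comparison
# shows what the pure-Python paths cost. Absolute numbers depend on the machine, so
# rerun locally before drawing conclusions.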
from datetime import datetime as dt
start = dt.now()
start.date(), start.time(), start
dt.now() - start
import logging
fmtstr = '%(asctime)s [%(levelname)s][%(name)s] %(message)s'
datefmtstr = '%Y/%m/%d %H:%M:%S'
if len(logging.getLogger().handlers) >= 1:
logging.getLogger().handlers[0].setFormatter(logging.Formatter(fmtstr, datefmtstr))
else:
logging.basicConfig(format=fmtstr, datefmt=datefmtstr)
# If logging.warning is called directly, the root logger is used
logging.warning("please set %d in %s", 100, "length")
# 在root logger下面增加child logger
aaa_logger = logging.getLogger('aaa')
bbb_logger = aaa_logger.getChild('bbb')
ccc_logger = bbb_logger.getChild('ccc')
aaa_logger.warn("hello")
bbb_logger.warn("hello")
# When the loggers form a tree, the logger name becomes aaa.bbb.ccc
ccc_logger.warn("hello")
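# (Added sketch) Step 4 of the prompt is cut off; the idiom it appears to point at is
# a module-level logger keyed on __name__, which slots into the same dotted-name tree
# (inside a notebook __name__ is simply "__main__"):
module_logger = logging.getLogger(__name__)
module_logger.warning("hello from %s", __name__)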
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is the fastest way to compute cosine similarity in Python?
Step2: Conclusion
Step3: logging
Step4: When calling from within a module, use
|
15,481 | <ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
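# (added sketch, not part of the assignment solution) The usual fully vectorized trick
# expands ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b, so the whole distance matrix comes
# from a single matrix product:
test_sq = np.sum(X_test ** 2, axis=1).reshape(-1, 1)     # (num_test, 1)
train_sq = np.sum(X_train ** 2, axis=1).reshape(1, -1)   # (1, num_train)
dists_check = np.sqrt(np.maximum(test_sq + train_sq - 2 * X_test.dot(X_train.T), 0))
print 'Sanity-check difference: %f' % np.linalg.norm(dists - dists_check, ord='fro')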
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds, axis = 0)
Y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
k_to_accuracies[k] = []
for i in xrange(num_folds):
train_index = range(num_folds)
del train_index[i]
X_temp = np.zeros((0, X_train.shape[1]))
Y_temp = []
for j in train_index:
X_temp = np.append(X_temp, X_train_folds[j], axis = 0)
Y_temp = np.append(Y_temp, Y_train_folds[j])
classifier = KNearestNeighbor()
classifier.train(X_temp, Y_temp)
dists = classifier.compute_distances_no_loops(X_train_folds[i])
y_test_pred = classifier.predict_labels(dists, k=k)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == Y_train_folds[i])
accuracy = float(num_correct) / Y_train_folds[i].shape[0]
k_to_accuracies[k].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
15,482 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*30].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
# check data shape
print("input samples:", train_features.shape[0])
print("input features:", train_features.shape[1])
batch = np.random.choice(train_features.index, size=128)
count = 0
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
if count == 0:
print(record.shape)
print(target.shape)
inputs = np.array(record, ndmin=2).T
targets = np.array(target, ndmin=2).T
print(inputs.shape)
print(targets.shape)
count += 1
print("count:", count)
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = (lambda x: 1 / (1 + np.exp(-x)))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
# shape symbol: i - numInputs (56), h - numHidden, o - numOutputs
# inputs(i, 1) , not batched
# targets(1, 1)
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
# shape: inputs(i, 1).T dot weights(h, i).T => (1, h)
hidden_inputs = np.dot(inputs.T, self.weights_input_to_hidden.T) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
# shape: inputs(1, h) dot weights(o, h).T => (1, o)
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output.T) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
# shape(1, o)
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
# shape: inputs(1, o) dot weights(o, h) => (1, h)
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output) # errors propagated to the hidden layer
hidden_grad = hidden_errors * hidden_outputs * (1 - hidden_outputs) # hidden layer gradients
# TODO: Update the weights
# shape: (1, o).T dot (1, h) => (o, h)
self.weights_hidden_to_output += self.lr * np.dot(output_errors.T, hidden_outputs) # update hidden-to-output weights with gradient descent step
# shape: (1, h).T dot (1, i) => (h, i)
#self.weights_input_to_hidden += self.lr * np.dot(hidden_grad.T, inputs.T) # update input-to-hidden weights with gradient descent step
self.weights_input_to_hidden += self.lr * hidden_grad.T * inputs.T # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
#hidden_inputs = # signals into hidden layer
#hidden_outputs = # signals from hidden layer
hidden_inputs = np.dot(inputs.T, self.weights_input_to_hidden.T) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
#final_inputs = # signals into final output layer
#final_outputs = # signals from final output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output.T) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs.T
def MSE(y, Y):
return np.mean((y-Y)**2)
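# (added) quick shape check with an untrained network -- illustration only;
# note that run() returns a (1, n_records) row vector because of the final transpose
_check_net = NeuralNetwork(train_features.shape[1], 2, 1, 0.1)
print(_check_net.run(train_features[:5].values).shape)   # expect (1, 5)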
import sys
### Set the hyperparameters here ###
epochs = 2000 # 100
learning_rate = 0.0699 # 0.1
hidden_nodes = 28 # input feature has 56, half is 28 # 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
#if False:
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
print(network.weights_input_to_hidden)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
|
15,483 | <ASSISTANT_TASK:>
Python Code:
import IPython
print("pyspark version:" + str(sc.version))
print("Ipython version:" + str(IPython.__version__))
# map
x = sc.parallelize([1,2,3]) # sc = spark context, parallelize creates an RDD from the passed object
y = x.map(lambda x: (x,x**2))
print(x.collect()) # collect copies RDD elements to a list on the driver
print(y.collect())
# flatMap
x = sc.parallelize([1,2,3])
y = x.flatMap(lambda x: (x, 100*x, x**2))
print(x.collect())
print(y.collect())
# mapPartitions
x = sc.parallelize([1,2,3], 2)
def f(iterator): yield sum(iterator)
y = x.mapPartitions(f)
print(x.glom().collect()) # glom() flattens elements on the same partition
print(y.glom().collect())
# mapPartitionsWithIndex
x = sc.parallelize([1,2,3], 2)
def f(partitionIndex, iterator): yield (partitionIndex,sum(iterator))
y = x.mapPartitionsWithIndex(f)
print(x.glom().collect()) # glom() flattens elements on the same partition
print(y.glom().collect())
# getNumPartitions
x = sc.parallelize([1,2,3], 2)
y = x.getNumPartitions()
print(x.glom().collect())
print(y)
# filter
x = sc.parallelize([1,2,3])
y = x.filter(lambda x: x%2 == 1) # filters out even elements
print(x.collect())
print(y.collect())
# distinct
x = sc.parallelize(['A','A','B'])
y = x.distinct()
print(x.collect())
print(y.collect())
# sample
x = sc.parallelize(range(7))
ylist = [x.sample(withReplacement=False, fraction=0.5) for i in range(5)] # call 'sample' 5 times
print('x = ' + str(x.collect()))
for cnt,y in zip(range(len(ylist)), ylist):
print('sample:' + str(cnt) + ' y = ' + str(y.collect()))
# takeSample
x = sc.parallelize(range(7))
ylist = [x.takeSample(withReplacement=False, num=3) for i in range(5)] # call 'sample' 5 times
print('x = ' + str(x.collect()))
for cnt,y in zip(range(len(ylist)), ylist):
print('sample:' + str(cnt) + ' y = ' + str(y)) # no collect on y
# union
x = sc.parallelize(['A','A','B'])
y = sc.parallelize(['D','C','A'])
z = x.union(y)
print(x.collect())
print(y.collect())
print(z.collect())
# intersection
x = sc.parallelize(['A','A','B'])
y = sc.parallelize(['A','C','D'])
z = x.intersection(y)
print(x.collect())
print(y.collect())
print(z.collect())
# sortByKey
x = sc.parallelize([('B',1),('A',2),('C',3)])
y = x.sortByKey()
print(x.collect())
print(y.collect())
# sortBy
x = sc.parallelize(['Cat','Apple','Bat'])
def keyGen(val): return val[0]
y = x.sortBy(keyGen)
print(y.collect())
# glom
x = sc.parallelize(['C','B','A'], 2)
y = x.glom()
print(x.collect())
print(y.collect())
# cartesian
x = sc.parallelize(['A','B'])
y = sc.parallelize(['C','D'])
z = x.cartesian(y)
print(x.collect())
print(y.collect())
print(z.collect())
# groupBy
x = sc.parallelize([1,2,3])
y = x.groupBy(lambda x: 'A' if (x%2 == 1) else 'B' )
print(x.collect())
print([(j[0],[i for i in j[1]]) for j in y.collect()]) # y is nested, this iterates through it
# pipe
x = sc.parallelize(['A', 'Ba', 'C', 'AD'])
y = x.pipe('grep -i "A"') # calls out to grep, may fail under Windows
print(x.collect())
print(y.collect())
# foreach
from __future__ import print_function
x = sc.parallelize([1,2,3])
def f(el):
'''side effect: append the current RDD elements to a file'''
f1=open("./foreachExample.txt", 'a+')
print(el,file=f1)
open('./foreachExample.txt', 'w').close() # first clear the file contents
y = x.foreach(f) # writes into foreachExample.txt
print(x.collect())
print(y) # foreach returns 'None'
# print the contents of foreachExample.txt
with open("./foreachExample.txt", "r") as foreachExample:
print (foreachExample.read())
# foreachPartition
from __future__ import print_function
x = sc.parallelize([1,2,3],5)
def f(parition):
'''side effect: append the current RDD partition contents to a file'''
f1=open("./foreachPartitionExample.txt", 'a+')
print([el for el in parition],file=f1)
open('./foreachPartitionExample.txt', 'w').close() # first clear the file contents
y = x.foreachPartition(f) # writes into foreachExample.txt
print(x.glom().collect())
print(y) # foreach returns 'None'
# print the contents of foreachExample.txt
with open("./foreachPartitionExample.txt", "r") as foreachExample:
print (foreachExample.read())
# collect
x = sc.parallelize([1,2,3])
y = x.collect()
print(x) # distributed
print(y) # not distributed
# reduce
x = sc.parallelize([1,2,3])
y = x.reduce(lambda obj, accumulated: obj + accumulated) # computes a cumulative sum
print(x.collect())
print(y)
# fold
x = sc.parallelize([1,2,3])
neutral_zero_value = 0 # 0 for sum, 1 for multiplication
y = x.fold(neutral_zero_value,lambda obj, accumulated: accumulated + obj) # computes cumulative sum
print(x.collect())
print(y)
# aggregate
x = sc.parallelize([2,3,4])
neutral_zero_value = (0,1) # sum: x+0 = x, product: 1*x = x
seqOp = (lambda aggregated, el: (aggregated[0] + el, aggregated[1] * el))
combOp = (lambda aggregated, el: (aggregated[0] + el[0], aggregated[1] * el[1]))
y = x.aggregate(neutral_zero_value,seqOp,combOp) # computes (cumulative sum, cumulative product)
print(x.collect())
print(y)
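# note (added): seqOp folds each element into the per-partition accumulator, while
# combOp merges the per-partition accumulators; the neutral value must be neutral
# for both operations (0 for the sum, 1 for the product).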
# max
x = sc.parallelize([1,3,2])
y = x.max()
print(x.collect())
print(y)
# min
x = sc.parallelize([1,3,2])
y = x.min()
print(x.collect())
print(y)
# sum
x = sc.parallelize([1,3,2])
y = x.sum()
print(x.collect())
print(y)
# count
x = sc.parallelize([1,3,2])
y = x.count()
print(x.collect())
print(y)
# histogram (example #1)
x = sc.parallelize([1,3,1,2,3])
y = x.histogram(buckets = 2)
print(x.collect())
print(y)
# histogram (example #2)
x = sc.parallelize([1,3,1,2,3])
y = x.histogram([0,0.5,1,1.5,2,2.5,3,3.5])
print(x.collect())
print(y)
# mean
x = sc.parallelize([1,3,2])
y = x.mean()
print(x.collect())
print(y)
# variance
x = sc.parallelize([1,3,2])
y = x.variance() # divides by N
print(x.collect())
print(y)
# stdev
x = sc.parallelize([1,3,2])
y = x.stdev() # divides by N
print(x.collect())
print(y)
# sampleStdev
x = sc.parallelize([1,3,2])
y = x.sampleStdev() # divides by N-1
print(x.collect())
print(y)
# sampleVariance
x = sc.parallelize([1,3,2])
y = x.sampleVariance() # divides by N-1
print(x.collect())
print(y)
# countByValue
x = sc.parallelize([1,3,1,2,3])
y = x.countByValue()
print(x.collect())
print(y)
# top
x = sc.parallelize([1,3,1,2,3])
y = x.top(num = 3)
print(x.collect())
print(y)
# takeOrdered
x = sc.parallelize([1,3,1,2,3])
y = x.takeOrdered(num = 3)
print(x.collect())
print(y)
# take
x = sc.parallelize([1,3,1,2,3])
y = x.take(num = 3)
print(x.collect())
print(y)
# first
x = sc.parallelize([1,3,1,2,3])
y = x.first()
print(x.collect())
print(y)
# collectAsMap
x = sc.parallelize([('C',3),('A',1),('B',2)])
y = x.collectAsMap()
print(x.collect())
print(y)
# keys
x = sc.parallelize([('C',3),('A',1),('B',2)])
y = x.keys()
print(x.collect())
print(y.collect())
# values
x = sc.parallelize([('C',3),('A',1),('B',2)])
y = x.values()
print(x.collect())
print(y.collect())
# reduceByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.reduceByKey(lambda agg, obj: agg + obj)
print(x.collect())
print(y.collect())
# reduceByKeyLocally
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.reduceByKeyLocally(lambda agg, obj: agg + obj)
print(x.collect())
print(y)
# countByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.countByKey()
print(x.collect())
print(y)
# join
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.join(y)
print(x.collect())
print(y.collect())
print(z.collect())
# leftOuterJoin
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.leftOuterJoin(y)
print(x.collect())
print(y.collect())
print(z.collect())
# rightOuterJoin
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.rightOuterJoin(y)
print(x.collect())
print(y.collect())
print(z.collect())
# partitionBy
x = sc.parallelize([(0,1),(1,2),(2,3)],2)
y = x.partitionBy(numPartitions = 3, partitionFunc = lambda x: x) # only key is passed to paritionFunc
print(x.glom().collect())
print(y.glom().collect())
# combineByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
createCombiner = (lambda el: [(el,el**2)])
mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)]) # append to aggregated
mergeComb = (lambda agg1,agg2: agg1 + agg2 ) # append agg1 with agg2
y = x.combineByKey(createCombiner,mergeVal,mergeComb)
print(x.collect())
print(y.collect())
# aggregateByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
zeroValue = [] # empty list is 'zero value' for append operation
mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)])
mergeComb = (lambda agg1,agg2: agg1 + agg2 )
y = x.aggregateByKey(zeroValue,mergeVal,mergeComb)
print(x.collect())
print(y.collect())
# foldByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
zeroValue = 1 # one is 'zero value' for multiplication
y = x.foldByKey(zeroValue,lambda agg,x: agg*x ) # computes cumulative product within each key
print(x.collect())
print(y.collect())
# groupByKey
x = sc.parallelize([('B',5),('B',4),('A',3),('A',2),('A',1)])
y = x.groupByKey()
print(x.collect())
print([(j[0],[i for i in j[1]]) for j in y.collect()])
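# note (added): for plain aggregations, reduceByKey/aggregateByKey are usually preferred
# over groupByKey because they combine values on each partition before the shuffle.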
# flatMapValues
x = sc.parallelize([('A',(1,2,3)),('B',(4,5))])
y = x.flatMapValues(lambda x: [i**2 for i in x]) # function is applied to entire value, then result is flattened
print(x.collect())
print(y.collect())
# mapValues
x = sc.parallelize([('A',(1,2,3)),('B',(4,5))])
y = x.mapValues(lambda x: [i**2 for i in x]) # function is applied to entire value
print(x.collect())
print(y.collect())
# groupWith
x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))])
y = sc.parallelize([('B',(7,7)),('A',6),('D',(5,5))])
z = sc.parallelize([('D',9),('B',(8,8))])
a = x.groupWith(y,z)
print(x.collect())
print(y.collect())
print(z.collect())
print("Result:")
for key,val in list(a.collect()):
print(key, [list(i) for i in val])
# cogroup
x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',(5,5))])
z = x.cogroup(y)
print(x.collect())
print(y.collect())
for key,val in list(z.collect()):
print(key, [list(i) for i in val])
# sampleByKey
x = sc.parallelize([('A',1),('B',2),('C',3),('B',4),('A',5)])
y = x.sampleByKey(withReplacement=False, fractions={'A':0.5, 'B':1, 'C':0.2})
print(x.collect())
print(y.collect())
# subtractByKey
x = sc.parallelize([('C',1),('B',2),('A',3),('A',4)])
y = sc.parallelize([('A',5),('D',6),('A',7),('D',8)])
z = x.subtractByKey(y)
print(x.collect())
print(y.collect())
print(z.collect())
# subtract
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('C',8),('A',2),('D',1)])
z = x.subtract(y)
print(x.collect())
print(y.collect())
print(z.collect())
# keyBy
x = sc.parallelize([1,2,3])
y = x.keyBy(lambda x: x**2)
print(x.collect())
print(y.collect())
# repartition
x = sc.parallelize([1,2,3,4,5],2)
y = x.repartition(numPartitions=3)
print(x.glom().collect())
print(y.glom().collect())
# coalesce
x = sc.parallelize([1,2,3,4,5],2)
y = x.coalesce(numPartitions=1)
print(x.glom().collect())
print(y.glom().collect())
# zip
x = sc.parallelize(['B','A','A'])
y = x.map(lambda x: ord(x)) # zip expects x and y to have same #partitions and #elements/partition
z = x.zip(y)
print(x.collect())
print(y.collect())
print(z.collect())
# zipWithIndex
x = sc.parallelize(['B','A','A'],2)
y = x.zipWithIndex()
print(x.glom().collect())
print(y.collect())
# zipWithUniqueId
x = sc.parallelize(['B','A','A'],2)
y = x.zipWithUniqueId()
print(x.glom().collect())
print(y.collect())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a href="http
Step2: <a href="http
Step3: <a href="http
Step4: <a href="http
Step5: <a href="http
Step6: <a href="http
Step7: <a href="http
Step8: <a href="http
Step9: <a href="http
Step10: <a href="http
Step11: <a href="http
Step12: <a href="http
Step13: <a href="http
Step14: <a href="http
Step15: <a href="http
Step16: <a href="http
Step17: <a href="http
Step18: <a href="http
Step19: <a href="http
Step20: <a href="http
Step21: <a href="http
Step22: <a href="http
Step23: <a href="http
Step24: <a href="http
Step25: <a href="http
Step26: <a href="http
Step27: <a href="http
Step28: <a href="http
Step29: <a href="http
Step30: <a href="http
Step31: <a href="http
Step32: <a href="http
Step33: <a href="http
Step34: <a href="http
Step35: <a href="http
Step36: <a href="http
Step37: <a href="http
Step38: <a href="http
Step39: <a href="http
Step40: <a href="http
Step41: <a href="http
Step42: <a href="http
Step43: <a href="http
Step44: <a href="http
Step45: <a href="http
Step46: <a href="http
Step47: <a href="http
Step48: <a href="http
Step49: <a href="http
Step50: <a href="http
Step51: <a href="http
Step52: <a href="http
Step53: <a href="http
Step54: <a href="http
Step55: <a href="http
Step56: <a href="http
Step57: <a href="http
Step58: <a href="http
Step59: <a href="http
Step60: <a href="http
Step61: <a href="http
Step62: <a href="http
Step63: <a href="http
Step64: <a href="http
Step65: <a href="http
|
15,484 | <ASSISTANT_TASK:>
Python Code:
import numpy
#assuming the data file is in the data/ folder
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(data)
weight_kg = 55 #assigns value 55 to weight_kg
print(weight_kg) #we can print to the screen
print("weight in kg", weight_kg)
weight_kg = 70
print("weight in kg", weight_kg)
weight_kg * 2
weight_lb = weight_kg * 2.2
print('weigh in lb:', weight_lb)
print("weight in lb:", weight_kg*2.2)
print(data)
whos
print(data)
print(type(data)) #we can get type of object
print(data.shape)
print('first value in data', data[0,0]) #use index in square brackets
print('4th value in data', data[0,3]) #use index in square brackets
print('first value in 3rd row data', data[3,0]) #use index in square brackets
!head -3 data/inflammation-01.csv
print('middle value in data', data[30,20]) # get the middle value - notice here i didn't use print
data[0:4, 0:10] #select whole sections of matrix, 1st 10 days & 4 patients
data[5:10,0:10]
data[:3, 36:]
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print(element[:4])
print(element[4:])
print(element[:])
#oxygen
print(element[-1])
print(element[-2])
print(element[2:-1])
doubledata = data * 2.0 #we can perform math on array
doubledata
data[:3, 36:]
doubledata[:3, 36:]
tripledata = doubledata + data
print('tripledata:')
print(tripledata[:3, 36:])
print(data.mean())
print('maximum inflammation: ', data.max())
print('minimum inflammation: ', data.min())
print('standard deviation:', data.std())
%matplotlib inline
import matplotlib.pyplot as plt
data
plt.imshow(data)
image = plt.imshow(data)
plt.savefig('timsheatmap.png')
avg_inflam = data.mean(axis=0) #asix zero is by each day
print(data.mean(axis=0))
print(data.mean(axis=0).shape) #Nx1 vector of averages
print(data.mean(axis=1)) #avg inflam per patient across all days
print(data.mean(axis=1).shape)
print(avg_inflam)
day_avg_plot = plt.plot(avg_inflam)
data.mean(axis=0).shape
data.shape
data.mean(axis=1).shape
max_plot = plt.plot(data.max(axis=0))
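# (added) the per-day minimum is worth checking too
min_plot = plt.plot(data.min(axis=0))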
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing a library akin to getting lab equipment out of a locker and setting up on bench
Step2: numpy.loadtex() is a function call, runs loadtxt in numpy
Step3: print above shows several things at once by separating with commas
Step4: whos #ipython command to see what variables & mods you have
Step5: What does the following program print out?
Step6: data refers to N-dimensional array
Step7: data has 60 rows and 40 columns
Step8: programming languages like MATLAB and R start counting at 1
Step9: slice 0
Step10: dont' have to include uper and lower bound
Step11: A section of an array is called a slice. We can take slices of character strings as well
Step12: operation on arrays is done on each individual element of the array
Step13: we can also do arithmetic operation with another array of same shape (same dims)
Step14: we can do more than simple arithmetic
Step15: mean is a method of the array (function)
Step16: however, we are usually more interested in partial stats, e.g. max value per patient or the avg value per day
Step17: let's visualize this data with matplotlib library
Step18: nice, but ipython/jupyter proved us with 'magic' functions and one lets us display our plot inline
Step19: now let's look at avg inflammation over days (columns)
Step20: avg per day across all patients in the var day_avg_plot
|
15,485 | <ASSISTANT_TASK:>
Python Code:
import setup_mysql_database
import numpy as np # numerical libraries
import scipy as sp
import pandas as pd # for data analysis
import pandas.io.sql as sql # for interfacing with MySQL database
from scipy import linalg # linear algebra libraries
from scipy import optimize
from __future__ import division, print_function # good defensive measure
import matplotlib as mpl # a big library with plotting functionality
import matplotlib.pyplot as plt # a subset of matplotlib with most of the useful tools
%matplotlib inline
# extract from MySQL database info on rank points and height for both winner and loser, store in dataframe
with engine.begin() as connection:
    rawdata = pd.read_sql_query("""SELECT winner_rank_points, loser_rank_points, winner_ht, loser_ht FROM matches
                            WHERE tourney_date > '20160101'
                            AND winner_rank_points IS NOT NULL
                            AND loser_rank_points IS NOT NULL
                            AND winner_ht IS NOT NULL
                            AND loser_ht IS NOT NULL""", connection)
# this nx2 array contains the differences in rankings points and the differences in height
X = pd.concat([rawdata.iloc[:,1]-rawdata.iloc[:,0],rawdata.iloc[:,3]-rawdata.iloc[:,2]],axis=1).values
# this nx1 binary array indicates whether the match was a "success" or a "failure", as predicted by ranking differences
y = (X[:,0] > 0)
# for numerical well-behavedness, we need to scale and center the data
X=(X-np.mean(X,axis=0))/np.std(X,axis=0)
# plot the normalized data
fig, ax = plt.subplots(1,1)
ax.plot(X[y,0],X[y,1],"ro")
ax.plot(X[~y,0],X[~y,1],"bo")
ax.set_xlabel('Rank difference')
ax.set_ylabel('Height')
ax.set_title('Higher-rank-wins as a function of rank difference and height')
# change y from True/False binary into 1/0 binary
yv=y*1
# prepend column of 1s to X
Xv=np.insert(X,0,1,axis=1)
def sigmoid(z):
'''
Usage: sigmoid(z)
Description: Computes value of sigmoid function for scalar.
For vector or matrix, computes values of sigmoid function for each entry.
'''
return 1/(1+np.exp(-z));
# define a cost function
def costFunction(theta,X,y,lam):
'''
Computes the cost and gradient for logistic regression.
Input:
theta (3x1 vector of parameters)
X (nx3 matrix of feature values, first column all 1s)
y (nx1 binary vector of outcomes, 1=higher ranked player won, 0 otherwise)
lam (scalar: regularization paramter)
Output:
cost (scalar value of cost)
'''
# number of data points
m = len(y)
# make sure vectors are column vectors
theta = theta.reshape(-1,1)
y = y.reshape(-1,1)
# input to sigmoid function will be a column vector
z = np.dot(X,theta)
# cost function
J = (1/m)*np.sum(np.dot(-y.transpose(),np.log(sigmoid(z))) - \
np.dot((1-y.transpose()),np.log(1-sigmoid(z)))) + \
(lam/(2*m))*np.sum(theta[1:len(theta)+1]**2);
# gradient
regterm = np.insert(theta[1:len(theta)+1],0,0)
grad = (1/m)*np.sum((sigmoid(z) - y)*X,0) + (lam/m)*regterm
return J, grad
# check that cost function works
theta = np.array([1,2,3])
lam = 10
cost, grad = costFunction(theta, Xv, yv,lam)
print("cost:", cost)
print("grad:", grad)
def callbackF(theta):
global NFeval
global Xv
global yv
global lam
cost,grad = costFunction(theta,Xv,yv,lam)
print("%4d %3.6f %3.6f %3.6f %3.6f %3.6f %3.6f %3.6f" % \
(NFeval, theta[0], theta[1], theta[2], cost, grad[0], grad[1], grad[2]))
NFeval+=1
# run optimization
NFeval = 1
initial_theta = np.array([0.,0.,0.])
print("iter t1 t2 t3 cost grad1 grad2 grad3")
res = sp.optimize.minimize(lambda t: costFunction(t,Xv,yv,lam), initial_theta, method='CG',\
jac=True,options={'maxiter':100,'disp':True}, callback=callbackF)
# plot the normalized data with regression line
theta = res.x
fig, ax = plt.subplots(1,1)
ax.plot(X[y,0],X[y,1],"ro")
ax.plot(X[~y,0],X[~y,1],"bo")
xplot = np.array([-1,1])
yplot = (-1/theta[2])*(theta[1]*xplot+theta[0])
ax.plot(xplot,yplot,'g',linewidth=2)
ax.set_xlabel('Rank difference')
ax.set_ylabel('Height')
ax.set_title('Higher-rank-wins as a function of age and height')
ax.set_ylim((-5,5))
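# (added sketch) rough sanity check of the fit: fraction of matches where the logistic
# model predicts the observed outcome (training accuracy, no held-out set here)
train_pred = sigmoid(np.dot(Xv, res.x)) >= 0.5
print("training accuracy:", np.mean(train_pred == yv))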
# we'll use the SVM package in the scikit library
from sklearn import svm
# produce a dense grid of points in rectangle around the data
def make_meshgrid(x, y, h=.02):
    """Create a mesh of points to plot in

    Parameters
    ----------
    x: data to base x-axis meshgrid on
    y: data to base y-axis meshgrid on
    h: stepsize for meshgrid, optional

    Returns
    -------
    xx, yy : ndarray
    """
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
# produce a contour plot with predicted outcomes from SVM classifier
def plot_contours(ax, clf, xx, yy, **params):
    """Plot the decision boundaries for a classifier.

    Parameters
    ----------
    ax: matplotlib axes object
    clf: a classifier
    xx: meshgrid ndarray
    yy: meshgrid ndarray
    params: dictionary of params to pass to contourf, optional
    """
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
# extract from MySQL database info on rank points and height for both winner and loser, store in dataframe
with engine.begin() as connection:
    rawdata = pd.read_sql_query("""SELECT winner_rank_points, loser_rank_points, winner_age, loser_age, winner_ht, loser_ht
                            FROM matches
                            WHERE tourney_date > '20170101'
                            AND winner_rank_points IS NOT NULL
                            AND loser_rank_points IS NOT NULL
                            AND winner_age IS NOT NULL
                            AND loser_age IS NOT NULL
                            AND winner_ht IS NOT NULL
                            AND loser_ht IS NOT NULL""", connection)
# this nx2 array contains the differences in ages and the differences in height
X = pd.concat([rawdata.iloc[:,2]-rawdata.iloc[:,3], \
rawdata.iloc[:,4]-rawdata.iloc[:,5]], axis=1).values
# this nx1 binary array indicates whether the match was a "success" or a "failure", as predicted by ranking differences
y = (rawdata.iloc[:,0]-rawdata.iloc[:,1]).values > 0
# for numerical well-behavedness, we need to scale and center the data
X=(X-np.mean(X,axis=0))/np.std(X,axis=0)
# plot the normalized data
fig, ax = plt.subplots(1,1)
ax.plot(X[y,0],X[y,1],"ro")
ax.plot(X[~y,0],X[~y,1],"bo")
ax.set_xlabel('Age')
ax.set_ylabel('Height')
ax.set_title('Higher-rank-wins as a function of age and height')
# find the SVM classifier
clf = svm.SVC()
clf.fit(X, y)
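# (added) mean training accuracy of the fitted classifier -- only indicative,
# since no held-out test set is used in this demo
print("training accuracy:", clf.score(X, y))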
# generate a dense grid for producing a contour plot
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
# feed the grid into the plot_contours routinge
fig, ax = plt.subplots(1, 1)
plot_contours(ax, clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('Rank points')
ax.set_ylabel('First serve %')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title('SVM classifier for height/age data')
# name of database
db_name = "tennis"
# name of db user
username = "testuser"
# db password for db user
password = "test623"
# location of atp data files
atpfile_directory = "../data/tennis_atp-master/"
# location of odds data files
oddsfiles_directory = "../data/odds_data/"
#%%
#
# PACKAGES
#
import sqlalchemy # pandas-mysql interface library
import sqlalchemy.exc # exception handling
from sqlalchemy import create_engine # needed to define db interface
import glob # for file manipulation
import sys # for defining behavior under errors
#%%
#
# This cell tries to connect to the mysql database "db_name" with the login
# info supplied above. If it succeeds, it prints out the version number of
# mysql, if it fails, it exits gracefully.
#
# create an engine for interacting with the MySQL database
try:
eng_str = 'mysql+mysqldb://' + username + ':' + password + '@localhost/' + db_name
engine = create_engine(eng_str)
connection = engine.connect()
version = connection.execute("SELECT VERSION()")
print("Database version : ")
print(version.fetchone())
# report what went wrong if this fails.
except sqlalchemy.exc.DatabaseError as e:
reason = e.message
print("Error %s:" % (reason))
sys.exit(1)
# close the connection
finally:
if connection:
connection.close()
else:
print("Failed to create connection.")
# extract from MySQL database info odds
with engine.begin() as connection:
    rawdata = pd.read_sql_query("""SELECT PSW, PSL, WRank, LRank FROM odds
                            WHERE PSW IS NOT NULL
                            AND PSL IS NOT NULL
                            AND WRank IS NOT NULL
                            AND LRank IS NOT NULL;""", connection)
investment = len(rawdata)
good_call_idx = (rawdata["LRank"]-rawdata["WRank"]>0)
winner_odds = rawdata["PSW"]
gain = sum(winner_odds*good_call_idx) + sum(good_call_idx==True)
roi = gain - investment
print("total invested: ", investment)
print("return on investment: ", roi)
# scratch check: the same ROI computed step by step, using the index defined above
temp = rawdata.iloc[:, 0] * good_call_idx.values  # winner odds, zeroed where the favourite lost
gain = sum(temp[temp > 0]) + sum(good_call_idx)
loss = investment
net = gain - loss
net
investment
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: II. <a name="logisticregression"> Logistic regression demo
Step3: We'll be using the scipy function optimize.minimize to calculate the classifier. To make this code as easily generalizable as possible, we derive some quantities of interest from the dataframe and store these in a numpy array. The array will be the object that we'll manipulate in most of the following calculations.
Step4: In preparation for calculating the regression classifier, we'll prepend a column of 1s to the matrix X and change y into a vector of 1s and 0s (instead of Trues and Falses.)
Step5: To perform the regression, we'll need to define the sigmoid function and a cost function. The former can take a scalar, vector, or matrix, and return the elementwise value of
Step6: The cost function is designed to take a regularization parameter lambda. For a non-regularized solution, lambda can be set equal to 0. The cost function returns both a cost and the gradient for any given value of parameters $\theta$.
Step7: Small test
Step8: For diagnostic purposes, we define a callback function that will print information about the state and gradient as the optimization algorithm proceeds.
Step9: Finally, we run the optimization.
Step10: To see how it did, we replot the data with the logistic classifier superimposed over the top.
Step11: Comments
Step14: After fitting the SVM classifier, we'll need some helper functions to produce contour plots. These helper functions are borrowed from the scikit-learn documentation, http
Step16: We'll run our test on a slightly different set of data than last time. Here, we'll still classify matches as 1 if the higher ranked player wins and 0 otherwise, but we'll focus on age and height as our predictive features.
Step17: We'll use the scikit svm package to train an SVM classifier on this data. We'll plot the results as a contour graph.
Step19: Note that these features produce a fairly mixed set of points, so they are unlikely to be highly predictive. But it is interesting to note the pocket of blue in the lower left corner
|
15,486 | <ASSISTANT_TASK:>
Python Code:
N = 4

# Longest palindrome length that can be built from the characters of s[l..r] (1-based, inclusive).
def performQueries(l, r, prefix):
    l -= 1
    r -= 1
    flag = False
    count = 0
    for i in range(26):
        cnt = prefix[r][i]
        if l > 0:
            cnt -= prefix[l - 1][i]
        if cnt % 2 == 1:
            flag = True        # one odd-count character can sit in the middle
            count += cnt - 1
        else:
            count += cnt
    if flag:
        count += 1
    return count

# prefix[i][c] = number of occurrences of chr(c + ord('a')) in s[0..i]
def preCalculate(s, prefix):
    n = len(s)
    for i in range(n):
        prefix[i][ord(s[i]) - ord('a')] += 1
    for i in range(1, n):
        for j in range(26):
            prefix[i][j] += prefix[i - 1][j]

s = "amim"
prefix = [[0 for i in range(26)] for i in range(N)]
preCalculate(s, prefix)
queries = [[1, 4], [3, 4]]
q = len(queries)
for i in range(q):
    print(performQueries(queries[i][0], queries[i][1], prefix))
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,487 | <ASSISTANT_TASK:>
Python Code:
def fixpoint(S0, f):
Result = S0.copy() # don't change S0
while True:
NewElements = { x for o in Result
for x in f(o)
}
if NewElements.issubset(Result):
return Result
Result |= NewElements
def fixpoint2(S0, f):
Result = S0.copy() # don't change S0
while True:
NewElements = f(Result)
if NewElements.issubset(Result):
return Result
Result |= NewElements
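# (added) tiny usage sketch, not from the original notebook: the numbers reachable
# from 0 by repeatedly adding 2 modulo 10
print(fixpoint({0}, lambda n: {(n + 2) % 10}))              # f maps one element to a set
print(fixpoint2({0}, lambda S: {(n + 2) % 10 for n in S}))  # f maps the whole set to a set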
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function fixpoint2 takes two arguments
|
15,488 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as ps
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_excel('lab-1-1.xlsx', 'table-1')
data.head(len(data))
u = data.values[:, 2]
print(u.mean())
print(u.std())
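# (added) standard error of the mean, assuming independent measurements
print(u.std() / np.sqrt(len(u)))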
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let $L=3.2$ m be the length of the thread, $g=9.8$ m/s² the magnitude of the free-fall acceleration, and $M=3$ kg the mass of the ballistic pendulum.
Step2: Compute the mean spread of the velocities.
|
15,489 | <ASSISTANT_TASK:>
Python Code:
import bioframe
df = bioframe.read_table(
'https://www.encodeproject.org/files/ENCFF001XKR/@@download/ENCFF001XKR.bed.gz',
schema='bed9'
)
display(df[0:3])
df = bioframe.read_table(
"https://www.encodeproject.org/files/ENCFF401MQL/@@download/ENCFF401MQL.bed.gz",
schema='narrowPeak')
display(df[0:3])
df = bioframe.read_table(
'https://www.encodeproject.org/files/ENCFF001VRS/@@download/ENCFF001VRS.bed.gz',
schema='bed12'
)
display(df[0:3])
bioframe.SCHEMAS['bed6']
bw_url = 'http://genome.ucsc.edu/goldenPath/help/examples/bigWigExample.bw'
df = bioframe.read_bigwig(bw_url, "chr21", start=10_000_000, end=10_010_000)
df.head(5)
df['value'] *= 100
df.head(5)
chromsizes = bioframe.fetch_chromsizes('hg19')
bioframe.to_bigwig(df, chromsizes, 'times100.bw')
# note: requires UCSC bedGraphToBigWig binary, which can be installed as
# !conda install -y -c bioconda ucsc-bedgraphtobigwig
bb_url = 'http://genome.ucsc.edu/goldenPath/help/examples/bigBedExample.bb'
bioframe.read_bigbed(bb_url, "chr21", start=48000000).head()
bioframe.read_chromsizes(
'https://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.chrom.sizes'
)
bioframe.read_chromsizes('https://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.chrom.sizes',
filter_chroms=False)
dm6_url = 'https://hgdownload.soe.ucsc.edu/goldenPath/dm6/database/chromInfo.txt.gz'
bioframe.read_chromsizes(dm6_url,
filter_chroms=True,
chrom_patterns=("^chr2L$", "^chr2R$", "^chr3L$", "^chr3R$", "^chr4$", "^chrX$")
)
bioframe.read_chromsizes(dm6_url, chrom_patterns=["^chr\d+L$", "^chr\d+R$", "^chr4$", "^chrX$", "^chrM$"])
chromsizes = bioframe.fetch_chromsizes('hg38')
chromsizes[-5:]
# # bioframe also has locally stored information for certain assemblies that can be
# # read as follows
# bioframe.get_seqinfo()
# bioframe.get_chromsizes('hg38', unit='primary', type=('chromosome', 'non-nuclear'), )
display(
bioframe.fetch_centromeres('hg38')[:3]
)
client = bioframe.UCSCClient('hg38')
client.fetch_cytoband()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bioframe provides multiple methods to convert data stored in common genomic file formats to pandas dataFrames in bioframe.io.
Step2: The schema argument looks up file type from a registry of schemas stored in the bioframe.SCHEMAS dictionary
Step3: UCSC Big Binary Indexed files (BigWig, BigBed)
Step4: Reading genome assembly information
Step5: Bioframe provides a convenience function to fetch chromosome sizes from UCSC given an assembly name
Step6: Bioframe can also generate a list of centromere positions using information from some UCSC assemblies
Step7: These functions are just wrappers for a UCSC client. Users can also use UCSCClient directly
|
15,490 | <ASSISTANT_TASK:>
Python Code:
%load_ext noworkflow
%now_set_default graph.height=200
%%now_run -e Tracer
def f(x, y=3):
"Calculate x!/(x - y)!"
return x * f(x - 1, y - 1) if y else 1
a = 10
b = a - 2
c = f(b)
print(c)
trial = _
trial.dot
%%now_prolog {trial.id}
var_name({trial.id}, Id, 'b'), slice({trial.id}, Id, Vars)
%%now_prolog
var_name({trial.id}, 2, Name)
%%now_sql
select count(id) from function_activation
where trial_id={trial.id}
trial.duration_text
len(list(trial.activations))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analysis
Step2: Prolog queries
Step3: SQL queries
Step4: ORM
|
15,491 | <ASSISTANT_TASK:>
Python Code:
from textblob import TextBlob
texto = '''In new lawsuits brought against the ride-sharing companies Uber and Lyft, the top prosecutors in Los Angeles
and San Francisco counties make an important point about the lightly regulated sharing economy. The consumers who
participate deserve a very clear picture of the risks they're taking'''
t = TextBlob(texto)
print('Tenemos', len(t.sentences), 'oraciones.\n')
for sentence in t.sentences:
print(sentence)
# print the sentences
for sentence in t.sentences:
print(sentence)
print("--------------")
# and the words
print(t.words)
print(texto.split())
print("el texto de ejemplo contiene", len(t.noun_phrases), "entidades")
for element in t.noun_phrases:
print("-", element)
# playing with lemmas, singulars and plurals
for word in t.words:
if word.endswith("s"):
print(word.lemmatize(), word, word.singularize())
else:
print(word.lemmatize(), word, word.pluralize())
# how can we make the lemmatization smarter?
for element in t.tags:
    # we only lemmatize nouns
if element[1] == "NN":
print(element[0], element[0].lemmatize(), element[0].pluralize() )
elif element[1] == "NNS":
print(element[0], element[0].lemmatize(), element[0].singularize())
    # and verb forms
if element[1].startswith("VB"):
print(element[0], element[0].lemmatize("v"))
# syntactic parsing
print(t.parse())
# from Chinese to English and Spanish
oracion_zh = "中国探月工程 亦稱嫦娥工程,是中国启动的第一个探月工程,于2003年3月1日正式启动"
t_zh = TextBlob(oracion_zh)
print(t_zh.translate(from_lang="zh-CN", to="en"))
print(t_zh.translate(from_lang="zh-CN", to="es"))
print("--------------")
t_es = TextBlob(u"La deuda pública ha marcado nuevos récords en España en el tercer trimestre")
print(t_es.translate(to="el"))
print(t_es.translate(to="ru"))
print(t_es.translate(to="eu"))
print(t_es.translate(to="fi"))
print(t_es.translate(to="fr"))
print(t_es.translate(to="nl"))
print(t_es.translate(to="gl"))
print(t_es.translate(to="ca"))
print(t_es.translate(to="zh"))
print(t_es.translate(to="la"))
# it does not work as well with slang
print("--------------")
t_ita = TextBlob("Sono andato a Milano e mi sono divertito un bordello.")
print(t_ita.translate(to="en"))
print(t_ita.translate(to="es"))
# WordNet
from textblob import Word
from textblob.wordnet import VERB
# how many synsets does "car" have?
word = Word("car")
print(word.synsets)
# get the synsets of the word "hack" as a verb
print(Word("hack").get_synsets(pos=VERB))
# print the list of definitions of "car"
print(Word("car").definitions)
# walk the hypernym hierarchy
for s in word.synsets:
print(s.hypernym_paths())
# sentiment analysis
opinion1 = TextBlob("This new restaurant is great. I had so much fun!! :-P")
print(opinion1.sentiment)
opinion2 = TextBlob("Google News to close in Spain.")
print(opinion2.sentiment)
print(opinion1.sentiment.polarity)
if opinion1.sentiment.subjectivity > 0.5:
print("Hey, esto es una opinion")
# spell checking
b1 = TextBlob("I havv goood speling!")
print(b1.correct())
b2 = TextBlob("Mi naem iz Jonh!")
print(b2.correct())
b3 = TextBlob("Boyz dont cri")
print(b3.correct())
b4 = TextBlob("psychological posesion achivemen comitment")
print(b4.correct())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create our first textblob example through the TextBlob object. Think of these textblobs as a kind of Python text string, already analyzed and enriched with some extra features.
Step2: Processing sentences, words and entities
Step3: The .noun_phrases property gives us access to the list of entities (actually, noun phrases) contained in our textblob. This is how it works.
Step4: Syntactic parsing
Step5: Machine translation
Step6: WordNet
Step7: Sentiment analysis
Step8: Other goodies
|
15,492 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
from torch.nn import Parameter
from torch.nn.functional import mse_loss
from torch.autograd import Variable
from torch.nn.functional import relu
def sample_from_ground_truth(n_samples=100, std=0.1):
x = torch.FloatTensor(n_samples, 1).uniform_(-1, 1)
epsilon = torch.FloatTensor(n_samples, 1).normal_(0, std)
y = 2 * x + epsilon
return x, y
n_samples = 100
std = 3
x, y = sample_from_ground_truth(n_samples=100, std=std)
class SimpleMLP(nn.Module):
def __init__(self, w=None):
super(SimpleMLP, self).__init__()
self.w1 = Parameter(torch.FloatTensor((1,)))
self.w2 = Parameter(torch.FloatTensor((1,)))
if w is None:
self.reset_parameters()
else:
self.set_parameters(w)
def reset_parameters(self):
self.w1.uniform_(-.1, .1)
self.w2.uniform_(-.1, .1)
def set_parameters(self, w):
with torch.no_grad():
self.w1[0] = w[0]
self.w2[0] = w[1]
def forward(self, x):
return self.w1 * relu(self.w2 * x)
from math import fabs
def make_grids(x, y, model_constructor, expected_risk_func, grid_size=100):
n_samples = len(x)
assert len(x) == len(y)
# Grid logic
x_max, y_max, x_min, y_min = 5, 5, -5, -5
w1 = np.linspace(x_min, x_max, grid_size, dtype=np.float32)
w2 = np.linspace(y_min, y_max, grid_size, dtype=np.float32)
W1, W2 = np.meshgrid(w1, w2)
W = np.concatenate((W1[:, :, None], W2[:, :, None]), axis=2)
W = torch.from_numpy(W)
# We will store the results in this tensor
risks = torch.FloatTensor(n_samples, grid_size, grid_size)
expected_risk = torch.FloatTensor(grid_size, grid_size)
with torch.no_grad():
for i in range(grid_size):
for j in range(grid_size):
model = model_constructor(W[i, j])
pred = model(x)
loss = mse_loss(pred, y, reduction="none")
risks[:, i, j] = loss.view(-1)
expected_risk[i, j] = expected_risk_func(W[i, j, 0], W[i, j, 1])
empirical_risk = torch.mean(risks, dim=0)
return W1, W2, risks.numpy(), empirical_risk.numpy(), expected_risk.numpy()
def expected_risk_simple_mlp(w1, w2):
    """Closed-form expected risk of SimpleMLP. Question: can you derive this yourself?"""
return .5 * (8 / 3 - (4 / 3) * w1 * w2 + 1 / 3 * w1 ** 2 * w2 ** 2) + std ** 2
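# Added derivation sketch (an aside, assuming x ~ U(-1, 1), eps ~ N(0, std**2), y = 2*x + eps):
# E[(y - w1*relu(w2*x))**2] = E[(2*x - w1*relu(w2*x))**2] + std**2, and with
# E[4*x**2] = 4/3, E[x*relu(w2*x)] = w2/6 and E[relu(w2*x)**2] = w2**2/6 this equals
# 4/3 - (2/3)*w1*w2 + (1/6)*w1**2*w2**2 + std**2, i.e. the expression returned above.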
W1, W2, risks, empirical_risk, expected_risk = make_grids(
x, y, SimpleMLP, expected_risk_func=expected_risk_simple_mlp)
from torch.optim import SGD
def train(model, x, y, lr=.1, n_epochs=1):
optimizer = SGD(model.parameters(), lr=lr)
iterate_rec = []
grad_rec = []
for epoch in range(n_epochs):
# Iterate over the dataset one sample at a time:
# batch_size=1
for this_x, this_y in zip(x, y):
this_x = this_x[None, :]
this_y = this_y[None, :]
optimizer.zero_grad()
pred = model(this_x)
loss = mse_loss(pred, this_y)
loss.backward()
with torch.no_grad():
iterate_rec.append(
[model.w1.clone()[0], model.w2.clone()[0]]
)
grad_rec.append(
[model.w1.grad.clone()[0], model.w2.grad.clone()[0]]
)
optimizer.step()
return np.array(iterate_rec), np.array(grad_rec)
init = torch.FloatTensor([3, -4])
model = SimpleMLP(init)
iterate_rec, grad_rec = train(model, x, y, lr=.01)
print(iterate_rec[-1])
import matplotlib.colors as colors
class LevelsNormalize(colors.Normalize):
def __init__(self, levels, clip=False):
self.levels = levels
vmin, vmax = levels[0], levels[-1]
colors.Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
quantiles = np.linspace(0, 1, len(self.levels))
return np.ma.masked_array(np.interp(value, self.levels, quantiles))
def plot_map(W1, W2, risks, emp_risk, exp_risk, sample, iter_):
all_risks = np.concatenate((emp_risk.ravel(), exp_risk.ravel()))
x_center, y_center = emp_risk.shape[0] // 2, emp_risk.shape[1] // 2
risk_at_center = exp_risk[x_center, y_center]
low_levels = np.percentile(all_risks[all_risks <= risk_at_center],
q=np.linspace(0, 100, 11))
high_levels = np.percentile(all_risks[all_risks > risk_at_center],
q=np.linspace(10, 100, 10))
levels = np.concatenate((low_levels, high_levels))
norm = LevelsNormalize(levels=levels)
cmap = plt.get_cmap('RdBu_r')
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(12, 4))
risk_levels = levels.copy()
risk_levels[0] = min(risks[sample].min(), risk_levels[0])
risk_levels[-1] = max(risks[sample].max(), risk_levels[-1])
ax1.contourf(W1, W2, risks[sample], levels=risk_levels,
norm=norm, cmap=cmap)
ax1.scatter(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
color='orange')
if any(grad_rec[iter_] != 0):
ax1.arrow(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
-0.1 * grad_rec[iter_, 0], -0.1 * grad_rec[iter_, 1],
head_width=0.3, head_length=0.5, fc='orange', ec='orange')
ax1.set_title('Pointwise risk')
ax2.contourf(W1, W2, emp_risk, levels=levels, norm=norm, cmap=cmap)
ax2.plot(iterate_rec[:iter_ + 1, 0], iterate_rec[:iter_ + 1, 1],
linestyle='-', marker='o', markersize=6,
color='orange', linewidth=2, label='SGD trajectory')
ax2.legend()
ax2.set_title('Empirical risk')
cf = ax3.contourf(W1, W2, exp_risk, levels=levels, norm=norm, cmap=cmap)
ax3.scatter(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
color='orange', label='Current sample')
ax3.set_title('Expected risk (ground truth)')
plt.colorbar(cf, ax=ax3)
ax3.legend()
fig.suptitle('Iter %i, sample % i' % (iter_, sample))
plt.show()
for sample in range(0, 100, 10):
plot_map(W1, W2, risks, empirical_risk, expected_risk, sample, sample)
# %load solutions/linear_mlp.py
# from matplotlib.animation import FuncAnimation
# from IPython.display import HTML
# fig, ax = plt.subplots(figsize=(8, 8))
# all_risks = np.concatenate((empirical_risk.ravel(),
# expected_risk.ravel()))
# x_center, y_center = empirical_risk.shape[0] // 2, empirical_risk.shape[1] // 2
# risk_at_center = expected_risk[x_center, y_center]
# low_levels = np.percentile(all_risks[all_risks <= risk_at_center],
# q=np.linspace(0, 100, 11))
# high_levels = np.percentile(all_risks[all_risks > risk_at_center],
# q=np.linspace(10, 100, 10))
# levels = np.concatenate((low_levels, high_levels))
# norm = LevelsNormalize(levels=levels)
# cmap = plt.get_cmap('RdBu_r')
# ax.set_title('Pointwise risk')
# def animate(i):
# for c in ax.collections:
# c.remove()
# for l in ax.lines:
# l.remove()
# for p in ax.patches:
# p.remove()
# risk_levels = levels.copy()
# risk_levels[0] = min(risks[i].min(), risk_levels[0])
# risk_levels[-1] = max(risks[i].max(), risk_levels[-1])
# ax.contourf(W1, W2, risks[i], levels=risk_levels,
# norm=norm, cmap=cmap)
# ax.plot(iterate_rec[:i + 1, 0], iterate_rec[:i + 1, 1],
# linestyle='-', marker='o', markersize=6,
# color='orange', linewidth=2, label='SGD trajectory')
# return []
# anim = FuncAnimation(fig, animate,# init_func=init,
# frames=100, interval=300, blit=True)
# anim.save("stochastic_landscape_minimal_mlp.mp4")
# plt.close(fig)
# HTML(anim.to_html5_video())
# fig, ax = plt.subplots(figsize=(8, 7))
# cf = ax.contourf(W1, W2, empirical_risk, levels=levels, norm=norm, cmap=cmap)
# ax.plot(iterate_rec[:100 + 1, 0], iterate_rec[:100 + 1, 1],
# linestyle='-', marker='o', markersize=6,
# color='orange', linewidth=2, label='SGD trajectory')
# ax.legend()
# plt.colorbar(cf, ax=ax)
# ax.set_title('Empirical risk')
# fig.savefig('empirical_loss_landscape_minimal_mlp.png')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data is generated from a simple model
Step2: We propose a minimal single hidden layer perceptron model with a single hidden unit and no bias. The model has two tunable parameters $w_1$, and $w_2$, such that
Step4: As in the previous notebook, we define a function to sample from and plot loss landscapes.
Step5: risks[k, i, j] holds loss value $\ell(f(w_1^{(i)} , w_2^{(j)}, x_k), y_k)$ for a single data point $(x_k, y_k)$;
Step6: Let's define our train loop and train our model
Step7: We now plot
Step8: Observe and comment.
Step9: Utilities to generate the slides figures
|
15,493 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
sentence = "Moonlight is by far the best movie of 2016."
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
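# Added convenience sketch (not part of the original exercise): wrap the prediction
# snippet above into a helper so more sentences are easy to try.
def test_sentence(sentence):
    positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
    print('Sentence: {}'.format(sentence))
    print('P(positive) = {:.3f} :'.format(positive_prob),
          'Positive' if positive_prob > 0.5 else 'Negative')

test_sentence("It's amazing anyone could be talented enough to make something this spectacularly awful")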
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Exercise
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Initializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own sentence!
|
15,494 | <ASSISTANT_TASK:>
Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
conn.rollback()
cursor = conn.cursor()
statement = "SELECT movie_title FROM uitem WHERE scifi = 1 AND horror = 1 ORDER BY release_date DESC"
cursor.execute(statement)
for row in cursor:
print(row[0])
cursor = conn.cursor()
statement = "SELECT COUNT(*) FROM uitem WHERE musical = 1 OR childrens = 1"
cursor.execute(statement)
for row in cursor:
print(row[0])
cursor = conn.cursor()
statement = "SELECT DISTINCT(occupation), COUNT(*) FROM uuser GROUP BY occupation HAVING COUNT(*) > 50"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
cursor = conn.cursor()
statement = "SELECT DISTINCT(movie_title) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE EXTRACT(YEAR FROM release_date) < 1992 AND rating = 5 GROUP BY movie_title"
#TA-STEPHAN: Try using this statement
#statement = "SELECT DISTINCT uitem.movie_title, udata.rating FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE documentary = 1 AND udata.rating = 5 AND uitem.release_date < '1992-01-01';"
# if "any" has to be taken in the sense of "every":
# statement = "SELECT movie_title FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE EXTRACT(YEAR FROM release_date) < 1992 GROUP BY movie_title HAVING MIN(rating) = 5"
cursor.execute(statement)
for row in cursor:
print(row[0])
conn.rollback()
cursor = conn.cursor()
statement = "SELECT movie_title, AVG(rating) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE horror = 1 GROUP BY movie_title ORDER BY AVG(rating) LIMIT 10"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
cursor = conn.cursor()
statement = "SELECT movie_title, AVG(rating) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE horror = 1 GROUP BY movie_title HAVING COUNT(rating) > 10 ORDER BY AVG(rating) LIMIT 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
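# Added housekeeping sketch (not part of the graded problems): release the cursor
# and the connection once all queries have run.
cursor.close()
conn.close()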
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS
|
15,495 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot') # Look Pretty
def plotDecisionBoundary(model, X, y):
fig = plt.figure()
ax = fig.add_subplot(111)
padding = 0.6
resolution = 0.0025
colors = ['royalblue','forestgreen','ghostwhite']
    # Calculate the boundaries
x_min, x_max = X[:, 0].min(), X[:, 0].max()
y_min, y_max = X[:, 1].min(), X[:, 1].max()
x_range = x_max - x_min
y_range = y_max - y_min
x_min -= x_range * padding
y_min -= y_range * padding
x_max += x_range * padding
y_max += y_range * padding
# Create a 2D Grid Matrix. The values stored in the matrix
# are the predictions of the class at at said location
xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
np.arange(y_min, y_max, resolution))
# What class does the classifier say?
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour map
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.terrain)
# Plot the test original points as well...
for label in range(len(np.unique(y))):
indices = np.where(y == label)
plt.scatter(X[indices, 0], X[indices, 1], c=colors[label], label=str(label), alpha=0.8)
p = model.get_params()
plt.axis('tight')
plt.title('K = ' + str(p['n_neighbors']))
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
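# Hedged solution sketch (added for illustration only, left commented out so the
# assignment placeholders above remain the cells to fill in; X, y and the final
# names knn, X_train, y_train are assumptions that follow the steps in the prompt):
# from sklearn.model_selection import train_test_split
# from sklearn.preprocessing import Normalizer
# from sklearn.decomposition import PCA
# from sklearn.neighbors import KNeighborsClassifier
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# norm = Normalizer().fit(X_train)
# X_train, X_test = norm.transform(X_train), norm.transform(X_test)
# pca = PCA(n_components=2).fit(X_train)
# X_train, X_test = pca.transform(X_train), pca.transform(X_test)
# knn = KNeighborsClassifier(n_neighbors=9).fit(X_train, y_train)
# print(knn.score(X_test, y_test))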
# I hope your KNeighbors classifier model from earlier was named 'knn'
# If not, adjust the following line:
plotDecisionBoundary(knn, X_train, y_train)
# .. your code here ..
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Convenience Function
Step2: The Assignment
Step3: Copy the wheat_type series slice out of X and into a series called y. Then drop the original wheat_type column from the X dataframe.
Step4: Do a quick, "ordinal" conversion of y. In actuality our classification isn't ordinal, but just as an experiment...
Step5: Do some basic nan munging. Fill each row's nans with the mean of the feature
Step6: Split X into training and testing data sets using train_test_split(). Use 0.33 test size, and use random_state=1. This is important so that your answers are verifiable. In the real world, you wouldn't specify a random_state
Step7: Create an instance of SKLearn's Normalizer class and then train it using its .fit() method against your training data. The reason you only fit against your training data is because in a real-world situation, you'll only have your training data to train with! In this lab setting, you have both train+test data; but in the wild, you'll only have your training data, and then unlabeled data you want to apply your models to.
Step8: With your trained pre-processor, transform both your training AND testing data. Any testing data has to be transformed with your preprocessor that has been fit against your training data, so that it exists in the same feature-space as the original data used to train your models.
Step9: Just like your preprocessing transformation, create a PCA transformation as well. Fit it against your training data, and then project your training and testing features into PCA space using the PCA model's .transform() method. This has to be done because the only way to visualize the decision boundary in 2D would be if your KNN algo ran in 2D as well
Step10: Create and train a KNeighborsClassifier. Start with K=9 neighbors. Be sure to train your classifier against the pre-processed, PCA-transformed training data above! You do not, of course, need to transform your labels.
Step11: Display the accuracy score of your test data/labels, computed by your KNeighbors model. You do NOT have to run .predict before calling .score, since .score will take care of running your predictions for you automatically.
Step12: Bonus
|
15,496 | <ASSISTANT_TASK:>
Python Code:
from orangecontrib.associate.fpgrowth import *
import pandas as pd
from numpy import *
questions = correctedScientific.columns
correctedScientificText = [[] for _ in range(correctedScientific.shape[0])]
for q in questions:
for index in range(correctedScientific.shape[0]):
r = correctedScientific.index[index]
if correctedScientific.loc[r, q]:
correctedScientificText[index].append(q)
#correctedScientificText
len(correctedScientificText)
# Get frequent itemsets with support > 20%
# run time < 1 min
support = 0.20
itemsets = frequent_itemsets(correctedScientificText, math.floor(len(correctedScientificText) * support))
#dict(itemsets)
# Generate rules according to confidence, confidence > 80%
# run time < 5 min
confidence = 0.80
rules = association_rules(dict(itemsets), confidence)
#list(rules)
# Transform rules generator into a Dataframe
rulesDataframe = pd.DataFrame([(ant, cons, supp, conf) for ant, cons, supp, conf in rules])
rulesDataframe.rename(columns = {0:"antecedants", 1:"consequents", 2:"support", 3:"confidence"}, inplace=True)
rulesDataframe.head()
# Save the mined rules to file
rulesDataframe.to_csv("results/associationRulesMiningSupport"+str(support)+"percentsConfidence"+str(confidence)+"percents.csv")
# Sort rules by confidence
confidenceSortedRules = rulesDataframe.sort_values(by = ["confidence", "support"], ascending=[False, False])
confidenceSortedRules.head(50)
# Sort rules by size of consequent set
rulesDataframe["consequentSize"] = rulesDataframe["consequents"].apply(lambda x: len(x))
consequentSortedRules = rulesDataframe.sort_values(by = ["consequentSize", "confidence", "support"], ascending=[False, False, False])
consequentSortedRules.head(50)
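# Added filter sketch: keep only the rules that hold on every supporting transaction
# (confidence == 1.0) and rank them by support.
perfectRules = rulesDataframe[rulesDataframe["confidence"] == 1.0]
perfectRules.sort_values(by="support", ascending=False).head(10)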
# Select only pairs (rules with antecedent and consequent of size one)
# Sort pairs according to confidence
rulesDataframe["fusedRule"] = rulesDataframe[["antecedants", "consequents"]].apply(lambda x: frozenset().union(*x), axis=1)
rulesDataframe["ruleSize"] = rulesDataframe["fusedRule"].apply(lambda x: len(x))
pairRules = rulesDataframe.sort_values(by=["ruleSize", "confidence", "support"], ascending=[True, False, False])
pairRules.head(30)
correctedScientific.columns
# Sort questions by number of apparition in consequents
for q in scientificQuestions:
rulesDataframe[q+"c"] = rulesDataframe["consequents"].apply(lambda x: 1 if q in x else 0)
occurenceInConsequents = rulesDataframe.loc[:,scientificQuestions[0]+"c":scientificQuestions[-1]+"c"].sum(axis=0)
occurenceInConsequents.sort_values(inplace=True, ascending=False)
occurenceInConsequents
# Sort questions by number of apparition in antecedants
for q in scientificQuestions:
rulesDataframe[q+"a"] = rulesDataframe["antecedants"].apply(lambda x: 1 if q in x else 0)
occurenceInAntecedants = rulesDataframe.loc[:,scientificQuestions[0]+"a":scientificQuestions[-1]+"a"].sum(axis=0)
occurenceInAntecedants.sort_values(inplace=True, ascending=False)
occurenceInAntecedants
sortedPrePostProgression = pd.read_csv("../../data/sortedPrePostProgression.csv")
sortedPrePostProgression.index = sortedPrePostProgression.iloc[:,0]
sortedPrePostProgression = sortedPrePostProgression.drop(sortedPrePostProgression.columns[0], axis = 1)
del sortedPrePostProgression.index.name
sortedPrePostProgression.loc['occ_ant',:] = 0
sortedPrePostProgression.loc['occ_csq',:] = 0
sortedPrePostProgression
for questionA, occsA in enumerate(occurenceInAntecedants):
questionVariableName = occurenceInAntecedants.index[questionA][:-1]
question = globals()[questionVariableName]
questionC = questionVariableName + "c"
sortedPrePostProgression.loc['occ_ant',question] = occsA
occsC = occurenceInConsequents.loc[questionC]
sortedPrePostProgression.loc['occ_csq',question] = occsC
#print(questionVariableName+"='"+question+"'")
#print("\t"+questionVariableName+"a="+str(occsA)+","+questionC+"="+str(occsC))
#print()
sortedPrePostProgression.T
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Search for interesting rules
|
15,497 | <ASSISTANT_TASK:>
Python Code:
# To access the Travel Mode Choice data
import statsmodels.datasets
# To perform the dataset conversion
import pylogit as pl
# Access the dataset
mode_data = statsmodels.datasets.modechoice.load_pandas()
# Get a pandas dataframe of the mode choice data
long_df = mode_data["data"]
# Look at the dataframe to ensure that it loaded correctly
long_df.head()
# ind_vars is a list of strings denoting the column
# headings of data that varies across choice situations,
# but not across alternatives. In our data, this is
# the household income and party size.
individual_specific_variables = ["hinc", "psize"]
# alt_specific_vaars is a list of strings denoting the
# column headings of data that vary not only across
# choice situations but also across all alternatives.
# These are columns such as the "level of service"
# variables.
alternative_specific_variables = ["invc", "invt", "gc"]
# subset_specific_vars is a dictionary. Each key is a
# string that denotes a variable that is subset specific.
# Each value is a list of alternative ids, over which the
# variable actually varies. Note that subset specific
# variables vary across choice situations and across some
# (but not all) alternatives. This is most common when
# using variables that are not meaningfully defined for
# all alternatives. An example of this in our dataset is
# terminal time ("ttme"). This variable is not meaningfully
# defined for the "car" alternative. Therefore, it is always
# zero. Note "4" is the id for the "car" alternative
subset_specific_variables = {"ttme": [1, 2, 3]}
# obs_id_col is the column denoting the id of the choice
# situation. If one was using a panel dataset, with multiple
# choice situations per unit of observation, the column
# denoting the unit of observation would be listed in
# ind_vars (i.e. with the individual specific variables)
observation_id_column = "individual"
# alt_id_col is the column denoting the id of the alternative
# corresponding to a given row.
alternative_id_column = "mode"
# choice_col is the column denoting whether the alternative
# on a given row was chosen in the corresponding choice situation
choice_column = "choice"
# Lastly, alt_name_dict is not necessary. However, it is useful.
# It records the names corresponding to each alternative, if there
# are any, and allows for the creation of meaningful column names
# in the wide-format data (such as when creating the columns
# denoting the available alternatives in each choice situation).
# The keys of alt_name_dict are the unique alternative ids, and
# the values are the names of each alternative.
alternative_name_dict = {1: "air",
2: "train",
3: "bus",
4: "car"}
# Finally, we can create the wide format dataframe
wide_df = pl.convert_long_to_wide(long_df,
individual_specific_variables,
alternative_specific_variables,
subset_specific_variables,
observation_id_column,
alternative_id_column,
choice_column,
alternative_name_dict)
# Let's look at the created dataframe, transposed for easy viewing
wide_df.head().T
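# Added sanity check (a sketch, not from the original example): the wide-format
# dataframe should contain exactly one row per choice situation.
assert wide_df.shape[0] == long_df[observation_id_column].nunique()
print(wide_df.shape)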
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the needed dataset
Step2: Create the needed variables for the conversion function.
Step3: Create the wide-format dataframe
|
15,498 | <ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
# import plotting packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
# flip a coin 10 times and count the number of heads; repeat 100 times
n, p = 10, .5
np.random.binomial(n, p, 100)
sum(np.random.binomial(9, 0.1, 20000) == 0) / 20000
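# Added check: the exact answer is P(all nine wells fail) = 0.9**9 ≈ 0.387,
# which the Monte Carlo estimate above should approach.
print(0.9 ** 9)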
lb = 5
s = np.random.poisson(lb, 10000)
count, bins, ignored = plt.hist(s, 14, normed=True)
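# Added comparison sketch (an extra, requires scipy): overlay the exact Poisson pmf
# with the same lambda to check the sampled histogram.
from scipy.stats import poisson
k = np.arange(0, 15)
plt.plot(k, poisson.pmf(k, lb), 'ro-', linewidth=2)
plt.show()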
# take a = -1, b = 0, with 10000 samples
a, b = -1, 0
s = np.random.uniform(a, b, 10000)
# all sample values are >= a
np.all(s >= a)
# all sample values are < b
np.all(s < b)
# plot the sample histogram and the density function
count, bins, ignored = plt.hist(s, 15, normed=True)
plt.plot(bins, np.ones_like(bins) / (b - a), linewidth=2, color='r')
plt.show()
# take theta = 1; plot the sample histogram and the density function
theta = 1
f = lambda x: math.e ** (-x / theta) / theta
s = np.random.exponential(theta, 10000)
count, bins, ignored = plt.hist(s, 100, normed=True)
plt.plot(bins, f(bins), linewidth=2, color='r')
plt.show()
# take mean 0 and standard deviation 0.1
mu, sigma = 0, 0.1
s = np.random.normal(mu, sigma, 1000)
# check the sample mean
abs(mu - np.mean(s)) < 0.01
# check the sample standard deviation
abs(sigma - np.std(s, ddof=1)) < 0.01
# plot the sample histogram and the density function
count, bins, ignored = plt.hist(s, 30, normed=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ), linewidth=2, color='r')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Random Variable
Step2: A real-life example. A drilling company explores nine wells, each estimated to have a 0.1 probability of success; what is the probability that all nine wells come up empty?
Step3: Increasing the number of trials yields a simulated result closer to the exact value.
Step4: 5. Uniform Distribution
Step5: 6. Exponential Distribution
Step6: 7. Normal Distribution
|
15,499 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import ext_datos as ext
import procesar as pro
import time_plot as tplt
dia1 = ext.extraer_data('dia1')
cd ..
dia2 = ext.extraer_data('dia2')
cd ..
dia3 = ext.extraer_data('dia3')
cd ..
dia4 = ext.extraer_data('dia4')
motoresdia1 = pro.procesar(dia1)
motoresdia2 = pro.procesar(dia2)
motoresdia3 = pro.procesar(dia3)
motoresdia4 = pro.procesar(dia4)
motoresdia4.motorRpm_m1[motoresdia4.motorRpm_m1>1].plot(kind='hist', bins=50)
motoresdia1.motorRpm_m2[motoresdia1.motorRpm_m2>1].plot(kind='hist', bins=50)
motoresdia2.motorRpm_m1[motoresdia2.motorRpm_m1>1].plot(kind='hist', bins=50)
motoresdia3.motorRpm_m1[motoresdia3.motorRpm_m1>1].plot(kind='hist', bins=50)
motoresdia4.motorTemp_m1.plot()
motoresdia4[motoresdia4.busCurrent_m1 == 0].busVoltage_m1.plot()
motoresdia4[motoresdia4.motorRpm_m1>1].motorRpm_m1.mean()
motoresdia3[motoresdia3.motorRpm_m1>1].motorRpm_m1.mean()
motoresdia2[motoresdia2.motorRpm_m1>1].motorRpm_m1.mean()
motoresdia1[motoresdia1.motorRpm_m2>1].motorRpm_m2.mean()
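# Added summary sketch (built only from the per-day means computed above):
rpm_means = pd.Series({
    'dia1 (m2)': motoresdia1[motoresdia1.motorRpm_m2 > 1].motorRpm_m2.mean(),
    'dia2 (m1)': motoresdia2[motoresdia2.motorRpm_m1 > 1].motorRpm_m1.mean(),
    'dia3 (m1)': motoresdia3[motoresdia3.motorRpm_m1 > 1].motorRpm_m1.mean(),
    'dia4 (m1)': motoresdia4[motoresdia4.motorRpm_m1 > 1].motorRpm_m1.mean(),
})
print(rpm_means)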
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the libraries created for this work
Step2: Generate the datasets for every day
Step3: Process the previous lists and concatenate them by motor according to
|