Unnamed: 0 (int64, 0–15.9k) | cleaned_code (stringlengths 67–124k, ⌀) | cleaned_prompt (stringlengths 168–30.3k, ⌀)
---|---|---|
15,100 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
import sys
sys.path.insert(0,"../lib/")
from scipy.stats import norm
from climateData import WeatherData
weatherData = WeatherData(years=[1900,2014], optimiseFactor=True)
weatherData.data.head()
data = weatherData.data[ weatherData.data["Year"] >= 1980 ]
data = data[ np.isclose(data["_LastYearsStd"], -9999.0) == False ]
data["avg_diff"] = data["_LastYearsAvg"] - data["_ThisYear"]
data_tmax = data[ data["Element"] == "TMAX" ]
hist_avg, bin_edges_avg = np.histogram( np.abs(np.asarray( data_tmax["avg_diff"] )), bins=100 )
hist_std, bin_edges_std = np.histogram( np.abs(np.asarray( data_tmax["_LastYearsStd"] )), bins=100 )
fig = plt.figure(figsize=(10, 8), dpi=200)
ax = fig.add_subplot(111)
ax.tick_params(axis='both', which='major', labelsize=12)
label_avg = "Distribution of differences of monthly average\ntemperatures compared to past years"
label_std = "Distribution of standard deviations of monthly\naverage temperatures"
ax.bar(bin_edges_avg[:-1], hist_avg, width = 1.1, facecolor="red", alpha=0.9, label=label_avg);
ax.bar(bin_edges_std[:-1], hist_std, width = 1.1, facecolor="blue", alpha=0.6, label=label_std, zorder=5);
plt.legend();
data["avg_diff_fold"] = np.abs(data["avg_diff"]) / data["_LastYearsStd"]
data_tmax = data[ data["Element"] == "TMAX" ]
tmpData = np.abs(np.asarray( data_tmax["avg_diff_fold"] ))
tmpData = tmpData[ tmpData < np.percentile(tmpData, 99.9) ]
hist_avg_fold, bin_edges_avg_fold = np.histogram(tmpData, bins=100, density=True)
# mirror the absolute fold differences around zero so a symmetric normal can be fitted
mu, std = norm.fit(np.concatenate((-tmpData, tmpData), axis=0))
x = np.linspace(0, 5, 100)
p = norm.pdf(x, mu, std)
print("Fitted a normal distribution at %.1f with standard deviation %.2f" %(mu, std))
fig = plt.figure(figsize=(10, 8), dpi=200)
ax = fig.add_subplot(111)
ax.tick_params(axis='both', which='major', labelsize=12)
label_avg_fold = "Distribution of fold differences of monthly average\ntemperatures compared to past years"
ax.bar(bin_edges_avg_fold[:-1], hist_avg_fold, width = 0.04, facecolor="green", edgecolor="green", alpha=0.9, label=label_avg_fold);
ax.plot(x, 2*p, 'k', linewidth=2)
plt.legend();
cutoff = np.percentile(tmpData, 85)
print("The cutoff point is set to %.2f" %cutoff)
bin_years = list()
for i in range(1,len(bin_edges_avg_fold)):
start, end = bin_edges_avg_fold[i-1], bin_edges_avg_fold[i]
tmp = data_tmax[ data_tmax["avg_diff_fold"] > start ]
tmp = tmp[ tmp["avg_diff_fold"] < end ]
bin_years.append(tmp["Year"])
avg_time = [ np.average(item) for item in bin_years ]
avg_time_X = [ i*0.05 for i in range(1,len(avg_time)+1) ] # make the plot go from 0 to 5 and not from 0 to 100
fig = plt.figure(figsize=(8, 6), dpi=200)
ax = fig.add_subplot(111)
ax.tick_params(axis='both', which='major', labelsize=12)
ax.plot(avg_time_X, avg_time, label="Average year of the histogram bin");
ax.axhline(np.average(data_tmax["Year"]), 0, 100, color="red", label="Total average of years");
plt.legend(loc=2, fontsize=16);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we have the weatherData object with its data variable containing four extra columns, i.e. Month, _LastYearsAvg, _LastYearsStd, and _ThisYear. The columns _LastYearsAvg and _LastYearsStd contain, respectively, the average and standard deviation over all previous years for month Month. The column _ThisYear contains the average of the current month of the year.
Step2: First, let's remove the early years that we are not interested in. Note also that there might be months containing -9999. This can happen if they constitute the first months in which the measurement was available (so no previous years exist). In addition, create a new column containing the difference of the measured averages, which makes them easier to compare.
Step3: In order to get a feel for a good classification of extreme climate events, we can look at the distribution of the differences between the average values and at the distribution of the standard deviations. I will take the maximum temperature reading as an example.
Step4: This already gives us an indication that there are events that are extreme in the sense that they deviate by more than 1-2 standard deviations from the average of past years. However, this still does not give us a robust indicator of when to call a climate event extreme.
Step5: For plotting we will remove the few events that deviate so extremely that they would make the plot unreadable.
Step6: Here I will use prior knowledge (I already looked at the plot and went back one step) and assume that the distribution will look like a normal distribution. To visually emphasize this point we can fit the distribution and add the fit to the plot.
Step7: From the plot we can see that there is no obvious cutoff point that we could choose, so we will have to use common sense. I would argue that a good measure would be to declare the 25% highest values as extreme. This will give us a cutoff point of
Step8: What the plot above is not telling us is how the individual bins of the histogram are populated in time. By that I mean that each event in the histogram is linked to the year in which the measurement was taken. We can now ask whether events that deviate far from the all-time averages are more likely to have occurred in the recent past, or whether they are equally distributed.
|
15,101 | <ASSISTANT_TASK:>
Python Code:
from matplotlib.pylab import plt
import numpy as np  # used below (e.g. np.random.choice) before the later numpy import cell
from sklearn.datasets import fetch_mldata  # removed in scikit-learn >= 0.22; newer versions provide fetch_openml instead
mnist = fetch_mldata('MNIST original')
step = 2
train_offset = 0
features = mnist['data'][train_offset:-1:step].copy()/255
target = mnist['target'][train_offset:-1:step].copy()
test_offset = 1
features_test = mnist['data'][test_offset:-1:step].copy()/255
target_test = mnist['target'][test_offset:-1:step].copy()
assert(train_offset != test_offset)
plt.plot(target)
plt.show()
# %load "DrawNN.py"
#Code from https://gist.github.com/craffel/2d727968c3aaebd10359
import matplotlib.pyplot as plt
def draw_neural_net(ax, left, right, bottom, top, layer_sizes, bias=0, draw_edges=False):
'''
Draw a neural network cartoon using matplotilb.
:usage:
>>> fig = plt.figure(figsize=(12, 12))
>>> draw_neural_net(fig.gca(), .1, .9, .1, .9, [4, 7, 2])
:parameters:
- ax : matplotlib.axes.AxesSubplot
The axes on which to plot the cartoon (get e.g. by plt.gca())
- left : float
The center of the leftmost node(s) will be placed here
- right : float
The center of the rightmost node(s) will be placed here
- bottom : float
The center of the bottommost node(s) will be placed here
- top : float
The center of the topmost node(s) will be placed here
- layer_sizes : list of int
List of layer sizes, including input and output dimensionality
- bias : Boolean
Draw an extra bias node at each layer
- draw_edges : Boolean
If false, omit edge connections
'''
n_layers = len(layer_sizes)
v_spacing = (top - bottom)/float(max(layer_sizes)+bias)
h_spacing = (right - left)/float(len(layer_sizes) - 1)
# Nodes
for n, layer_size in enumerate(layer_sizes):
layer_top = v_spacing*(layer_size - 1)/2. + (top + bottom)/2.
bias_node = (bias if n<len(layer_sizes)-1 else 0)
for m in range(layer_size+bias_node ):
node_color = 'w' if m<layer_size else 'b'
circle = plt.Circle((n*h_spacing + left, layer_top - m*v_spacing), v_spacing/8.,
color=node_color, ec='k', zorder=4)
ax.add_artist(circle)
# Edges
if draw_edges:
for n, (layer_size_a, layer_size_b) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
layer_top_a = v_spacing*(layer_size_a - 1)/2. + (top + bottom)/2.
layer_top_b = v_spacing*(layer_size_b - 1)/2. + (top + bottom)/2.
for m in range(layer_size_a+bias):
for o in range(layer_size_b):
line = plt.Line2D([n*h_spacing + left, (n + 1)*h_spacing + left],
[layer_top_a - m*v_spacing, layer_top_b - o*v_spacing],
c='k')
ax.add_artist(line)
import torch
import torch.autograd
from torch.autograd import Variable
# The sizes of layers from input to output
#sizes = [3,3,10,20,3,4]
sizes = [28*28, 100, 10]
# Generate the network
g = torch.tanh
sm = torch.nn.Softmax(dim=0)
identity = lambda x: x
D = len(sizes)
Weight = []
Bias = []
Func = []
for i in range(D-1):
# For layer i, Weights are a S_{i+1} \times S_{i} matrix
W = Variable(torch.randn(sizes[i+1],sizes[i]).double(), requires_grad=True)
# For layer i, Biases are a S_{i+1} \times 1 matrix (a vector)
b = Variable(torch.randn(sizes[i+1],1).double(), requires_grad=True)
Weight.append(W)
Bias.append(b)
Func.append(g)
#Reset the final layer to sigmoid mapping
#Func[-1] = torch.sigmoid
Func[-1] = sm
# Define the exact functional form
# x: A S_0 \times B matrix where each column is a training example with S_0 features
def mlp_fun(x, Weight, Bias, Func):
f = Variable(x, requires_grad=False)
NumOfLayers = len(Weight)
for i in range(NumOfLayers):
#print(f)
f = Func[i](torch.matmul(Weight[i], f) + Bias[i])
#print(Func[i])
#print(f)
return f
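# Quick sanity check (not in the original notebook): a single flattened 28x28 image should
# map to a 10-dimensional softmax output whose entries sum to one.
_probe = mlp_fun(torch.DoubleTensor(features[0].reshape(28*28, 1)), Weight, Bias, Func)
print(_probe.size(), _probe.data.sum())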
%matplotlib inline
sizes_plot = sizes.copy()
sizes_plot[0] = 28
fig = plt.figure(figsize=(8, 8))
ax = fig.gca()
ax.axis('off')
draw_neural_net(ax, .1, .9, .1, .9, sizes_plot, bias=1, draw_edges=False)
#def draw_output(x)
#IDX = [100,200, 400, 500, 600, 650, 700, 701, 702, 900, 1000, 1100, 1200, 1300]
IDX = range(1,len(features_test),315)
for idx in IDX:
#x = features_test[idx]
x = features[idx]
#x = features[idx]
plt.figure(figsize=(3,3))
#x[x>0] = 1
plt.subplot(1,2,1)
plt.imshow(x.reshape(28,28), cmap='gray_r')
f = mlp_fun(torch.DoubleTensor(x.reshape(28*28,1)), Weight, Bias, Func)
plt.subplot(1,2,2)
plt.imshow(f.data, cmap='gray_r')
plt.xticks([])
#print(f)
plt.show()
# Cross Entropy
Error = torch.nn.CrossEntropyLoss(size_average=True, reduce=True)
# Number of examples in the Training set
N = len(target)
eta = 1
MAX_ITER = 50000
BatchSize = min(1000, N)
EE = []
for epoch in range(MAX_ITER):
idx = np.random.choice(N, size=BatchSize, replace=False)
x = features[idx]
#idx = [epoch%N]
f = mlp_fun(torch.DoubleTensor(x.T), Weight, Bias, Func)
y = Variable(torch.LongTensor(target[idx].reshape([len(idx)])), requires_grad=False)
# Measure the error
E = Error(f.transpose(0,1), y)
EE.append(E.data.numpy())
# Compute the derivative of the error with respect to Weights and Biases
E.backward()
# Take the step and reset weights
for i in range(D-1):
Weight[i].data.add_(-eta*Weight[i].grad.data)
Bias[i].data.add_(-eta*Bias[i].grad.data)
Weight[i].grad.zero_()
Bias[i].grad.zero_()
if epoch % 1000 == 0:
print(epoch, E.data[0])
plt.plot(EE)
plt.show()
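# Not part of the original notebook: a rough held-out accuracy check for the trained MLP,
# evaluated in mini-batches to keep memory bounded (the batch size of 1000 is arbitrary).
n_correct = 0
for start in range(0, len(features_test), 1000):
    xb = features_test[start:start + 1000]
    yb = target_test[start:start + 1000]
    fb = mlp_fun(torch.DoubleTensor(xb.T), Weight, Bias, Func)
    n_correct += (fb.data.numpy().argmax(axis=0) == yb).sum()
print("Test accuracy: %.4f" % (n_correct / float(len(target_test))))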
sm = torch.nn.Softmax(dim=0)
inp = Variable(torch.randn(2, 3))
print(inp)
print(sm(inp))
loss_fn = torch.nn.BCELoss()
E = Variable(torch.DoubleTensor([0]))
for idx in [100,101,102]:
x = features[idx]
f = mlp_fun(torch.DoubleTensor(x.reshape(28*28,1)), Weight, Bias, Func)
y = Variable(torch.DoubleTensor(target[idx].reshape([1,1])))
E = E + loss_fn(f, y)
print(E/3)
# Batch Mode
loss_fn = torch.nn.BCELoss(size_average=False)
idx = range(0,100)
x = features[idx]
f = mlp_fun(torch.DoubleTensor(x.T), Weight, Bias, Func)
y = Variable(torch.DoubleTensor(target[idx].reshape([1, len(idx)])))
loss_fn(f, y)
# Number of data points
N = 4
# Number of classes
C = 3
CEloss = torch.nn.CrossEntropyLoss(reduce=True, size_average=False)
#X = np.array([[1,-1,-1],[2,1,0],[-1,3,-2],[1,2,3]])
X = np.random.randn(N, C)
x = torch.DoubleTensor(X)
print(x)
y = torch.LongTensor(N).random_(C)
print(y)
inp = Variable(x, requires_grad=True)
target = Variable(y)
out = CEloss(inp, target)
out
xx = x.numpy()
t = y.numpy()
E = 0
for n,i in enumerate(t):
E += -xx[n,i] + np.log(np.sum(np.exp(xx[n,:])))
E
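# Sanity check (ours, not in the original): the manual sum above should agree with PyTorch's
# CrossEntropyLoss(size_average=False) up to floating-point error.
print(np.allclose(E, out.data.numpy()))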
x.numpy()
x
A = torch.LongTensor(*[10,3])
A.random_(100)
f = mlp_fun(torch.DoubleTensor(x.reshape(28*28,1)), Weight, Bias, Func)
loss_fn(f, Variable(torch.DoubleTensor([[0]])))
loss_fn(Variable(torch.DoubleTensor([0])),Variable(torch.DoubleTensor([0.2])))
import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 100, 10, 2
# Create random Tensors to hold inputs and outputs
x = Variable(torch.randn(N, D_in), requires_grad=False)
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(size_average=True, reduce=True)
# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use Adam; the optim package contains many other
# optimization algorithms. The first argument to the Adam constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
# Forward pass: compute predicted y by passing x to the model.
y_pred = model(x)
# Compute and print loss.
loss = loss_fn(y_pred, y)
print(t, loss.data[0])
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable
# weights of the model). This is because by default, gradients are
# accumulated in buffers( i.e, not overwritten) whenever .backward()
# is called. Checkout docs of torch.autograd.backward for more details.
optimizer.zero_grad()
# Backward pass: compute gradient of the loss with respect to model
# parameters
loss.backward()
# Calling the step function on an Optimizer makes an update to its
# parameters
optimizer.step()
mlp_fun(torch.DoubleTensor(x.reshape(28*28,1)), Weight, Bias, Func)
# Softmax
import numpy as np
x = np.random.randn(2)
#x = np.array([-1, -1, 1])
sm = np.exp(x)
sm = sm/np.sum(sm)
print(x)
print(sm)
plt.imshow(sm.reshape(1,len(sm)), cmap='gray_r', vmin=0, vmax=1)
plt.show()
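# A numerically safer variant (our addition): subtracting the maximum before exponentiating
# avoids overflow for large inputs and gives a mathematically identical result.
sm_stable = np.exp(x - np.max(x))
sm_stable = sm_stable / np.sum(sm_stable)
print(np.allclose(sm, sm_stable))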
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A NN plotter
Step2: Generate a Network
Step3: Visualize the network
Step4: Visualize the Output for a few examples
Step5: Training
Step6: Binary Cross Entropy Loss
Step7: Cross Entropy Loss
Step8: $$
|
15,102 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-aerchem', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
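# Illustrative placeholders only (not real document metadata), showing the expected call format:
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Smith", "john.smith@example.org")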
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
15,103 | <ASSISTANT_TASK:>
Python Code:
# some nlp tools
import spacy
import gensim
import pyLDAvis.gensim as gensimvis
import pyLDAvis
# tools to get data from s3
import boto3
from io import StringIO
import json
import requests
# would probably need these
import numpy as np
import pandas as pd
import itertools
nlp = spacy.load('en')
boto_session = boto3.session.Session(profile_name = 'wwymakAdmin')
s3 = boto_session.resource('s3')
def to_StringIO(key, bucket_name, s3 = s3):
obj = s3.Object(bucket_name, key)
    return StringIO(obj.get()['Body'].read().decode('utf-8'))  # Body.read() returns bytes; decode before wrapping in StringIO
bkt = 'discursive'
# you can change default here to be whatever profile works on your server for listing the dicursive bucket
s = boto3.session.Session(profile_name='wwymakAdmin')
s3 = s.resource('s3')
bucket = s3.Bucket(bkt)
# file format is YYYY/M/DD/HH/filename.json
prefix = '2017/1/16/'
# the base bucket name for the discursive project
S3_BASE_URL = 'https://s3-us-west-2.amazonaws.com/discursive/'
def parse_tweets(tweets_arr, tweets_dictionary, tweets_corpus, all_tweet_tokens, tweets_dictionary_filepath, tweets_corpus_filepath):
#remove the retweets
tweets_text_documents = [x['text'] for x in tweets_arr if not str.startswith(str.lower(x['text']), 'rt')]
tokenized_tweets = []
# Process tweets using Spacy NLP pipeline.
for doc in nlp.pipe(tweets_text_documents, n_threads=4, batch_size=100):
ents = doc.ents # Named entities.
# Keep only words (no numbers, no punctuation).
# Lemmatize tokens, remove punctuation and remove stopwords.
doc = [token.lemma_ for token in doc if token.is_alpha and not token.is_stop]
# Remove common words from a stopword list.
#doc = [token for token in doc if token not in STOPWORDS]
# Add named entities, but only if they are a compound of more than word.
#doc.extend([str(entity) for entity in ents if len(entity) > 1])
tokenized_tweets.append(doc)
dictionary = gensim.corpora.Dictionary(tokenized_tweets)
corpus = [dictionary.doc2bow(x) for x in tokenized_tweets]
# print(len(corpus))
all_tweet_tokens.extend(tokenized_tweets)
    tweets_dictionary.merge_with(dictionary)  # merge_with updates the dictionary in place; its VocabTransform return value is not the dictionary
#save current dict
tweets_dictionary.save(tweets_dictionary_filepath)
tweets_corpus.extend(corpus)
# save current corpus
gensim.corpora.MmCorpus.serialize(tweets_corpus_filepath, tweets_corpus)
#initialise empty dictionary
tweets_dictionary = gensim.corpora.Dictionary([])
tweets_corpus = []
all_tweets_tokens = []
for obj in bucket.objects.filter(Prefix=prefix):
filename = obj.key
r = requests.get(S3_BASE_URL + filename)
#make sure only to execut if response is successful
if(r.status_code == 200):
parse_tweets(r.json(), tweets_dictionary,tweets_corpus, all_tweets_tokens, 'tweets_dictionary.dict', 'tweets_corpus.mm')
print(filename)
current_corpus = gensim.corpora.MmCorpus('tweets_corpus.mm')
current_dictionary = gensim.corpora.Dictionary.load('tweets_dictionary.dict')
len(tweets_corpus)
#train gensim lda model --no optimsation at the mo. Just experimenting
lda_model = gensim.models.LdaModel(tweets_corpus, id2word=tweets_dictionary, num_topics=30)
# save model to file
lda_model.save('twitter_lda.model')
#print some topics to see if they make any sense...
lda_model.print_topics(10, 5)
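# Not part of the original notebook: a minimal sketch of scoring an unseen tweet with the
# trained model. The example text is made up; preprocessing should mirror parse_tweets above.
example_text = u"Protesters gather downtown ahead of the inauguration"
example_tokens = [token.lemma_ for token in nlp(example_text) if token.is_alpha and not token.is_stop]
example_bow = tweets_dictionary.doc2bow(example_tokens)
print(lda_model[example_bow]) # list of (topic_id, probability) pairs for the new document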
#display the model with pyLDAvis for exploration
twitter_data_for_vis = gensimvis.prepare(lda_model, tweets_corpus, tweets_dictionary)
pyLDAvis.display(twitter_data_for_vis)
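# Optional (our addition): persist the interactive visualisation to a standalone HTML file.
pyLDAvis.save_html(twitter_data_for_vis, 'twitter_lda_vis.html')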
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h3> Connecting to S3 </h3>
Step2: <h4>Parsing tweets</h4>
Step3: If you already have a corpus and dictionary you can just load them.
|
15,104 | <ASSISTANT_TASK:>
Python Code:
!wget -N https://s3.amazonaws.com/lasagne/recipes/datasets/mnist_cluttered_60x60_6distortions.npz
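# The import/constant cell is not included in this excerpt; the definitions below are
# assumed from the original Lasagne spatial-transformer recipe and should be treated
# as placeholders rather than the values actually used.
import matplotlib.pyplot as plt
import numpy as np
import theano
import theano.tensor as T
import lasagne
conv = lasagne.layers.Conv2DLayer
pool = lasagne.layers.MaxPool2DLayer
NUM_EPOCHS = 500
BATCH_SIZE = 256
LEARNING_RATE = 0.001
DIM = 60
NUM_CLASSES = 10
mnist_cluttered = "mnist_cluttered_60x60_6distortions.npz"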
def load_data():
data = np.load(mnist_cluttered)
X_train, y_train = data['x_train'], np.argmax(data['y_train'], axis=-1)
X_valid, y_valid = data['x_valid'], np.argmax(data['y_valid'], axis=-1)
X_test, y_test = data['x_test'], np.argmax(data['y_test'], axis=-1)
# reshape for convolutions
X_train = X_train.reshape((X_train.shape[0], 1, DIM, DIM))
X_valid = X_valid.reshape((X_valid.shape[0], 1, DIM, DIM))
X_test = X_test.reshape((X_test.shape[0], 1, DIM, DIM))
print "Train samples:", X_train.shape
print "Validation samples:", X_valid.shape
print "Test samples:", X_test.shape
return dict(
X_train=lasagne.utils.floatX(X_train),
y_train=y_train.astype('int32'),
X_valid=lasagne.utils.floatX(X_valid),
y_valid=y_valid.astype('int32'),
X_test=lasagne.utils.floatX(X_test),
y_test=y_test.astype('int32'),
num_examples_train=X_train.shape[0],
num_examples_valid=X_valid.shape[0],
num_examples_test=X_test.shape[0],
input_height=X_train.shape[2],
input_width=X_train.shape[3],
output_dim=10,)
data = load_data()
plt.figure(figsize=(7,7))
plt.imshow(data['X_train'][101].reshape(DIM, DIM), cmap='gray', interpolation='none')
plt.title('Cluttered MNIST', fontsize=20)
plt.axis('off')
plt.show()
def build_model(input_width, input_height, output_dim,
batch_size=BATCH_SIZE):
ini = lasagne.init.HeUniform()
l_in = lasagne.layers.InputLayer(shape=(None, 1, input_width, input_height),)
# Localization network
b = np.zeros((2, 3), dtype=theano.config.floatX)
b[0, 0] = 1
b[1, 1] = 1
b = b.flatten()
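# Bias initialises the localisation output to the identity affine transform (its W starts at zero)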
loc_l1 = pool(l_in, pool_size=(2, 2))
loc_l2 = conv(
loc_l1, num_filters=20, filter_size=(5, 5), W=ini)
loc_l3 = pool(loc_l2, pool_size=(2, 2))
loc_l4 = conv(loc_l3, num_filters=20, filter_size=(5, 5), W=ini)
loc_l5 = lasagne.layers.DenseLayer(
loc_l4, num_units=50, W=lasagne.init.HeUniform('relu'))
loc_out = lasagne.layers.DenseLayer(
loc_l5, num_units=6, b=b, W=lasagne.init.Constant(0.0),
nonlinearity=lasagne.nonlinearities.identity)
# Transformer network
l_trans1 = lasagne.layers.TransformerLayer(l_in, loc_out, downsample_factor=3.0)
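# With downsample_factor=3.0 the 60x60 input is resampled to a 20x20 transformed output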
print "Transformer network output shape: ", l_trans1.output_shape
# Classification network
class_l1 = conv(
l_trans1,
num_filters=32,
filter_size=(3, 3),
nonlinearity=lasagne.nonlinearities.rectify,
W=ini,
)
class_l2 = pool(class_l1, pool_size=(2, 2))
class_l3 = conv(
class_l2,
num_filters=32,
filter_size=(3, 3),
nonlinearity=lasagne.nonlinearities.rectify,
W=ini,
)
class_l4 = pool(class_l3, pool_size=(2, 2))
class_l5 = lasagne.layers.DenseLayer(
class_l4,
num_units=256,
nonlinearity=lasagne.nonlinearities.rectify,
W=ini,
)
l_out = lasagne.layers.DenseLayer(
class_l5,
num_units=output_dim,
nonlinearity=lasagne.nonlinearities.softmax,
W=ini,
)
return l_out, l_trans1
model, l_transform = build_model(DIM, DIM, NUM_CLASSES)
model_params = lasagne.layers.get_all_params(model, trainable=True)
X = T.tensor4()
y = T.ivector()
# training output
output_train = lasagne.layers.get_output(model, X, deterministic=False)
# evaluation output. Also includes output of transform for plotting
output_eval, transform_eval = lasagne.layers.get_output([model, l_transform], X, deterministic=True)
sh_lr = theano.shared(lasagne.utils.floatX(LEARNING_RATE))
cost = T.mean(T.nnet.categorical_crossentropy(output_train, y))
updates = lasagne.updates.adam(cost, model_params, learning_rate=sh_lr)
train = theano.function([X, y], [cost, output_train], updates=updates)
eval = theano.function([X], [output_eval, transform_eval])
def train_epoch(X, y):
num_samples = X.shape[0]
num_batches = int(np.ceil(num_samples / float(BATCH_SIZE)))
costs = []
correct = 0
for i in range(num_batches):
idx = range(i*BATCH_SIZE, np.minimum((i+1)*BATCH_SIZE, num_samples))
X_batch = X[idx]
y_batch = y[idx]
cost_batch, output_train = train(X_batch, y_batch)
costs += [cost_batch]
preds = np.argmax(output_train, axis=-1)
correct += np.sum(y_batch == preds)
return np.mean(costs), correct / float(num_samples)
def eval_epoch(X, y):
output_eval, transform_eval = eval(X)
preds = np.argmax(output_eval, axis=-1)
acc = np.mean(preds == y)
return acc, transform_eval
valid_accs, train_accs, test_accs = [], [], []
try:
for n in range(NUM_EPOCHS):
train_cost, train_acc = train_epoch(data['X_train'], data['y_train'])
valid_acc, valid_trainsform = eval_epoch(data['X_valid'], data['y_valid'])
test_acc, test_transform = eval_epoch(data['X_test'], data['y_test'])
valid_accs += [valid_acc]
test_accs += [test_acc]
train_accs += [train_acc]
if (n+1) % 20 == 0:
new_lr = sh_lr.get_value() * 0.7
print "New LR:", new_lr
sh_lr.set_value(lasagne.utils.floatX(new_lr))
print "Epoch {0}: Train cost {1}, Train acc {2}, val acc {3}, test acc {4}".format(
n, train_cost, train_acc, valid_acc, test_acc)
except KeyboardInterrupt:
pass
plt.figure(figsize=(9,9))
plt.plot(1-np.array(train_accs), label='Training Error')
plt.plot(1-np.array(valid_accs), label='Validation Error')
plt.legend(fontsize=20)
plt.xlabel('Epoch', fontsize=20)
plt.ylabel('Error', fontsize=20)
plt.show()
plt.figure(figsize=(7,14))
for i in range(3):
plt.subplot(321+i*2)
plt.imshow(data['X_test'][i].reshape(DIM, DIM), cmap='gray', interpolation='none')
if i == 0:
plt.title('Original 60x60', fontsize=20)
plt.axis('off')
plt.subplot(322+i*2)
plt.imshow(test_transform[i].reshape(DIM//3, DIM//3), cmap='gray', interpolation='none')
if i == 0:
plt.title('Transformed 20x20', fontsize=20)
plt.axis('off')
plt.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building the model
Step2: Training
Step3: Plot results
|
15,105 | <ASSISTANT_TASK:>
Python Code:
is_stylegan_v1 = False
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
from datetime import datetime
from tqdm import tqdm
# ffmpeg installation location, for creating videos
plt.rcParams['animation.ffmpeg_path'] = str('/usr/bin/ffmpeg')
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
from IPython.display import display
from ipywidgets import Button
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
%load_ext autoreload
%autoreload 2
# StyleGAN2 Repo
sys.path.append('/tf/notebooks/stylegan2')
# StyleGAN Utils
from stylegan_utils import load_network, gen_image_fun, synth_image_fun, create_video
# v1 override
if is_stylegan_v1:
from stylegan_utils import load_network_v1 as load_network
from stylegan_utils import gen_image_fun_v1 as gen_image_fun
from stylegan_utils import synth_image_fun_v1 as synth_image_fun
import run_projector
import projector
import training.dataset
import training.misc
# Data Science Utils
sys.path.append(os.path.join(*[os.pardir]*3, 'data-science-learning'))
from ds_utils import generative_utils
res_dir = Path.home() / 'Documents/generated_data/stylegan'
MODELS_DIR = Path.home() / 'Documents/models/stylegan2'
MODEL_NAME = 'original_ffhq'
SNAPSHOT_NAME = 'stylegan2-ffhq-config-f'
Gs, Gs_kwargs, noise_vars = load_network(str(MODELS_DIR / MODEL_NAME / SNAPSHOT_NAME) + '.pkl')
Z_SIZE = Gs.input_shape[1]
IMG_SIZE = Gs.output_shape[2:]
IMG_SIZE
img = gen_image_fun(Gs, np.random.randn(1, Z_SIZE), Gs_kwargs, noise_vars)
plt.imshow(img)
def plot_direction_grid(dlatent, direction, coeffs):
fig, ax = plt.subplots(1, len(coeffs), figsize=(15, 10), dpi=100)
for i, coeff in enumerate(coeffs):
new_latent = (dlatent.copy() + coeff*direction)
ax[i].imshow(synth_image_fun(Gs, new_latent, Gs_kwargs, randomize_noise=False))
ax[i].set_title(f'Coeff: {coeff:0.1f}')
ax[i].axis('off')
plt.show()
# load learned direction
direction = np.load('/tf/media/datasets/stylegan/learned_directions.npy')
nb_latents = 5
# generate dlatents from mapping network
dlatents = Gs.components.mapping.run(np.random.randn(nb_latents, Z_SIZE), None, truncation_psi=1.)
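# Sweep coefficients along the learned direction for each sampled dlatent and plot the result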
for i in range(nb_latents):
plot_direction_grid(dlatents[i:i+1], direction, np.linspace(-2, 2, 5))
# Setup plot image
dpi = 100
fig, ax = plt.subplots(dpi=dpi, figsize=(7, 7))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0, wspace=0)
plt.axis('off')
im = ax.imshow(gen_image_fun(Gs, np.random.randn(1, Z_SIZE),Gs_kwargs, noise_vars, truncation_psi=1))
#prevent any output for this cell
plt.close()
# fetch attributes names
directions_dir = MODELS_DIR / MODEL_NAME / 'directions' / 'set01'
attributes = [e.stem for e in directions_dir.glob('*.npy')]
# get names or projected images
data_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / ''
entries = [p.name for p in data_dir.glob("*") if p.is_dir()]
entries.remove('tfrecords')
# set target latent to play with
#dlatents = Gs.components.mapping.run(np.random.randn(1, Z_SIZE), None, truncation_psi=0.5)
#target_latent = dlatents[0:1]
#target_latent = np.array([np.load("/out_4/image_latents2000.npy")])
%matplotlib inline
@interact
def i_direction(attribute=attributes,
entry=entries,
coeff=(-10., 10.)):
direction = np.load(directions_dir / f'{attribute}.npy')
target_latent = np.array([np.load(data_dir / entry / "image_latents1000.npy")])
new_latent_vector = target_latent.copy() + coeff*direction
im.set_data(synth_image_fun(Gs, new_latent_vector, Gs_kwargs, True))
ax.set_title('Coeff: %0.1f' % coeff)
display(fig)
dest_dir = Path("C:/tmp/tmp_mona")
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
fig.savefig(dest_dir / (timestamp + '.png'), bbox_inches='tight')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Network
Step2: Explore Directions
Step3: Interactive
|
15,106 | <ASSISTANT_TASK:>
Python Code:
from keras.applications.resnet50 import ResNet50
# https://keras.io/applications/#classify-imagenet-classes-with-resnet50
model = ResNet50(weights='imagenet')
from clipper_admin import ClipperConnection, DockerContainerManager
clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.start_clipper(cache_size=1) # Disable PredictionCache
!docker ps -a
import io
import numpy as np
from PIL import Image
from keras.preprocessing.image import img_to_array
from keras.applications.resnet50 import preprocess_input, decode_predictions
def predict(model, inputs):
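# Decode each input's raw image bytes, resize to 224x224, preprocess for ResNet50, and return top-3 predictions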
def _predict_one(one_input_arr):
try:
image = Image.open(io.BytesIO(one_input_arr))
if image.mode != "RGB":
image = image.convert("RGB")
image = image.resize((224, 224))
image = img_to_array(image)
image = np.expand_dims(image, axis=0)
image = preprocess_input(image)
return decode_predictions(preds=model.predict(image), top=3)[0]
except Exception as e:
print(e)
return []
return [_predict_one(i) for i in inputs]
import clipper_admin.deployers.keras as keras_deployer
app_name = 'keras-test-app'
model_name = 'keras-test-model'
keras_deployer.deploy_keras_model(clipper_conn=clipper_conn,
name=model_name,
version='1',
input_type='bytes',
func=predict,
model_path_or_object=model,
num_replicas=1,
batch_size=1, # Disable adaptive batching policy
pkgs_to_install=['pillow'])
clipper_conn.register_application(name=app_name,
input_type="bytes",
default_output="-1.0",
slo_micros=10000000) # 10s
clipper_conn.link_model_to_app(app_name=app_name,
model_name=model_name)
import time
time.sleep(30)
!wget https://harishnarayanan.org/images/writing/artistic-style-transfer/output_13_0.png -O elephant.jpg
!wget http://kikei.github.io/images/plots/2018-08-05-rabbit2.jpg -O rabbit.jpg
from IPython.display import display
from PIL import Image
display(Image.open('elephant.jpg'))
display(Image.open('rabbit.jpg'))
import json
import base64
import requests
from datetime import datetime
from keras.preprocessing import image
headers = {'Content-type': 'application/json'}
url = "http://{addr}/{app_name}/predict".format(
addr=clipper_conn.get_query_addr(),
app_name=app_name)
start = datetime.now()
req_json = json.dumps({ "input": base64.b64encode(open('elephant.jpg', "rb").read()).decode() })
r = requests.post(url, headers=headers, data=req_json)
end = datetime.now()
latency = (end - start).total_seconds() * 1000.0
print("'%s', %f ms" % (r.text, latency))
start = datetime.now()
req_json = json.dumps({ "input": base64.b64encode(open('rabbit.jpg', "rb").read()).decode() })
r = requests.post(url, headers=headers, data=req_json)
end = datetime.now()
latency = (end - start).total_seconds() * 1000.0
print("'%s', %f ms" % (r.text, latency))
clipper_conn.stop_all()
!docker rm -f $(docker ps -a -q) && docker image prune -f
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize the Clipper cluster
Step2: Define 'predict' function
Step3: Deploy Keras model and 'predict' function to the Clipper cluster
Step4: Download sample images
Step5: Send some requests to the Clipper cluster
Step6: First request, which is so slow due to downloading a file from https
Step7: Second request, which is moderate!
Step8: Clean-up
|
15,107 | <ASSISTANT_TASK:>
Python Code:
import plotly.graph_objects as go
import numpy as np
# Plotly version of the HSV colorscale, corresponding to S=1, V=1, where S is saturation and V is the value.
pl_hsv = [[0.0, 'rgb(0, 255, 255)'],
[0.0833, 'rgb(0, 127, 255)'],
[0.1667, 'rgb(0, 0, 255)'],
[0.25, 'rgb(127, 0, 255)'],
[0.3333, 'rgb(255, 0, 255)'],
[0.4167, 'rgb(255, 0, 127)'],
[0.5, 'rgb(255, 0, 0)'],
[0.5833, 'rgb(255, 127, 0)'],
[0.6667, 'rgb(254, 255, 0)'],
[0.75, 'rgb(127, 255, 0)'],
[0.8333, 'rgb(0, 255, 0)'],
[0.9167, 'rgb(0, 255, 127)'],
[1.0, 'rgb(0, 255, 255)']]
def evaluate_function(func, re=(-1,1), im=(-1,1), N=100):
# func is the complex function to be ploted
# re, im are the interval ends on the real and imaginary axes, defining the rectangular region in the complex plane
# N gives the number of points in an interval of length 1
l = re[1]-re[0]
h = im[1]-im[0]
resL = int(N*l) #horizontal resolution
resH = int(N*h) #vertical resolution
X = np.linspace(re[0], re[1], resL)
Y = np.linspace(im[0], im[1], resH)
x, y = np.meshgrid(X,Y)
z = x+1j*y
return X, Y, z
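# Colorbar ticks at multiples of pi/3 across the argument range [-pi, pi]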
tickvals = [-np.pi, -2*np.pi/3, -np.pi/3, 0, np.pi/3, 2*np.pi/3, np.pi]
ticktext=['-\u03c0', '-2\u03c0/3', '-\u03c0/3', '0', '\u03c0/3', '2\u03c0/3', '\u03c0']
from functools import partial
def func(t, z):
return z**4+np.exp(2*t*1j)/z**2*np.exp(1j*t)
f0 = partial(func, 0)
x, y, z = evaluate_function(f0, re=(-1.5, 1.5), im=(-1.5,1.5), N=50)
w = f0(z)
argument = np.angle(w)
fig = go.Figure(go.Heatmap(x=x, y=y, z=argument, colorscale=pl_hsv,
colorbar=dict(thickness=20, tickvals=tickvals,
ticktext=ticktext,
title='arg(f(z))')))
frames = []
t = np.linspace(0, 3, 45) #6, 85
for s in t:
g = partial(func, s)
w = g(z)
argument = np.angle(w)
frames.append(go.Frame(data=[go.Heatmap(z=argument)]))
fig.update(frames=frames);
fig.update_layout(width=500, height=475,
updatemenus=[dict(type='buttons',
y=1,
x=1.45,
active=0,
buttons=[dict(label='Play',
method='animate',
args=[None,
dict(frame=dict(duration=10,
redraw=True),
transition=dict(duration=0),
fromcurrent=True,
mode='immediate')])])]);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For a particular parameter, the corresponding function of the given family is represented as a heatmap of its argument, which illustrates
Step2: Define a family of complex functions, depending on the parameter t. To animate the motion of function zeros and poles, as t varies,
|
15,108 | <ASSISTANT_TASK:>
Python Code:
greek = ["Alpha", "Beta", "Gamma", "Delta"]
for element in greek:
print(element)
for i, e in enumerate(greek):
print(e + " is at index: " + str(i))
list_of_tuples = [(1,2,3),(4,5,6), (7,8,9)]
for (a, b, c) in list_of_tuples:
print(a + b + c)
for e1, e2 in zip(greek, greek):
print("Double Greek: " + e1 + e2)
for i in (1,2,3):
print(i)
elements = {
"H": "Hydrogen",
"He": "Helium",
"Li": "Lithium",
}
for key in elements: # Over the keys
print(key)
for key, val in elements.items(): # Over the keys and values
print(key + ": " + val)
for val in elements.values():
print(val)
x = 5
if x % 2 == 0: # Checks if x is even
print(str(x) + " is even!!!")
if x % 2 == 1: # Checks if x is odd
print(str(x) + " is odd!!!")
if x % 2 == 0: # Checks if x is even
print(str(x) + " is even!!!")
else: # x is odd
print(str(x) + " is odd!!!")
if x % 4 == 0:
print("x divisible by 4")
elif x % 3 == 0 and x % 5 == 0:
print("x divisible by 3 and 5")
elif x % 1 == 0:
print("x divisible by 1")
else:
print("I give up")
r1 = []
for i in range(10):
r1.append(i**2)
# Equivalent to the loop above.
r2 = [i**2 for i in range(10)]
print(r1)
print(r2)
r1 = []
for i in range(30):
if (i%2 == 0 and i%3 == 0):
r1.append(i)
# Equivalent to the loop above.
r2 = [i for i in range(30) if i%2==0 and i%3==0]
print(r1)
print(r2)
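# A comprehension can also transform each element with an inline if/else expression,
# complementing the filtering examples above.
r3 = ["even" if i % 2 == 0 else "odd" for i in range(5)]
print(r3)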
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sometimes you need the index of an element and the element itself while you iterate through a list. This can be achived with the enumerate function.
Step2: The above is taking advantage of tuple decomposition, because enumerate generates a list of tuples. Let's look at this more explicitly.
Step3: If you have two or more lists you want to iterate over together you can use the zip function.
Step4: You can also iterate over tuples
Step5: and dictionaries in various ways
Step6: Conditionals
Step7: In the above example the second condition is somewhat redundant because if x is not even it is odd. There's a better way of expressing this.
Step8: You can have more than two conditions.
Step9: Note how only the first conditional branch that matches gets executed. The else conditions is a catch all condition that would have executed if none of the other branches had been chosen.
|
15,109 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
seq_len = 200
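# Left-pad shorter reviews with zeros and truncate longer ones to seq_len steps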
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
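# Only the RNN output at the final time step feeds the prediction layer below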
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step5: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
Step8: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step10: Embedding
Step11: LSTM cell
Step12: RNN forward pass
Step13: Output
Step14: Validation accuracy
Step15: Batching
Step16: Training
Step17: Testing
|
15,110 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from quantecon.models import LakeModel
alpha = 0.012
lamb = 0.2486
b = 0.001808
d = 0.0008333
g = b-d
N0 = 100.
e0 = 0.92
u0 = 1-e0
T = 50
LM0 = LakeModel(lamb,alpha,b,d)
x0 = LM0.find_steady_state()# initial conditions
print("Initial Steady State: %s" % x0)
LM1 = LakeModel(0.2,alpha,b,d)
xbar = LM1.find_steady_state() # new steady state
X_path = np.vstack(LM1.simulate_stock_path(x0*N0,T)) # simulate stocks
x_path = np.vstack(LM1.simulate_rate_path(x0,T)) # simulate rates
print("New Steady State: %s" % xbar)
plt.figure(figsize=[10,9])
plt.subplot(3,1,1)
plt.plot(X_path[:,0])
plt.title(r'Employment')
plt.subplot(3,1,2)
plt.plot(X_path[:,1])
plt.title(r'Unemployment')
plt.subplot(3,1,3)
plt.plot(X_path.sum(1))
plt.title(r'Labor Force')
plt.figure(figsize=[10,6])
plt.subplot(2,1,1)
plt.plot(x_path[:,0])
plt.hlines(xbar[0],0,T,'r','--')
plt.title(r'Employment Rate')
plt.subplot(2,1,2)
plt.plot(x_path[:,1])
plt.hlines(xbar[1],0,T,'r','--')
plt.title(r'Unemployment Rate')
bhat = 0.003
T_hat = 20
LM1 = LakeModel(lamb,alpha,bhat,d)
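# Simulate with the higher birth rate bhat for the first T_hat periods; the original parameters are restored below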
X_path1 = np.vstack(LM1.simulate_stock_path(x0*N0,T_hat)) # simulate stocks
x_path1 = np.vstack(LM1.simulate_rate_path(x0,T_hat)) # simulate rates
X_path2 = np.vstack(LM0.simulate_stock_path(X_path1[-1,:2],T-T_hat+1)) # simulate stocks
x_path2 = np.vstack(LM0.simulate_rate_path(x_path1[-1,:2],T-T_hat+1)) # simulate rates
x_path = np.vstack([x_path1,x_path2[1:]]) # note [1:] to avoid doubling period 20
X_path = np.vstack([X_path1,X_path2[1:]]) # note [1:] to avoid doubling period 20
plt.figure(figsize=[10,9])
plt.subplot(3,1,1)
plt.plot(X_path[:,0])
plt.title(r'Employment')
plt.subplot(3,1,2)
plt.plot(X_path[:,1])
plt.title(r'Unemployment')
plt.subplot(3,1,3)
plt.plot(X_path.sum(1))
plt.title(r'Labor Force')
plt.figure(figsize=[10,6])
plt.subplot(2,1,1)
plt.plot(x_path[:,0])
plt.hlines(x0[0],0,T,'r','--')
plt.title(r'Employment Rate')
plt.subplot(2,1,2)
plt.plot(x_path[:,1])
plt.hlines(x0[1],0,T,'r','--')
plt.title(r'Unemployment Rate')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now construct the class containing the initial conditions of the problem
Step2: New legislation changes $\lambda$ to $0.2$
Step3: Now plot stocks
Step4: And how the rates evolve
Step5: We see that it takes 20 periods for the economy to converge to its new steady state levels
Step6: We simulate for 20 periods at the new parameters
Step7: Now using the state after 20 periods for the new initial conditions we simulate for the additional 30 periods
Step8: Finally we combine these two paths and plot
Step9: And the rates
|
15,111 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.datasets as data
%matplotlib inline
sns.set_context('poster')
sns.set_style('white')
sns.set_color_codes()
plot_kwds = {'alpha' : 0.5, 's' : 80, 'linewidths':0}
moons, _ = data.make_moons(n_samples=50, noise=0.05)
blobs, _ = data.make_blobs(n_samples=50, centers=[(-0.75,2.25), (1.0, 2.0)], cluster_std=0.25)
test_data = np.vstack([moons, blobs])
plt.scatter(test_data.T[0], test_data.T[1], color='b', **plot_kwds)
import hdbscan
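# gen_min_span_tree=True keeps the minimum spanning tree so it can be plotted below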
clusterer = hdbscan.HDBSCAN(min_cluster_size=5, gen_min_span_tree=True)
clusterer.fit(test_data)
clusterer.minimum_spanning_tree_.plot(edge_cmap='Spectral', edge_linewidth=4)
clusterer.single_linkage_tree_.plot()
clusterer.condensed_tree_.plot()
clusterer.condensed_tree_.plot(select_clusters=True)
cluster_colors = [sns.color_palette(desat=sat)[col]
if col >= 0 else (0.5, 0.5, 0.5) for col, sat in
zip(clusterer.labels_, clusterer.probabilities_)]
plt.scatter(test_data.T[0], test_data.T[1], c=cluster_colors, **plot_kwds)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The next thing we'll need is some data. To make for an illustrative example we'll need the data size to be fairly small so we can see what is going on. It will also be useful to have several clusters, preferably of different kinds. Fortunately sklearn has facilities for generating sample clustering data so I'll make use of that and make a dataset of one hundred data points.
Step2: Now, the best way to explain HDBSCAN is actually just use it and then go through the steps that occurred along the way teasing out what is happening at each step. So let's load up the hdbscan library and get to work.
Step3: So now that we have clustered the data -- what actually happened? We can break it out into a series of steps
Step4: Build the cluster hierarchy
Step5: This brings us to the point where robust single linkage stops. We want more though; a cluster hierarchy is good, but we really want a set of flat clusters. We could do that by drawing a horizontal line through the above diagram and selecting the clusters that it cuts through. This is in practice what DBSCAN effectively does (declaring any singleton clusters at the cut level as noise). The question is, how do we know where to draw that line? DBSCAN simply leaves that as a (very unintuitive) parameter. Worse, we really want to deal with variable density clusters and any choice of cut line is a choice of mutual reachability distance to cut at, and hence a single fixed density level. Ideally we want to be able to cut the tree at different places to select our clusters. This is where the next steps of HDBSCAN begin and create the difference from robust single linkage.
Step6: This is much easier to look at and deal with, particularly in as simple a clustering problem as our current test dataset. However we still need to pick out clusters to use as a flat clustering. Looking at the plot above should give you some ideas about how one might go about doing this.
Step7: Now that we have the clusters it is a simple enough matter to turn that into cluster labelling as per the sklearn API. Any point not in a selected cluster is simply a noise point (and assigned the label -1). We can do a little more though
|
15,112 | <ASSISTANT_TASK:>
Python Code:
try:
%load_ext autoreload
%autoreload 2
%reset
except:
print 'NOT IPYTHON'
from __future__ import division
import os
import sys
import glob
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import statsmodels.api as sm
import scipy.io as io
import h5py
import matplotlib
import matplotlib.pyplot as plt
echobase_path = '/Users/akhambhati/Developer/hoth_research/Echobase'
#echobase_path = '/data/jag/akhambhati/hoth_research/Echobase'
sys.path.append(echobase_path)
import Echobase
convert_conn_vec_to_adj_matr = Echobase.Network.Transforms.configuration.convert_conn_vec_to_adj_matr
convert_adj_matr_to_cfg_matr = Echobase.Network.Transforms.configuration.convert_adj_matr_to_cfg_matr
rcParams = Echobase.Plotting.fig_format.update_rcparams(matplotlib.rcParams)
path_Remotes = '/Users/akhambhati/Remotes'
#path_Remotes = '/data/jag/bassett-lab/akhambhati'
path_CoreData = path_Remotes + '/CORE.fMRI_cogcontrol.medaglia'
path_PeriphData = path_Remotes + '/RSRCH.NMF_CogControl'
path_ExpData = path_PeriphData + '/e01-FuncNetw'
path_AtlasData = path_Remotes + '/CORE.MRI_Atlases'
path_Figures = './e01-Figures'
for path in [path_CoreData, path_PeriphData, path_ExpData, path_Figures]:
if not os.path.exists(path):
print('Path: {}, does not exist'.format(path))
os.makedirs(path)
# Load BOLD
df_navon = io.loadmat('{}/NavonBlockedSeriesScale125.mat'.format(path_CoreData), struct_as_record=False)
df_stroop = io.loadmat('{}/StroopBlockedSeriesScale125.mat'.format(path_CoreData), struct_as_record=False)
n_subj = 28
n_fix_block = 12 # Disregard the final fixation block
n_tsk_block = 6
n_roi = 262
bad_roi = [242]
n_good_roi = n_roi-len(bad_roi)
# Load Motion Data
df_motion = {'Stroop': io.loadmat('{}/StroopMove.mat'.format(path_CoreData))['move'][:, 0],
'Navon': io.loadmat('{}/NavonMove.mat'.format(path_CoreData))['move'][:, 0]}
# Load Behavioral Data
df_blk = io.loadmat('{}/BlockwiseDataCorrectTrialsOnly.mat'.format(path_CoreData))
bad_subj_ix = [1, 6]
good_subj_ix = np.setdiff1d(np.arange(n_subj+2), bad_subj_ix)
df_perf = {'Stroop': {'lo': {'accuracy': df_blk['StroopData'][good_subj_ix, 1, :],
'meanRT': df_blk['StroopData'][good_subj_ix, 4, :],
'medianRT': df_blk['StroopData'][good_subj_ix, 5, :]},
'hi': {'accuracy': df_blk['StroopData'][good_subj_ix, 0, :],
'meanRT': df_blk['StroopData'][good_subj_ix, 2, :],
'medianRT': df_blk['StroopData'][good_subj_ix, 3, :]}
},
'Navon' : {'lo': {'accuracy': df_blk['NavonData'][good_subj_ix, 1, :],
'meanRT': df_blk['NavonData'][good_subj_ix, 4, :],
'medianRT': df_blk['NavonData'][good_subj_ix, 5, :]},
'hi': {'accuracy': df_blk['NavonData'][good_subj_ix, 0, :],
'meanRT': df_blk['NavonData'][good_subj_ix, 2, :],
'medianRT': df_blk['NavonData'][good_subj_ix, 3, :]}
}
}
def comp_fconn(bold, alpha=0.05, dependent=False):
n_roi, n_tr = bold.shape
adj = np.arctanh(np.corrcoef(bold))
cfg_vec = convert_adj_matr_to_cfg_matr(adj.reshape(-1, n_roi, n_roi))[0, :]
# Separate edges based on sign
cfg_vec_pos = cfg_vec.copy()
cfg_vec_pos[cfg_vec_pos < 0] = 0
cfg_vec_neg = -1*cfg_vec.copy()
cfg_vec_neg[cfg_vec_neg < 0] = 0
adj_pos = convert_conn_vec_to_adj_matr(cfg_vec_pos)
adj_neg = convert_conn_vec_to_adj_matr(cfg_vec_neg)
return adj_pos, adj_neg
for subj_id in xrange(n_subj):
proc_item = '{}/Subject_{}.Navon'.format(path_ExpData, subj_id)
print(proc_item)
adj_dict = {'lo': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))}
},
'hi': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
}}
# Process Fixation Blocks
cnt = 0
for fix_block in xrange(n_fix_block):
data = np.array(df_navon['data'][subj_id][fix_block].NFix, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
if (fix_block % 2) == 0:
adj_dict['lo']['fix']['pos'][cnt, :, :], adj_dict['lo']['fix']['neg'][cnt, :, :] = comp_fconn(data)
if (fix_block % 2) == 1:
adj_dict['hi']['fix']['pos'][cnt, :, :], adj_dict['hi']['fix']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
# Process Task Blocks
cnt = 0
for tsk_block in xrange(n_tsk_block):
# Low demand
data = np.array(df_navon['data'][subj_id][tsk_block].NS, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['lo']['task']['pos'][cnt, :, :], adj_dict['lo']['task']['neg'][cnt, :, :] = comp_fconn(data)
# High demand
data = np.array(df_navon['data'][subj_id][tsk_block].S, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['hi']['task']['pos'][cnt, :, :], adj_dict['hi']['task']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
np.savez(proc_item, adj_dict=adj_dict)
for subj_id in xrange(n_subj):
proc_item = '{}/Subject_{}.Stroop'.format(path_ExpData, subj_id)
print(proc_item)
adj_dict = {'lo': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))}
},
'hi': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
}}
# Process Fixation Blocks
cnt = 0
for fix_block in xrange(n_fix_block):
data = np.array(df_stroop['data'][subj_id][fix_block].SFix, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
if (fix_block % 2) == 0:
adj_dict['lo']['fix']['pos'][cnt, :, :], adj_dict['lo']['fix']['neg'][cnt, :, :] = comp_fconn(data)
if (fix_block % 2) == 1:
adj_dict['hi']['fix']['pos'][cnt, :, :], adj_dict['hi']['fix']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
# Process Task Blocks
cnt = 0
for tsk_block in xrange(n_tsk_block):
# Low demand
data = np.array(df_stroop['data'][subj_id][tsk_block].IE, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['lo']['task']['pos'][cnt, :, :], adj_dict['lo']['task']['neg'][cnt, :, :] = comp_fconn(data)
# High demand
data = np.array(df_stroop['data'][subj_id][tsk_block].E, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['hi']['task']['pos'][cnt, :, :], adj_dict['hi']['task']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
np.savez(proc_item, adj_dict=adj_dict)
expr_dict = {}
for expr_id in ['Stroop', 'Navon']:
df_list = glob.glob('{}/Subject_*.{}.npz'.format(path_ExpData, expr_id))
for df_subj in df_list:
subj_id = int(df_subj.split('/')[-1].split('.')[0].split('_')[1])
if subj_id not in expr_dict.keys():
expr_dict[subj_id] = {}
expr_dict[subj_id][expr_id] = df_subj
# Generate a dictionary of all key names
cfg_key_names = ['Subject_ID', 'Experiment_ID', 'Condition_ID', 'Task_ID', 'CorSign_ID', 'Block_ID']
cfg_key_label = {'Subject_ID': np.arange(n_subj),
'Experiment_ID': ['Stroop', 'Navon'],
'Condition_ID': ['lo', 'hi'],
'Task_ID': ['fix', 'task'],
'CorSign_ID': ['pos', 'neg'],
'Block_ID': np.arange(n_tsk_block)}
cfg_obs_lut = np.zeros((len(cfg_key_label[cfg_key_names[0]]),
len(cfg_key_label[cfg_key_names[1]]),
len(cfg_key_label[cfg_key_names[2]]),
len(cfg_key_label[cfg_key_names[3]]),
len(cfg_key_label[cfg_key_names[4]]),
len(cfg_key_label[cfg_key_names[5]])))
# Iterate over all cfg key labels and generate a LUT matrix and a config matrix
key_cnt = 0
cfg_matr = []
for key_0_ii, key_0_id in enumerate(cfg_key_label[cfg_key_names[0]]):
for key_1_ii, key_1_id in enumerate(cfg_key_label[cfg_key_names[1]]):
adj_dict = np.load(expr_dict[key_0_id][key_1_id])['adj_dict'][()]
for key_2_ii, key_2_id in enumerate(cfg_key_label[cfg_key_names[2]]):
for key_3_ii, key_3_id in enumerate(cfg_key_label[cfg_key_names[3]]):
for key_4_ii, key_4_id in enumerate(cfg_key_label[cfg_key_names[4]]):
for key_5_ii, cfg_vec in enumerate(convert_adj_matr_to_cfg_matr(adj_dict[key_2_id][key_3_id][key_4_id])):
cfg_obs_lut[key_0_ii, key_1_ii, key_2_ii,
key_3_ii, key_4_ii, key_5_ii] = key_cnt
cfg_matr.append(cfg_vec)
key_cnt += 1
cfg_matr = np.array(cfg_matr)
cfg_matr_orig = cfg_matr.copy()
# Normalize sum of edge weights to 1
cfg_L1 = np.linalg.norm(cfg_matr, axis=1, ord=1)
cfg_L1[cfg_L1 == 0] = 1.0
cfg_matr = (cfg_matr.T / cfg_L1).T
# Rescale edge weight to unit L2-Norm
cfg_L2 = np.zeros_like(cfg_matr)
for subj_ii in xrange(len(cfg_key_label['Subject_ID'])):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :].reshape(-1), dtype=int)
cfg_L2[grp_ix, :] = np.linalg.norm(cfg_matr[grp_ix, :], axis=0, ord=2)
cfg_L2[cfg_L2 == 0] = 1.0
cfg_matr = cfg_matr / cfg_L2
np.savez('{}/Population.Configuration_Matrix.npz'.format(path_ExpData),
cfg_matr_orig=cfg_matr_orig,
cfg_matr=cfg_matr,
cfg_L2=cfg_L2,
cfg_obs_lut=cfg_obs_lut,
cfg_key_label=cfg_key_label,
cfg_key_names=cfg_key_names)
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = []
for grp_ii in xrange(n_grp):
grp_ix = np.array(cfg_obs_lut[grp_ii, :, :, :, :, :].reshape(-1), dtype=int)
grp_edge_wt.append(np.mean(cfg_matr[grp_ix, :], axis=1))
grp_edge_wt = np.array(grp_edge_wt)
mean_grp_edge_wt = np.mean(grp_edge_wt, axis=1)
grp_ord_ix = np.argsort(mean_grp_edge_wt)[::-1]
### Plot Subject Distribution
print(stats.f_oneway(*(grp_edge_wt)))
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt[grp_ord_ix, :].T, sym='', patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels([])
ax.set_xlabel('Subjects')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Subjects.svg'.format(path_Figures))
plt.show()
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['CorSign_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, :, :, grp_ii, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
""
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['CorSign_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.CorSign.svg'.format(path_Figures))
plt.show()
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Task_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, :, grp_ii, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Task_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Task.svg'.format(path_Figures))
plt.show()
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Condition_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, grp_ii, :, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Condition_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Condition.svg'.format(path_Figures))
plt.show()
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Experiment_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][grp_ii, :, :, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Experiment_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Expriment.svg'.format(path_Figures))
plt.show()
perf_stroop_hi = df_perf['Stroop']['hi']['meanRT'].mean(axis=1)
perf_stroop_lo = df_perf['Stroop']['lo']['meanRT'].mean(axis=1)
perf_stroop_cost = perf_stroop_hi-perf_stroop_lo
perf_navon_hi = df_perf['Navon']['hi']['meanRT'].mean(axis=1)
perf_navon_lo = df_perf['Navon']['lo']['meanRT'].mean(axis=1)
perf_navon_cost = perf_navon_hi-perf_navon_lo
print(stats.ttest_rel(perf_stroop_cost, perf_navon_cost))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot([perf_stroop_cost, perf_navon_cost], patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(2)])
#ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(['Stroop', 'Navon'])
ax.set_xlabel('')
ax.set_ylabel('Reaction Time Cost (Hi-Lo)')
plt.savefig('{}/RT_Cost.Expriment.svg'.format(path_Figures))
plt.show()
import nibabel as nib
df_yeo_atlas = nib.load('{}/Yeo_JNeurophysiol11_MNI152/Yeo2011_7Networks_MNI152_FreeSurferConformed1mm_LiberalMask.nii.gz'.format(path_AtlasData))
yeo_matr = df_yeo_atlas.get_data()[..., 0]
yeo_roi = np.unique(yeo_matr)[1:]
yeo_names = ['VIS', 'SMN', 'DAN', 'VAN', 'LIM', 'FPN', 'DMN']
yeo_xyz = {}
M = df_yeo_atlas.affine[:3, :3]
abc = df_yeo_atlas.affine[:3, 3]
for yeo_id in yeo_roi:
yeo_ijk = np.array(np.nonzero(yeo_matr == yeo_id)).T
yeo_xyz[yeo_id] = M.dot(yeo_ijk.T).T + abc.T
df_laus_atlas = nib.load('{}/Lausanne/ROIv_scale125_dilated.nii.gz'.format(path_AtlasData))
laus_matr = df_laus_atlas.get_data()
laus_roi = np.unique(laus_matr)[1:]
laus_xyz = {}
M = df_laus_atlas.affine[:3, :3]
abc = df_laus_atlas.affine[:3, 3]
for laus_id in laus_roi:
laus_ijk = np.array(np.nonzero(laus_matr == laus_id)).T
laus_xyz[laus_id] = M.dot(laus_ijk.T).T + abc.T
laus_yeo_assign = []
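# Assign each Lausanne ROI to the Yeo system whose nearest voxel has the minimum squared distance from the ROI centroid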
for laus_id in laus_roi:
dists = []
for yeo_id in yeo_roi:
dists.append(np.min(np.sum((yeo_xyz[yeo_id] - laus_xyz[laus_id].mean(axis=0))**2, axis=1)))
laus_yeo_assign.append(yeo_names[np.argmin(dists)])
laus_yeo_assign = np.array(laus_yeo_assign)
pd.DataFrame(laus_yeo_assign).to_csv('{}/Lausanne/ROIv_scale125_dilated.Yeo2011_7Networks_MNI152.csv'.format(path_AtlasData))
# Manually replaced subcortical and cerebellar structures as SUB and CBR, respectively.
# Read in Yeo Atlas
df_laus_yeo = pd.read_csv('{}/LausanneScale125.csv'.format(path_CoreData))
df_laus_yeo = df_laus_yeo[df_laus_yeo.Label_ID != bad_roi[0]+1]
system_lbl = np.array(df_laus_yeo['Yeo2011_7Networks'].as_matrix())
system_name = np.unique(df_laus_yeo['Yeo2011_7Networks'])
n_system = len(system_name)
n_roi = len(system_lbl)
triu_ix, triu_iy = np.triu_indices(n_roi, k=1)
sys_triu_ix, sys_triu_iy = np.triu_indices(n_system, k=0)
# Reorder System Labels and Count ROIs per System
system_srt_ix = np.argsort(system_lbl)
system_cnt = np.array([len(np.flatnonzero(system_lbl == sys_name))
for sys_name in system_name])
system_demarc = np.concatenate(([0], np.cumsum(system_cnt)))
np.savez('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData),
df_laus_yeo=df_laus_yeo,
yeo_lbl=system_lbl,
yeo_name=system_name,
sort_laus_to_yeo=system_srt_ix,
yeo_adj_demarc=system_demarc,
laus_triu=np.triu_indices(n_roi, k=1),
yeo_triu=np.triu_indices(n_system, k=0))
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
plt.figure(figsize=(5,5));
cnt = 0
for expr_ii, expr_id in enumerate(df['cfg_key_label'][()]['Experiment_ID']):
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
grp_ix = np.array(cfg_obs_lut[:, expr_ii, :, :, :, :][:, :, :, sgn_ii, :].reshape(-1), dtype=int)
sel_cfg_matr = cfg_matr[grp_ix, :].mean(axis=0)
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
# Plot
ax = plt.subplot(2, 2, cnt+1)
mat = ax.matshow(adj_yeo,
cmap='magma', vmin=0.025)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='w', lw=0.5)
ax.hlines(xx, 0, n_laus, color='w', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_title('{}-{}'.format(expr_id, sgn_id), fontsize=5.0)
cnt += 1
plt.show()
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_matr = df['cfg_matr']
# Compute Brain System Adjacency Matrices
sys_adj_matr = np.zeros((cfg_matr.shape[0], n_system, n_system))
for sys_ii, (sys_ix, sys_iy) in enumerate(zip(sys_triu_ix, sys_triu_iy)):
sys1 = system_name[sys_ix]
sys2 = system_name[sys_iy]
sys1_ix = np.flatnonzero(system_lbl[triu_ix] == sys1)
sys2_iy = np.flatnonzero(system_lbl[triu_iy] == sys2)
inter_sys_ii = np.intersect1d(sys1_ix, sys2_iy)
if len(inter_sys_ii) == 0:
sys1_ix = np.flatnonzero(system_lbl[triu_ix] == sys2)
sys2_iy = np.flatnonzero(system_lbl[triu_iy] == sys1)
inter_sys_ii = np.intersect1d(sys1_ix, sys2_iy)
mean_conn_sys1_sys2 = np.mean(cfg_matr[:, inter_sys_ii], axis=1)
sys_adj_matr[:, sys_ix, sys_iy] = mean_conn_sys1_sys2
sys_adj_matr[:, sys_iy, sys_ix] = mean_conn_sys1_sys2
np.savez('{}/Full_Adj.Yeo2011_7Networks.npz'.format(path_ExpData),
sys_adj_matr=sys_adj_matr,
cfg_obs_lut=df['cfg_obs_lut'],
cfg_key_label=df['cfg_key_label'],
cfg_key_names=df['cfg_key_names'])
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
coef_ix = np.array(cfg_obs_lut, dtype=int)
cfg_matr_reshape = cfg_matr[coef_ix, :]
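# Axes of cfg_matr_reshape follow the LUT key order: subject, experiment, condition, task, correlation sign, block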
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
sel_cfg_matr = (cfg_matr_reshape[:, :, :, 1, sgn_ii, :, :]).mean(axis=-2).mean(axis=-2)
sel_cfg_matr_tv = np.nan*np.zeros(cfg_matr.shape[1])
sel_cfg_matr_pv = np.nan*np.zeros(cfg_matr.shape[1])
for cc in xrange(cfg_matr.shape[1]):
tv, pv = stats.ttest_rel(*sel_cfg_matr[:, :, cc].T)
mean_stroop = np.mean(sel_cfg_matr[:, :, cc], axis=0)[0]
mean_navon = np.mean(sel_cfg_matr[:, :, cc], axis=0)[1]
dv = (mean_stroop - mean_navon) / np.std(sel_cfg_matr[:, :, cc].reshape(-1))
sel_cfg_matr_tv[cc] = dv
sel_cfg_matr_pv[cc] = pv
sig_pv = Echobase.Statistics.FDR.fdr.bhp(sel_cfg_matr_pv, alpha=0.05, dependent=True)
sel_cfg_matr_tv[sig_pv == False] = 0.0
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr_tv)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
adj_yeo[np.diag_indices_from(adj_yeo)] = np.nan
# Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
mat = ax.matshow(adj_yeo,
cmap='PuOr', vmin=-1.0, vmax=1.0)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='k', lw=0.5)
ax.hlines(xx, 0, n_laus, color='k', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
plt.savefig('{}/Contrast.Expr.{}.svg'.format(path_Figures, sgn_id))
plt.show()
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
for expr_ii, expr_id in enumerate(df['cfg_key_label'][()]['Experiment_ID']):
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
coef_ix = np.array(cfg_obs_lut, dtype=int)
cfg_matr_reshape = cfg_matr[coef_ix, :]
sel_cfg_matr = cfg_matr_reshape[:, expr_ii, :, 1, sgn_ii, :, :].mean(axis=-2)
sel_cfg_matr_tv = np.nan*np.zeros(cfg_matr.shape[1])
sel_cfg_matr_pv = np.nan*np.zeros(cfg_matr.shape[1])
for cc in xrange(cfg_matr.shape[1]):
tv, pv = stats.ttest_rel(*sel_cfg_matr[:, :, cc].T)
mean_lo = np.mean(sel_cfg_matr[:, :, cc], axis=0)[0]
mean_hi = np.mean(sel_cfg_matr[:, :, cc], axis=0)[1]
dv = (mean_hi - mean_lo) / np.std(sel_cfg_matr[:, :, cc].reshape(-1))
sel_cfg_matr_tv[cc] = dv
sel_cfg_matr_pv[cc] = pv
sig_pv = Echobase.Statistics.FDR.fdr.bhp(sel_cfg_matr_pv, alpha=0.05, dependent=True)
sel_cfg_matr_tv[sig_pv == False] = np.nan
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr_tv)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
adj_yeo[np.diag_indices_from(adj_yeo)] = np.nan
# Plot
plt.figure(figsize=(3,3), dpi=300)
ax = plt.subplot(111)
mat = ax.matshow(adj_yeo,
cmap='coolwarm', vmin=-0.5, vmax=0.5)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='k', lw=0.5)
ax.hlines(xx, 0, n_laus, color='k', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_title('{}-{}'.format(expr_id, sgn_id), fontsize=5.0)
plt.savefig('{}/Contrast.{}.{}.Hi_Lo.svg'.format(path_Figures, expr_id, sgn_id))
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load CoreData
Step2: Compute Functional Connectivity
Step3: Process Navon
Step4: Process Stroop
Step5: Generate Population Configuration Matrix
Step6: Create Lookup-Table and Full Configuration Matrix
Step7: Checking Correlation Biases
Step8: Positive vs Negative
Step9: Fixation vs Task
Step10: Within Experiment (Hi vs Lo)
Step11: Between Experiment (Stroop vs Navon)
Step12: Performance Between Experiment
Step13: System-Level Connectivity
Step14: System-Level Adjacency Matrices
Step15: Plot Population Average Adjacency Matrices (Expr + Pos/Neg)
Step16: Construct System Adjacency Matrices
Step17: Check Contrasts
Step18: Lo vs Hi
|
15,113 | <ASSISTANT_TASK:>
Python Code:
from collatex import *
collation = Collation()
collation.add_plain_witness("A", "Peter's cat.")
collation.add_plain_witness("B", "Peter's dog.")
table = collate(collation, segmentation=False)
print(table)
from collatex import *
tokens_a = [ { "t": "Peter's" }, { "t": "cat" }, { "t": "." } ]
tokens_b = [ { "t": "Peter's" }, { "t": "dog" }, { "t": "." } ]
witness_a = { "id": "A", "tokens": tokens_a }
witness_b = { "id": "B", "tokens": tokens_b }
input = { "witnesses": [ witness_a, witness_b ] }
table = collate(input, segmentation=False)
print(table)
input = "Peter's cat."
print(input)
import re
input = "Peter's cat."
words = re.split(r'\s+', input)
print(words)
import re
input = "Peter's cat."
words = re.split(r'\s+', input)
tokens_by_word = [re.findall(r'.+\w|\W+$', word) for word in words]
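# Keep text ending in a word character as one token; split off trailing non-word characters (e.g. final punctuation)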
print(tokens_by_word)
import re
input = "Peter's cat."
words = re.split(r'\s+', input)
tokens_by_word = [re.findall(r'.+\w|\W+$', word) for word in words]
tokens = []
for item in tokens_by_word:
tokens.extend(item)
print(tokens)
import re
input = "Peter's cat."
words = re.split(r'\s+', input)
tokens_by_word = [re.findall(r'.+\w|\W+$', word) for word in words]
tokens = []
for item in tokens_by_word:
tokens.extend(item)
token_list = [{"t": token} for token in tokens]
print(token_list)
from collatex import *
import re
def tokenize(input):
words = re.split(r'\s+', input) # split on whitespace
tokens_by_word = [re.findall(r'.+\w|\W+$', word) for word in words] # break off final punctuation
tokens = []
for item in tokens_by_word:
tokens.extend(item)
token_list = [{"t": token} for token in tokens] # create dictionaries for each token
return token_list
input_a = "Peter's cat."
input_b = "Peter's dog."
tokens_a = tokenize(input_a)
tokens_b = tokenize(input_b)
witness_a = { "id": "A", "tokens": tokens_a }
witness_b = { "id": "B", "tokens": tokens_b }
input = { "witnesses": [ witness_a, witness_b ] }
table = collate(input, segmentation=False)
print(table)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For possessives, that may be acceptable behavior, but how about contractions like “didn't” or “A'dam” (short for “Amsterdam”)? If the default tokenization does what you need, so much the better, but if not, you can override it according to your own requirements. Below we describe what CollateX does by default and how to override that behavior and perform your own tokenization.
Step2: Automating the tokenization
Step3: and split it into a list of whitespace-separated words
Step4: Now let’s treat final punctuation as a separate token without splitting on internal punctuation
Step5: The regex says that a token is either a string of any characters that ends in a word character (which will match “Peter’s” with the internal apostrophe as one token, since it ends in “s”, which is a word character) or a string of non-word characters (which will separate “cat.” into two tokens, since as one it wouldn’t end on a word character).
Step6: We’ve now split our witness text into tokens, but instead of returning them as a list of strings, we need to format them into the list of Python dictionaries that CollateX requires
Step7: Since we want to tokenize all of our witnesses, let’s turn our tokenization routine into a Python function that we can call with different input text
|
15,114 | <ASSISTANT_TASK:>
Python Code:
!pip install -U -q google-api-python-client
!pip install -U -q pandas
PROJECT_ID = "sa-data-validation"
MODEL_NAME = 'covertype_classifier'
VERSION_NAME = 'v1'
BQ_DATASET_NAME = 'prediction_logs'
BQ_TABLE_NAME = 'covertype_classifier_logs'
!gcloud config set project $PROJECT_ID
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import pandas as pd
from google.cloud import bigquery
HEADER = ['Elevation', 'Aspect', 'Slope','Horizontal_Distance_To_Hydrology',
'Vertical_Distance_To_Hydrology', 'Horizontal_Distance_To_Roadways',
'Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm',
'Horizontal_Distance_To_Fire_Points', 'Wilderness_Area', 'Soil_Type',
'Cover_Type']
TARGET_FEATURE_NAME = 'Cover_Type'
FEATURE_LABELS = ['0', '1', '2', '3', '4', '5', '6']
NUMERIC_FEATURE_NAMES = ['Aspect', 'Elevation', 'Hillshade_3pm',
'Hillshade_9am', 'Hillshade_Noon',
'Horizontal_Distance_To_Fire_Points',
'Horizontal_Distance_To_Hydrology',
'Horizontal_Distance_To_Roadways','Slope',
'Vertical_Distance_To_Hydrology']
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
'Soil_Type': ['2702', '2703', '2704', '2705', '2706', '2717', '3501', '3502',
'4201', '4703', '4704', '4744', '4758', '5101', '6101', '6102',
'6731', '7101', '7102', '7103', '7201', '7202', '7700', '7701',
'7702', '7709', '7710', '7745', '7746', '7755', '7756', '7757',
'7790', '8703', '8707', '8708', '8771', '8772', '8776'],
'Wilderness_Area': ['Cache', 'Commanche', 'Neota', 'Rawah']
}
FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()) + NUMERIC_FEATURE_NAMES
HEADER_DEFAULTS = [[0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ['NA']
for feature_name in HEADER]
NUM_CLASSES = len(FEATURE_LABELS)
LABEL_KEY = 'predicted_label'
SCORE_KEY = 'confidence'
SIGNATURE_NAME = 'serving_default'
def _extract_json(column, feature_name):
return "JSON_EXTRACT({}, '$.{}')".format(column, feature_name)
def _replace_brackets(field):
return "REPLACE(REPLACE({}, ']', ''), '[','')".format(field)
def _replace_quotes(field):
return 'REPLACE({}, "\\"","")'.format(field)
def _cast_to_numeric(field):
return "CAST({} AS NUMERIC)".format(field)
def _add_alias(field, feature_name):
return "{} AS {}".format(field, feature_name)
view_name = "vw_"+BQ_TABLE_NAME+"_"+VERSION_NAME
colum_names = FEATURE_NAMES
input_features = ', \r\n '.join(colum_names)
json_features_extraction = []
for feature_name in colum_names:
field = _extract_json('instance', feature_name)
field = _replace_brackets(field)
if feature_name in NUMERIC_FEATURE_NAMES:
field = _cast_to_numeric(field)
else:
field = _replace_quotes(field)
field = _add_alias(field, feature_name)
json_features_extraction.append(field)
json_features_extraction = ', \r\n '.join(json_features_extraction)
json_prediction_extraction = []
for feature_name in [LABEL_KEY, SCORE_KEY]:
field = _extract_json('prediction', feature_name)
field = _replace_brackets(field)
if feature_name == SCORE_KEY:
field = _cast_to_numeric(field)
else:
field = _replace_quotes(field)
field = _add_alias(field, feature_name)
json_prediction_extraction.append(field)
json_prediction_extraction = ', \r\n '.join(json_prediction_extraction)
sql_script = '''
CREATE OR REPLACE VIEW @dataset_name.@view_name
AS
WITH step1
AS
(
SELECT
model,
model_version,
time,
SPLIT(JSON_EXTRACT(raw_data, '$.instances'), '}],[{') instance_list,
SPLIT(JSON_EXTRACT(raw_prediction, '$.predictions'), '}],[{') as prediction_list
FROM
`@project.@dataset_name.@table_name`
WHERE
model = '@model_name' AND
model_version = '@version'
),
step2
AS
(
SELECT
model,
model_version,
time,
REPLACE(REPLACE(instance, '[{', '{'),'}]', '}') AS instance,
REPLACE(REPLACE(prediction, '[{', '{'),'}]', '}') AS prediction,
FROM step1
JOIN UNNEST(step1.instance_list) AS instance
WITH OFFSET AS f1
JOIN UNNEST(step1.prediction_list) AS prediction
WITH OFFSET AS f2
ON f1=f2
),
step3 AS
(
SELECT
model,
model_version,
time,
@json_features_extraction,
@json_prediction_extraction
FROM step2
)
SELECT *
FROM step3
'''
sql_script = sql_script.replace("@project", PROJECT_ID)
sql_script = sql_script.replace("@dataset_name", BQ_DATASET_NAME)
sql_script = sql_script.replace("@table_name", BQ_TABLE_NAME)
sql_script = sql_script.replace("@view_name", view_name)
sql_script = sql_script.replace("@model_name", MODEL_NAME)
sql_script = sql_script.replace("@version", VERSION_NAME)
sql_script = sql_script.replace("@input_features", input_features)
sql_script = sql_script.replace("@json_features_extraction", json_features_extraction)
sql_script = sql_script.replace("@json_prediction_extraction", json_prediction_extraction)
print(sql_script)
client = bigquery.Client(PROJECT_ID)
client.query(query = sql_script)
print("View was created or replaced.")
query = '''
SELECT * FROM
`{}.{}`
LIMIT {}
'''.format(BQ_DATASET_NAME, view_name, 3)
pd.io.gbq.read_gbq(
query, project_id=PROJECT_ID).T
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configure Google Cloud environment settings
Step2: Authenticate your Google Cloud account
Step3: Import libraries
Step4: 1. Define dataset metadata
Step5: 2. Generate the CREATE VIEW script
Step6: Optionally, print the generated script
Step7: 3. Execute the CREATE VIEW script
Step8: 4. Query the view
|
15,115 | <ASSISTANT_TASK:>
Python Code:
import pmagpy.ipmag as ipmag
import pmagpy.pmag as pmag
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_formats = {'svg',}
%%capture
ipmag.download_magic('magic_contribution_11087.txt',
dir_path='./Data/Halls1974',
input_dir_path='./Data/Halls1974',
overwrite=True,print_progress=False)
ipmag.download_magic('magic_contribution_11088.txt',
dir_path='./Data/Swanson-Hysell2014',
input_dir_path='./Data/Swanson-Hysell2014',
overwrite=True,print_progress=False)
Halls1974_sites = pd.read_csv('./Data/Halls1974/pmag_sites.txt',
sep='\t',skiprows=1)
Halls1974_sites.head(4)
Halls1974_sites_r = Halls1974_sites.ix[Halls1974_sites.site_polarity=='r']
Halls1974_sites_r_tc = Halls1974_sites_r.ix[Halls1974_sites_r.site_tilt_correction==100]
Halls1974_sites_r_tc.reset_index(inplace=True)
Halls1974_sites_r_is = Halls1974_sites_r.ix[Halls1974_sites_r.site_tilt_correction==0]
Halls1974_sites_r_is.reset_index(inplace=True)
!mkdir 'Example_Notebook_Output'
Halls1974_r_is_mean = ipmag.fisher_mean(Halls1974_sites_r_is.site_dec.tolist(),
Halls1974_sites_r_is.site_inc.tolist())
Halls1974_r_tc_mean = ipmag.fisher_mean(Halls1974_sites_r_tc.site_dec.tolist(),
Halls1974_sites_r_tc.site_inc.tolist())
print Halls1974_r_tc_mean
print 'The mean for the tilt-corrected Halls (1974) Osler directions is:'
ipmag.print_direction_mean(Halls1974_r_tc_mean)
plt.figure(num=1,figsize=(4,4))
ipmag.plot_net(fignum=1)
ipmag.plot_di(Halls1974_sites_r_is.site_dec.tolist(),
Halls1974_sites_r_is.site_inc.tolist(),color='r',
label='Halls (1974) site means (in situ)')
ipmag.plot_di(Halls1974_sites_r_tc.site_dec.tolist(),
Halls1974_sites_r_tc.site_inc.tolist(),color='b',
label='Halls (1974) site means (tilt-corrected)')
plt.legend(loc=9)
plt.savefig('Example_Notebook_Output/Halls_1974_sites.svg')
plt.figure(num=1,figsize=(4,4))
ipmag.plot_net(fignum=1)
ipmag.plot_di_mean(Halls1974_r_tc_mean['dec'],
Halls1974_r_tc_mean['inc'],
Halls1974_r_tc_mean['alpha95'],'b',
label='Halls (1974) tilt-corrected mean')
ipmag.plot_di_mean(Halls1974_r_is_mean['dec'],
Halls1974_r_is_mean['inc'],
Halls1974_r_is_mean['alpha95'],'r',
label='Halls (1974) insitu mean')
plt.legend(loc=9)
plt.savefig('Example_Notebook_Output/Halls_1974_means.svg')
Halls1974_r_is_mean = ipmag.fisher_mean(Halls1974_sites_r_is.site_dec.tolist(),
Halls1974_sites_r_is.site_inc.tolist())
Halls1974_r_tc_mean = ipmag.fisher_mean(Halls1974_sites_r_tc.site_dec.tolist(),
Halls1974_sites_r_tc.site_inc.tolist())
means = pd.DataFrame([Halls1974_r_is_mean,Halls1974_r_tc_mean],
index=['Halls 1974 Osler R (insitu)','Halls 1974 Osler R (tilt-corrected)'])
means
!pmag_results_extract.py -f Data/Halls1974/pmag_results.txt
SH2014_sites = pd.read_csv('./Data/Swanson-Hysell2014/pmag_results.txt',
sep='\t',skiprows=1)
SH2014_sites.head(1)
SH2014_OslerR_upper = SH2014_sites.ix[SH2014_sites.average_height>2082]
SH2014_OslerR_upper.reset_index(inplace=True)
SH2014_OslerR_upper.head()
SH2014_upperR_dec_is = SH2014_OslerR_upper['tilt_dec_uncorr'].tolist()
SH2014_upperR_inc_is = SH2014_OslerR_upper['tilt_inc_uncorr'].tolist()
SH2014_upperR_dec_tc = SH2014_OslerR_upper['tilt_dec_corr'].tolist()
SH2014_upperR_inc_tc = SH2014_OslerR_upper['tilt_inc_corr'].tolist()
print SH2014_upperR_inc_tc
Halls1974_upperR_dec_is = Halls1974_sites_r_is['site_dec'].tolist()
Halls1974_upperR_inc_is = Halls1974_sites_r_is['site_inc'].tolist()
Halls1974_upperR_dec_tc = Halls1974_sites_r_tc['site_dec'].tolist()
Halls1974_upperR_inc_tc = Halls1974_sites_r_tc['site_inc'].tolist()
combined_upperR_dec_is = np.concatenate((SH2014_upperR_dec_is,
Halls1974_upperR_dec_is), axis=0)
combined_upperR_inc_is = np.concatenate((SH2014_upperR_inc_is,
Halls1974_upperR_inc_is), axis=0)
combined_upperR_dec_tc = np.concatenate((SH2014_upperR_dec_tc,
Halls1974_upperR_dec_tc), axis=0)
combined_upperR_inc_tc = np.concatenate((SH2014_upperR_inc_tc,
Halls1974_upperR_inc_tc), axis=0)
plt.figure(num=1,figsize=(4,4))
ipmag.plot_net(fignum=1)
ipmag.plot_di(combined_upperR_dec_is,
combined_upperR_inc_is,color='r', label='insitu directions')
ipmag.plot_di(combined_upperR_dec_tc,
combined_upperR_inc_tc,color='b', label='tilt-corrected directions')
plt.legend()
plt.show()
OslerUpper_is_mean = ipmag.fisher_mean(combined_upperR_dec_is,
combined_upperR_inc_is)
print "The Fisher mean of the insitu upper Osler R directions:"
ipmag.print_direction_mean(OslerUpper_is_mean)
print ''
OslerUpper_tc_mean = ipmag.fisher_mean(combined_upperR_dec_tc,
combined_upperR_inc_tc)
print "The Fisher mean of the tilt-corrected upper Osler R directions:"
ipmag.print_direction_mean(OslerUpper_tc_mean)
print ''
print 'The k_2/k_1 ratio is:'
print OslerUpper_tc_mean['k']/OslerUpper_is_mean['k']
OslerR_upper_diddd=[]
Halls1974=Halls1974_sites_r_is.T.to_dict().values()
Halls1974_sites,filetype=pmag.magic_read('./Data/Halls1974/er_sites.txt') # reads in the data
for site in Halls1974:
orientations=pmag.get_dictitem(Halls1974_sites,'er_site_name',
str(site['er_site_name']),'T')
if len(orientations)>0: # record found
OslerR_upper_diddd.append([site['site_dec'],site['site_inc'],
float(orientations[0]['site_bed_dip_direction']),
float(orientations[0]['site_bed_dip'])])
else:
print 'no orientations found for site, ',site['er_site_name']
SH2014=SH2014_OslerR_upper.T.to_dict().values()
SH2014_sites,filetype=pmag.magic_read('./Data/Swanson-Hysell2014/er_sites.txt')
for site in SH2014:
orientations=pmag.get_dictitem(SH2014_sites,'er_site_name',
str(site['er_site_names']),'T')
if len(orientations)>0: # record found
OslerR_upper_diddd.append([site['tilt_dec_uncorr'],site['tilt_inc_uncorr'],
float(orientations[0]['site_bed_dip_direction']),
float(orientations[0]['site_bed_dip'])])
else:
print 'no orientations found for site, ',site['er_site_names']
diddd=np.array(OslerR_upper_diddd)
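# Each row of diddd is [declination, inclination, bed dip direction, bed dip] -- the
# directions-plus-bedding format expected by the fold-test routine called below.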
ipmag.bootstrap_fold_test(diddd,num_sims=100, min_untilt=0, max_untilt=140)
Halls1974_results = pd.read_csv('./Data/Halls1974/pmag_results.txt',sep='\t',skiprows=1)
#filter so that individual results are shown filtering out mean poles
Halls1974_results_i = Halls1974_results.ix[Halls1974_results['data_type']=='i']
#filter so that reversed poles are included rather than normal poles
Halls1974_results_r = Halls1974_results_i.ix[Halls1974_results['er_location_names']=='Osler Volcanics, Nipigon Strait, Lower Reversed']
Halls1974_results_r.head()
SH_vgps = ipmag.make_di_block(SH2014_OslerR_upper['vgp_lon'].tolist(),
SH2014_OslerR_upper['vgp_lat'].tolist())
Halls_vgps = ipmag.make_di_block(Halls1974_results_r['vgp_lon'].tolist(),
Halls1974_results_r['vgp_lat'].tolist())
ipmag.common_mean_watson(SH_vgps,Halls_vgps,NumSims=1000,plot='yes')
Osler_upperR_pole = pmag.fisher_mean(SH_vgps+Halls_vgps)
ipmag.print_pole_mean(Osler_upperR_pole)
from mpl_toolkits.basemap import Basemap
m = Basemap(projection='ortho',lat_0=35,lon_0=200,resolution='c',area_thresh=50000)
plt.figure(figsize=(6, 6))
m.drawcoastlines(linewidth=0.25)
m.fillcontinents(color='bisque',zorder=1)
m.drawmeridians(np.arange(0,360,30))
m.drawparallels(np.arange(-90,90,30))
ipmag.plot_vgp(m,SH2014_OslerR_upper['vgp_lon'].tolist(),
SH2014_OslerR_upper['vgp_lat'].tolist(),
marker='o')
ipmag.plot_vgp(m,Halls1974_results_r['vgp_lon'].tolist(),
Halls1974_results_r['vgp_lat'].tolist(),
marker='o',label='Osler upper reversed VGPs')
ipmag.plot_pole(m,Osler_upperR_pole['dec'],
Osler_upperR_pole['inc'],
Osler_upperR_pole['alpha95'],
marker='s',label='Osler upper reversed pole')
plt.legend()
plt.savefig('Example_Notebook_Output/pole_plot.svg')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There are three other important Python libraries (which are bundled with the Canopy and Anaconda installations of Python) that come in quite handy and are used within this notebook
Step2: Background on the Osler Volcanic Group
Step3: Loading the unpacked data into Pandas Dataframes
Step4: Filtering by polarity and tilt-correction
Step5: Data analysis and visualization
Step6: Calculating Fisher means
Step7: Plotting the Halls (1974) results
Step8: A similar plot showing the Fisher means of the site means calculated above and their associated $\alpha_{95}$ confidence ellipses can be generated using the ipmag.plot_di_mean function.
Step9: The means that have been calculated are now dictionaries that can be made into a new dataframe to present the results. A table like this can be exported into a variety of formats (e.g. LaTeX, html, csv) for inclusion in a publication.
Step10: Alternatively, one can export the MagIC data table pmag_results.txt into a tab delimited or latex file using the PmagPy program
Step11: Combining and plotting the Halls (1974) and Swanson-Hysell et al. (2014) data
Step12: Swanson-Hysell et al. (2014) argued that data from the upper third of the Simpson Island stratigraphy should be compared with the reverse data from the Halls (1974) Nipigon Strait region study. The dataframe can be filtered using the average_height value from the pmag_results table.
Step13: Let's fish out the declinations and inclinations in geographic (in situ) coordinates (is) and those in tilt-corrected coordinates (tc) from the pmag_results table from the Swanson-Hysell2014 dataset and convert them from a dataframe object to a python list object. We can see what happened with a print command.
Step14: And now the same for the Halls1974 data table.
Step15: We can combine the data from the two papers using the numpy (np) concatenate function
Step16: Now we can plot the data!
Step17: The combined means can be calculated with the ipmag.fisher_mean function and printed out with some formatting with the ipmag.print_direction_mean function.
Step18: Fold test on the site mean directions
Step19: Let's start with the Halls (1974) data set in Halls1974_sites_r_is. It is handier for the MagIC data tables in this exercise to have a list of dictionaries instead of a Pandas data frame. So let's convert the Halls 1974 filtered dataframe to a list of dictionaries called Halls1974.
Step20: Now we can read in the data from the er_sites.txt table (with the bedding attitudes) using the function pmag.magic_read which reads in magic tables into a list of dictionaries. Then we will step through the records and pair each site with its bedding orientations using a handy function get_dict_item from the pmag module that will find correct site record. The function expects a list of dictionaries (Halls1974_sites), a key to filter on ('er_site_name'), the value of the key (pmag.magic_read reads things in as strings but because the names are numbers, pandas converted them to integers, hence the str(site['er_site_name']), and whether the two must be equal ('T'), not equal ('F'), contain ('has') or not contain ('not'). After finding the correct orientation record for the site, we can put the directions and bedding orientations together into the list OslerR_upper_diddd.
Step21: We can do the same for the filtered upper Osler sequence of Swanson-Hysell et al. (2014). These have slightly different keys, but the general idea is the same. In the cell below, we convert the dataframe to a list of dictionaries, fish out the bedding orientations and attach them to the same list as before (OslerR_upper_diddd).
Step22: Now all we have to do is make a numpy array out of the OslerR_upper_diddd and send it to the ipmag.bootstrap_fold_test function.
Step23: Developing a mean paleomagnetic pole
Step24: The data can be made into a list of [vgp_lon, vgp_lat] values using the ipmag.make_di_block function.
Step25: Conducting a common mean test
Step26: Calculating a mean paleomagnetic pole
Step27: Plotting VGPs and the mean pole
|
15,116 | <ASSISTANT_TASK:>
Python Code:
import folium
import pandas as pd
SYR = (43.0481, -76.1474)
map = folium.Map(location=SYR, zoom_start=14)
map
data = pd.read_csv('https://cityofsyracuse.github.io/RoadsChallenge/data/potholes.csv')
data.sample(5)
# NOTE: to_records() converts the pandas dataframe into records we can iterate over, accessing fields by column name
SYR = (43.0481, -76.1474)
map = folium.Map(location=SYR, zoom_start=14)
subset = data.sample(500)
for row in subset.to_records():
    coords = (row['Latitude'], row['Longitude'])  # folium expects (lat, lon) order
loc = str(row['strLocation']) + ' ' + str(row['dtTime'])
marker = folium.Circle(location=coords, radius=15, popup=loc,color='#3186cc',fill_color='#3186cc')
map.add_child(marker)
map
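# Optional extra (not in the original notebook): write the map to a standalone HTML file
# for sharing outside the notebook. The filename is arbitrary.
map.save('syracuse_potholes.html')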
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to find the latitude and longitude of Syracuse, then estimate the appropriate zoom level...
Step2: We get the data from the RoadsChallenge GitHub account
Step3: Now we take the latitude and longitude of each pothole and show them on a map using circle markers
|
15,117 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
PATH = os.getcwd()
train_df = pd.read_csv(f'{PATH}\\train.csv', low_memory=False)
test_df = pd.read_csv(f'{PATH}\\test.csv', low_memory=False)
def display_all(df):
with pd.option_context("display.max_rows", 100):
with pd.option_context("display.max_columns", 100):
display(df)
'''
**Problem Statement**
An online question and answer platform has hired you as a data scientist to identify the best question authors on the platform.
This identification will bring more insight into increasing the user engagement. Given the tag of the question, number of views
received, number of answers, username and reputation of the question author, the problem requires you to predict the upvote
count that the question will receive.
**DATA DICTIONARY**
- Variable ----------- Definition
- ID ----------- Question ID
- Tag ----------- Anonymised tags representing question category
- Reputation ----------- Reputation score of question author
- Answers ----------- Number of times question has been answered
- Username ----------- Anonymised user id of question author
- Views ----------- Number of times question has been viewed
- Upvotes (Target) ----------- Number of upvotes for the question
''';
train_df.head()
'Train', train_df.shape, len(set(train_df.ID.values)), len(set(train_df.Username.values)), 'Test', \
test_df.shape, len(set(test_df.ID.values)), len(set(test_df.Username.values))
def Intersection(lst1, lst2):
return len(list(set(lst1).intersection(lst2)))
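# Note: despite its name, this helper returns the *size* of the intersection, not the set itself;
# below it is used to count how many Username/Reputation/Views values are shared by train and test.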
Intersection(train_df.Username, test_df.Username),\
Intersection(train_df.Reputation, test_df.Reputation),\
Intersection(train_df.Views, test_df.Views)
man_train_list = train_df.Username.unique()
man_test_list = test_df.Username.unique()
print("Train: {0}".format(len(man_train_list)))
print("Test: {0}".format(len(man_test_list)))
temp1 = train_df.groupby('Username').count().iloc[:,-1]
temp2 = test_df.groupby('Username').count().iloc[:,-1]
df_man = pd.concat([temp1,temp2], axis = 1, join = 'outer')
df_man.columns = ['train_count','test_count']
print(df_man.head(20))
print(df_man.sort_values(by = 'train_count', ascending = False).head(20))
fig, axes = plt.subplots(1,2, figsize = (12,5))
temp = df_man['train_count'].dropna().sort_values(ascending = False).reset_index(drop = True)
axes[0].plot(temp.index+1, temp.cumsum()/temp.sum())
axes[0].set_title('cumulative train_count');
temp = df_man['test_count'].dropna().sort_values(ascending = False).reset_index(drop = True)
axes[1].plot(temp.index+1, temp.cumsum()/temp.sum())
axes[1].set_title('cumulative test_count');
ix20 = int(len(df_man['train_count'].dropna())*0.2)
print("TRAIN: 20% of man ({0}) responsible for {1:2.2f}% of entries".format(ix20,df_man['train_count'].sort_values(ascending = False).cumsum().iloc[ix20]/df_man['train_count'].sum()*100))
ix20 = int(len(df_man['test_count'].dropna())*0.2)
print("TEST: 20% of man ({0}) responsible for {1:2.2f}% of entries".format(ix20, df_man['test_count'].sort_values(ascending = False).cumsum().iloc[ix20]/df_man['test_count'].sum()*100))
man_not_in_test = set(man_train_list) - set(man_test_list)
man_not_in_train = set(man_test_list) - set(man_train_list)
print("{} man are featured in train but not in test".format(len(man_not_in_test)))
print("{} man are featured in test but not in train".format(len(man_not_in_train)))
train_df.loc[list(man_not_in_test)].head()
## Need to drop them blindly...
#train_df.drop(index = train_df.loc[list(man_not_in_test)].index, inplace=True).shape
print(df_man.loc[list(man_not_in_test)]['train_count'].describe())
print(df_man.loc[list(man_not_in_train)]['test_count'].describe())
df_man.sort_values(by = 'train_count', ascending = False).head(1000).corr()
df_man.sort_values(by = 'train_count', ascending = False).plot.scatter(x = 'train_count', y = 'test_count')
temp = df_man['train_count'].sort_values(ascending = False).head(50000)
temp = pd.concat([temp,temp.cumsum()/df_man['train_count'].sum()*100], axis = 1).reset_index()
temp.columns = ['user_id','count','percentage']
print(temp)
set(train_df.Tag)
man_list = df_man['train_count'].sort_values(ascending = False).index
ixes = train_df.Username.isin(man_list)
df10000 = train_df[ixes][['Username','Tag']]
tags_dummies = pd.get_dummies(df10000.Tag)
df10000 = pd.concat([df10000,tags_dummies[['a', 'c', 'h', 'i', 'j', 'o', 'p', 'r', 's', 'x']]], axis = 1).drop('Tag', axis = 1)
print("The contributors account for {} entries\n".format(len(df10000)))
print(df10000.head(10))
df10000.head(2)
import itertools
last_names = ['Mary', 'Patricia', 'Linda', 'Barbara', 'Elizabeth',
'Jennifer', 'Maria', 'Susan', 'Margaret', 'Dorothy',
'James', 'John', 'Robert', 'Michael', 'William', 'David',
'Richard', 'Charles', 'Joseph', 'Thomas','Smith', 'Johnson', 'Williams', 'Jones', 'Brown', 'Davis', 'Miller', 'Wilson', 'Moore',
'Taylor', 'Anderson', 'Thomas', 'Jackson', 'White', 'Harris', 'Martin', 'Thompson', 'Garcia',
'Martinez', 'Robinson', 'Clark', 'Rodriguez', 'Lewis', 'Lee', 'Walker', 'Hall', 'Allen', 'Young',
'Hernandez', 'King', 'Wright', 'Lopez', 'Hill', 'Scott', 'Green', 'Adams', 'Baker', 'Gonzalez', 'Nelson',
'Carter', 'Mitchell', 'Perez', 'Roberts', 'Turner', 'Phillips', 'Campbell', 'Parker', 'Evans', 'Edwards', 'Collins']
first_names = ['Smith', 'Johnson', 'Williams', 'Jones', 'Brown', 'Davis', 'Miller', 'Wilson', 'Moore',
'Taylor', 'Anderson', 'Thomas', 'Jackson', 'White', 'Harris', 'Martin', 'Thompson', 'Garcia',
'Martinez', 'Robinson', 'Clark', 'Rodriguez', 'Lewis', 'Lee', 'Walker', 'Hall', 'Allen', 'Young',
'Hernandez', 'King', 'Wright', 'Lopez', 'Hill', 'Scott', 'Green', 'Adams', 'Baker', 'Gonzalez', 'Nelson',
'Carter', 'Mitchell', 'Perez', 'Roberts', 'Turner', 'Phillips', 'Campbell', 'Parker', 'Evans', 'Edwards', 'Collins','Mary', 'Patricia', 'Linda', 'Barbara', 'Elizabeth',
'Jennifer', 'Maria', 'Susan', 'Margaret', 'Dorothy',
'James', 'John', 'Robert', 'Michael', 'William', 'David',
'Richard', 'Charles', 'Joseph', 'Thomas']
names = [first + ' ' + last for first,last in (itertools.product(first_names, last_names))]
# shuffle them
np.random.seed(12345)
np.random.shuffle(names)
dictionary = dict(zip(man_list, names))
df10000.loc[df10000.Username.isin(dictionary), 'Username' ] = df10000['Username'].map(dictionary)
print(df10000.head())
# see if the name coincides
print(names[:10])
print(df10000.groupby('Username').count().sort_values(by = 'a', ascending = False).head(10))
gby = pd.concat([df10000.groupby('Username').mean(),df10000.groupby('Username').count()], axis = 1).iloc[:,:-9]
gby.columns = ['a', 'c', 'h', 'i', 'j', 'o', 'p', 'r', 's', 'x', 'count']
gby.sort_values(by = 'count', ascending = False).head(10)[['a', 'c', 'h', 'i', 'j', 'o', 'p', 'r', 's', 'x', 'count']]
gby.sort_values(by = 'count', ascending = False).head(100).drop('count', axis = 1).plot(kind = 'bar', stacked = True, figsize = (15,6))
plt.figure()
gby.sort_values(by = 'count', ascending = False)['count'].head(100).plot(kind = 'bar', figsize = (15,6));
gby.head(2)
pd.concat([train_df['Tag'].value_counts().sort_values(ascending=False),test_df['Tag'].value_counts().sort_values(ascending=False)],sort=False, axis =1,\
keys=['Train_Stats', 'Test_Stats'])
gby['skill'] = gby['r']*1 + gby['o']*2 + gby['h']*3 + gby['s']*4 + gby['a']*5 + gby['i']*6 + gby['p']*7 + gby['j']*8 \
+ gby['c']*9
print("Top performers")
gby.sort_values(by = 'skill', ascending = False).reset_index().head()
print("\nWorst performers")
gby.sort_values(by = 'skill', ascending = False).reset_index().tail()
gby.skill.plot(kind = 'hist', bins=10)
print(gby.mean())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Username
Step2: First of all, let's see how many different Users we have in both datasets
Step3: The number of unique Users in Test is close to 1/2 of that in Train...
Step4: Some Users have entries only in one of the two datasets
Step5: This is clearer if one looks at the plots of the cumulative distributions.
Step6: The users featured in only one of the two datasets usually have very few entries.
Step7: Strong Correlation among them
Step8: Their performances seem very different, even for users with a similar number of entries.
Step9: I think this high diversity should be accounted for when building our predictive model!
|
15,118 | <ASSISTANT_TASK:>
Python Code:
# Import SPI rack and D5b module
from spirack import SPI_rack, D5b_module
import logging
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
COM_port = 'COM4' # COM port of the SPI rack
COM_speed = 1e6 # Baud rate, not of much importance
timeout = 1 # Timeout value in seconds
spi_rack = SPI_rack(COM_port, COM_speed, timeout)
spi_rack.unlock() # Unlock the controller to be able to send data to the rack
spi_rack.unlock()
print('Version: ' + spi_rack.get_firmware_version())
print('Temperature: {:.2f} C'.format(spi_rack.get_temperature()))
battery_v = spi_rack.get_battery()
print('Battery: {:.3f}V, {:.3f}V'.format(battery_v[0], battery_v[1]))
d5b = D5b_module(spi_rack, module=1, reset_voltages=True)
print("Firmware version: {}".format(d5b.get_firmware_version()))
d5b.set_clock_source('internal')
print("Clock source: {}".format(d5b.get_clock_source()))
d5b.set_toggle_time(300)
toggle_value = d5b.get_toggle_time()
print('Toggle time: {} x 100 ns = {} s'.format(toggle_value, round(toggle_value*100e-9, 7)))
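# Convenience sketch (not part of the spirack API): convert a desired toggle time in seconds
# into the multiple-of-100-ns count that set_toggle_time() expects.
def toggle_steps(seconds, step=100e-9):
    return int(round(seconds / step))
print(toggle_steps(30e-6))  # -> 300, the value used above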
d5b.set_toggle_amount(6)
print('Toggle amount: {}'.format(d5b.get_toggle_amount()))
d5b.set_trigger_holdoff_time(30e-6)
print('Holdoff time: {} s'.format(d5b.get_trigger_holdoff_time()))
DAC = 0
d5b.set_DAC_span(DAC, '4V_bi')
print("Span DAC {}: {}".format(DAC, d5b.get_DAC_span(DAC)))
d5b.set_DAC_mode(DAC, 'toggle')
print("DAC {} mode: {}\n".format(DAC, d5b.get_DAC_mode(DAC)))
d5b.set_DAC_voltage(DAC, 0)
d5b.set_DAC_neg_toggle_voltage(DAC, -2)
d5b.set_DAC_pos_toggle_voltage(DAC, 2)
values = d5b.get_DAC_voltages(DAC)
print('Voltage: {:.3f} V\nNegative Toggle: {:.3f} V\nPositive Toggle: {:.3f} V'.format(values[0],values[1],values[2]))
DAC = 1
d5b.set_DAC_span(DAC, '4V_bi')
print("Span DAC {}: {}".format(DAC, d5b.get_DAC_span(DAC)))
d5b.set_DAC_mode(DAC, 'toggle')
print("DAC {} mode: {}\n".format(DAC, d5b.get_DAC_mode(DAC)))
d5b.set_DAC_voltage(DAC, 1)
d5b.set_DAC_neg_toggle_voltage(DAC, -1)
d5b.set_DAC_pos_toggle_voltage(DAC, 3)
values = d5b.get_DAC_voltages(DAC)
print('Voltage: {:.3f} V\nNegative Toggle: {:.3f} V\nPositive Toggle: {:.3f} V'.format(values[0],values[1],values[2]))
spi_rack.trigger_now()
spi_rack.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open the SPI rack connection and unlock the controller. This is necessary after bootup of the controller module. If not unlocked, no communication with the modules can take place. The virtual COM port baud rate is irrelevant as it doesn't change the actual speed. Timeout can be changed, but 1 second is a good value.
Step2: Read back the version of the microcontroller software. This should return 1.6 or higher to be able to use the D5b properly. Also read the temperature and the battery voltages through the C1b; this way we verify that the connection with the SPI Rack is working.
Step3: Create a new D5b module object at the correct module address using the SPI object. By default the module resets the output voltages to 0 Volt. Before it does this, it will read back the current value. If this value is non-zero it will slowly ramp it to zero. If reset_voltages = False then the output will not be changed.
Step4: Configuring the D5b
Step5: The toggle time of the DACs is set in steps of 100 ns (the 10 MHz clock) with a minimum of 30 μs. We need to input a value as a multiple of this 100 ns. The toggle amount should be an even number with a minimum of two.
Step6: The module will start toggling after it receives a trigger from the backplane (either directly controlled from the PC or from another module). If there are any filters and delays in the setup, we might want to wait with toggling the DAC before these are settled. This is what the hold-off time is for. It can be set in steps of 100 ns, with a minimum of 30 μs. This time should be set in seconds.
Step7: Running the D5b
Step8: Here we generate the trigger directly from the PC to demonstrate the usage.
Step9: You should be able to see the following output on your oscilloscope (minus the two top traces)
|
15,119 | <ASSISTANT_TASK:>
Python Code:
def loadContributions(file, withsexe=False):
contributions = pd.read_json(path_or_buf=file, orient="columns")
rows = [];
rindex = [];
for i in range(0, contributions.shape[0]):
row = {};
row['id'] = contributions['id'][i]
rindex.append(contributions['id'][i])
if (withsexe):
if (contributions['sexe'][i] == 'Homme'):
row['sexe'] = 0
else:
row['sexe'] = 1
for question in contributions['questions'][i]:
if (question.get('Reponse')) and (question['texte'][0:5] != 'Savez') :
row[question['titreQuestion']+' : '+question['texte']] = 1
for criteres in question.get('Reponse'):
# print(criteres['critere'].keys())
row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1
rows.append(row)
df = pd.DataFrame(data=rows)
df.fillna(0, inplace=True)
return df
df = loadContributions('../data/EGALITE3.brut.json', True)
df.fillna(0, inplace=True)
df.index = df['id']
#df.to_csv('consultation_an.csv', format='%d')
#df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe']
df.head()
from sklearn.cluster import KMeans
from sklearn import metrics
import numpy as np
X = df.drop('id', axis=1).values
def train_kmeans(nb_clusters, X):
kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X)
return kmeans
#print(kmeans.predict(X))
#kmeans.cluster_centers_
def select_nb_clusters():
perfs = {};
for nbclust in range(2,10):
kmeans_model = train_kmeans(nbclust, X);
labels = kmeans_model.labels_
# from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
# we are in an unsupervised model. cannot get better!
# perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);
perfs[nbclust] = metrics.silhouette_score(X, labels);
print(perfs);
return perfs;
df['clusterindex'] = train_kmeans(4, X).predict(X)
#df
perfs = select_nb_clusters();
# result :
# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}
optimal_nb_clusters = max(perfs, key=perfs.get);
print("optimal_nb_clusters" , optimal_nb_clusters);
km_model = train_kmeans(optimal_nb_clusters, X);
df['clusterindex'] = km_model.predict(X)
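# Additional sanity check (not in the original analysis): how many contributions fall in each cluster
print(df['clusterindex'].value_counts().sort_index())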
lGroupBy = df.groupby(['clusterindex']).mean();
cluster_profile_counts = df.groupby(['clusterindex']).count();
cluster_profile_means = df.groupby(['clusterindex']).mean();
global_counts = df.count()
global_means = df.mean()
cluster_profile_counts.head(10)
df_profiles = pd.DataFrame();
nbclusters = cluster_profile_means.shape[0]
df_profiles['clusterindex'] = range(nbclusters)
for col in cluster_profile_means.columns:
if(col != "clusterindex"):
df_profiles[col] = np.zeros(nbclusters)
for cluster in range(nbclusters):
df_profiles[col][cluster] = cluster_profile_means[col][cluster]
# row.append(df[col].mean());
df_profiles.head()
#print(df_profiles.columns)
intereseting_columns = {};
for col in df_profiles.columns:
if(col != "clusterindex"):
global_mean = df[col].mean()
diff_means_global = abs(df_profiles[col] - global_mean). max();
# print(col , diff_means_global)
if(diff_means_global > 0.05):
intereseting_columns[col] = True
#print(intereseting_columns)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
interesting = list(intereseting_columns.keys())
df_profiles_sorted = df_profiles[interesting].sort_index(axis=1)
df_profiles_sorted.plot.bar(figsize =(1, 1))
df_profiles_sorted.plot.bar(figsize =(16, 8), legend=False)
df_profiles_sorted.T
#df_profiles.sort_index(axis=1).T
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build clustering model
Step2: Build the optimal model and apply it
Step3: Cluster Profiles
|
15,120 | <ASSISTANT_TASK:>
Python Code:
>>> import bokeh
>>> bokeh.__version__
import numpy as np
def mohrs_circle(stress_x=1,stress_y=1,shear=0):
    """A function that calculates the critical values to build a Mohr's Circle."""
# calculate the average stress, min stress and max stress
stress_avg=(stress_x+stress_y)/2
stress_max=stress_avg+(((stress_x-stress_y)/2)**2+shear**2)**0.5
stress_min=stress_avg-(((stress_x-stress_y)/2)**2+shear**2)**0.5
# calculate the radius
R=((((stress_x-stress_y)/2)**2)+shear**2)**0.5 #Also max shear
circle_eqn=((stress_x-stress_avg)**2)-shear**2-R**2
# Construct x and y arrays that build the circle
n=100
t=np.linspace(0,2*np.pi,n+1)
x=R*np.cos(t)+stress_avg
y=R*np.sin(t)
# Construct X and Y arrays that build the line accross Mohr's circle
X = np.array([stress_x, stress_y])
Y = np.array([-shear, shear])
# Declare the center
C = stress_avg
return x,y,X,Y,R,C
x,y,X,Y,R,C = mohrs_circle(2,5,1)
print(X)
print(Y)
print(C)
print(R)
import bokeh
from bokeh.plotting import figure, output_file, show, output_notebook
from bokeh.models import ColumnDataSource
print(f"Bokeh version: {bokeh.__version__}")
x,y,X,Y,R,C = mohrs_circle(2,5,1)
# Create the Bokeh Column Data Source Object from the mohrs_circle() output arrays
circle_source = ColumnDataSource(data=dict(x=x, y=y))
line_source = ColumnDataSource(data=dict(x=X, y=Y))
plot = figure(plot_height=400, plot_width=400, title="Mohr's Circle", tools="pan,reset,save,wheel_zoom")
plot.line('x','y', source=circle_source, line_width=3, line_alpha=0.6)
plot.line('x','y', source=line_source, line_width=3, line_alpha=0.8)
show(plot)
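# A second way to view the plot (sketch): write it to a standalone HTML file instead of showing
# it inline. output_file() is already imported above; the filename is arbitrary.
output_file("mohrs_circle.html")
show(plot)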
from bokeh.io import export_png
export_png(plot, filename="plot.png")
import bokeh
from bokeh.plotting import figure, output_file, show, output_notebook
from bokeh.models import ColumnDataSource
from bokeh.io import export_png
x,y,X,Y,R,C = mohrs_circle(2,5,1)
circle_source = ColumnDataSource(data=dict(x=x, y=y))
line_source = ColumnDataSource(data=dict(x=X, y=Y))
plot = figure(plot_height=400, plot_width=400, title="Mohr's Circle", tools="")
plot.toolbar.logo = None
plot.toolbar_location = None
plot.line('x','y', source=circle_source, line_width=3, line_alpha=0.6)
plot.line('x','y', source=line_source, line_width=3, line_alpha=0.8)
export_png(plot, filename="plot_no_tools.png")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: If your installation was successful, you should see a version number like '1.3.4'.
Step3: Let's test our function and see the resulting output
Step4: We see output that looks reasonable.
Step5: Next, we'll call our mohrs_circle() function so that we have the arrays we need to build the plot.
Step6: Now we'll use the arrays x,y and X,Y to create two Bokeh Columnar Data Sources. Bokeh uses the concept of a columnar data source, sort of like a column in a table or excel file to build plots.
Step7: The next step is to create a Bokeh figure object that we'll call plot. Bokeh figure objects are the basis for Bokeh plots. Lines, input boxes, sliders and other sorts of things can be added to a figure object. Whatever gets added to the figure object will be shown in the final plot.
Step8: Now we can add our circle and line to the plot. This is accomplished by calling plot.line() and specifying the axis 'x','y' and providing our column data sources as keyword arguments. Some line attributes such as line_width and line_alpha can also be specified.
Step9: OK, we've created our plot, but now we need to see it. There are a couple ways of doing so. Remember that Bokeh is primarily designed for creating plots built for the web.
Step10: The plot should look something like the plot below.
Step11: On my Windows 10 laptop, a Windows Defender Firewall window popped up that asked me for access. After I typed in my administrator password, the plot was saved.
|
15,121 | <ASSISTANT_TASK:>
Python Code:
# Use the alarm model to generate data from it.
from pgmpy.utils import get_example_model
from pgmpy.sampling import BayesianModelSampling
alarm_model = get_example_model("alarm")
samples = BayesianModelSampling(alarm_model).forward_sample(size=int(1e5))
samples.head()
# Defining the Bayesian Model structure
from pgmpy.models import BayesianNetwork
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.nodes()
# Fitting the model using Maximum Likelihood Estimator
from pgmpy.estimators import MaximumLikelihoodEstimator
mle = MaximumLikelihoodEstimator(model=model_struct, data=samples)
# Estimating the CPD for a single node.
print(mle.estimate_cpd(node="FIO2"))
print(mle.estimate_cpd(node="CVP"))
# Estimating CPDs for all the nodes in the model
mle.get_parameters()[:10] # Show just the first 10 CPDs in the output
# Verifying that the learned parameters are almost equal.
np.allclose(
alarm_model.get_cpds("FIO2").values, mle.estimate_cpd("FIO2").values, atol=0.01
)
# Fitting the using Bayesian Estimator
from pgmpy.estimators import BayesianEstimator
best = BayesianEstimator(model=model_struct, data=samples)
print(best.estimate_cpd(node="FIO2", prior_type="BDeu", equivalent_sample_size=1000))
# Uniform pseudo count for each state. Can also accept an array of the size of CPD.
print(best.estimate_cpd(node="CVP", prior_type="dirichlet", pseudo_counts=100))
# Learning CPDs for all the nodes in the model. For learning all parameters with BDeU prior, a dict of
# pseudo_counts need to be provided
best.get_parameters(prior_type="BDeu", equivalent_sample_size=1000)[:10]
# Shortcut for learning all the parameters and adding the CPDs to the model.
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.fit(data=samples, estimator=MaximumLikelihoodEstimator)
print(model_struct.get_cpds("FIO2"))
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.fit(
data=samples,
estimator=BayesianEstimator,
prior_type="BDeu",
equivalent_sample_size=1000,
)
print(model_struct.get_cpds("FIO2"))
from pgmpy.estimators import ExpectationMaximization as EM
# Define a model structure with latent variables
model_latent = BayesianNetwork(
ebunch=alarm_model.edges(), latents=["HYPOVOLEMIA", "LVEDVOLUME", "STROKEVOLUME"]
)
# Dataset for latent model which doesn't have values for the latent variables
samples_latent = samples.drop(model_latent.latents, axis=1)
model_latent.fit(samples_latent, estimator=EM)
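# Inspect one of the CPDs learned with EM (illustrative; the exact numbers depend on the sample above)
print(model_latent.get_cpds("HYPOVOLEMIA"))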
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: The Expectation Maximization (EM) algorithm can also learn the parameters when we have some latent variables in the model.
|
15,122 | <ASSISTANT_TASK:>
Python Code:
def square(x):
return x ** 2
type(square)
ribua = square
print(square(5))
print(ribua(5))
ribua is square
def add(num1, num2):
return num1 + num2
def subtract(num1, num2):
return num1 - num2
def multiply(num1, num2):
return num1 * num2
def divide(num1, num2):
return num1 / num2
functions = [add, subtract, multiply, divide]
# Option 1
print(add(5, 2))
# Option 2
math_function = functions[0]
print(math_function(5, 2))
# Option 3 (ugly, but works!)
print(functions[0](5, 2))
for function in functions:
print(function(5, 2))
def calculate(function, num1, num2):
return function(num1, num2)
calculate(divide, 5, 2)
def square(number):
return number ** 2
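# The check below assumes an `apply` function written by the reader as an exercise.
# A minimal sketch of one possible implementation (not necessarily the course's own solution):
def apply(function, iterable):
    return [function(item) for item in iterable]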
square_check = apply(square, [5, -1, 6, -8, 0])
tuple(square_check) == (25, 1, 36, 64, 0)
squared_items = map(square, [1, 6, -1, 8, 0, 3, -3, 9, -8, 8, -7])
print(tuple(squared_items))
def my_map(function, iterable):
for item in iterable:
yield function(item)
numbers = [(2, 4), (1, 4, 2), (1, 3, 5, 6, 2), (3, )]
sums = map(sum, numbers)
print(tuple(sums))
def add_one(number):
return number + 1
incremented = map(add_one, (1, 2, 3))
print(tuple(incremented))
def is_mature(age):
return age >= 18
ages = [0, 1, 4, 10, 20, 35, 56, 84, 120]
mature_ages = filter(is_mature, ages)
print(tuple(mature_ages))
to_sum = [(1, -1), (2, 5), (5, -3, -2), (1, 2, 3)]
sum_is_not_zero = filter(sum, to_sum)
print(tuple(sum_is_not_zero))
to_sum = [0, "", None, 0.0, True, False, "Hello"]
equivalent_to_true = filter(None, to_sum)
print(tuple(equivalent_to_true))
def add(num1, num2):
return num1 + num2
add = lambda num1, num2: num1 + num2
print(add(5, 2))
def is_positive(number):
return number > 0
numbers = [-2, -1, 0, 1, 2]
positive_numbers = filter(is_positive, numbers)
print(tuple(positive_numbers))
numbers = [-2, -1, 0, 1, 2]
positive_numbers = filter(lambda n: n > 0, numbers)
print(tuple(positive_numbers))
closet = [
{'name': 'Peter', 'year_of_birth': 1927, 'gender': 'Male'},
{'name': 'Edmund', 'year_of_birth': 1930, 'gender': 'Male'},
{'name': 'Lucy', 'year_of_birth': 1932, 'gender': 'Female'},
{'name': 'Susan', 'year_of_birth': 1928, 'gender': 'Female'},
{'name': 'Jadis', 'year_of_birth': 0, 'gender': 'Female'},
]
sorted(closet, key=lambda d: d['year_of_birth'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <p style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step7: <p style="text-align
Step8: <p style="text-align
Step9: <p style="text-align
Step10: <span style="text-align
Step11: <p style="text-align
Step12: <p style="text-align
Step13: <p style="text-align
Step14: <p style="text-align
Step15: <p style="text-align
Step16: <p style="text-align
Step17: <p style="text-align
Step18: <div class="align-center" style="display
Step19: <p style="text-align
Step20: <p style="text-align
Step21: <p style="text-align
Step22: <p style="text-align
Step23: <p style="text-align
|
15,123 | <ASSISTANT_TASK:>
Python Code:
cd /usr/local/notebooks
mkdir -p ./workdir
#check seqfile files to process in data directory (make sure you still remember the data directory)
!ls ./data/test/data
Seqfile='./data/test/data/1c.fa'
Cpu='2' # number of maxixum threads for search and alignment
Hmm='./data/SSUsearch_db/Hmm.ssu.hmm' # hmm model for ssu
Gene='ssu'
Script_dir='./SSUsearch/scripts'
Gene_model_org='./data/SSUsearch_db/Gene_model_org.16s_ecoli_J01695.fasta'
Ali_template='./data/SSUsearch_db/Ali_template.silva_ssu.fasta'
Start='577' #pick regions for de novo clustering
End='727'
Len_cutoff='100' # min length for reads picked for the region
Gene_tax='./data/SSUsearch_db/Gene_tax.silva_taxa_family.tax' # silva 108 ref
Gene_db='./data/SSUsearch_db/Gene_db.silva_108_rep_set.fasta'
Gene_tax_cc='./data/SSUsearch_db/Gene_tax_cc.greengene_97_otus.tax' # greengene 2012.10 ref for copy correction
Gene_db_cc='./data/SSUsearch_db/Gene_db_cc.greengene_97_otus.fasta'
# first part of file basename will the label of this sample
import os
Filename=os.path.basename(Seqfile)
Tag=Filename.split('.')[0]
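# e.g. Seqfile '.../1c.fa' -> Filename '1c.fa' -> Tag '1c'; the tag labels all outputs for this sample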
import os
Hmm=os.path.abspath(Hmm)
Seqfile=os.path.abspath(Seqfile)
Script_dir=os.path.abspath(Script_dir)
Gene_model_org=os.path.abspath(Gene_model_org)
Ali_template=os.path.abspath(Ali_template)
Gene_tax=os.path.abspath(Gene_tax)
Gene_db=os.path.abspath(Gene_db)
Gene_tax_cc=os.path.abspath(Gene_tax_cc)
Gene_db_cc=os.path.abspath(Gene_db_cc)
os.environ.update(
{'Cpu':Cpu,
'Hmm':os.path.abspath(Hmm),
'Gene':Gene,
'Seqfile':os.path.abspath(Seqfile),
'Filename':Filename,
'Tag':Tag,
'Script_dir':os.path.abspath(Script_dir),
'Gene_model_org':os.path.abspath(Gene_model_org),
'Ali_template':os.path.abspath(Ali_template),
'Start':Start,
'End':End,
'Len_cutoff':Len_cutoff,
'Gene_tax':os.path.abspath(Gene_tax),
'Gene_db':os.path.abspath(Gene_db),
'Gene_tax_cc':os.path.abspath(Gene_tax_cc),
'Gene_db_cc':os.path.abspath(Gene_db_cc)})
!echo "*** make sure: parameters are right"
!echo "Seqfile: $Seqfile\nCpu: $Cpu\nFilename: $Filename\nTag: $Tag"
cd workdir
mkdir -p $Tag.ssu.out
### start hmmsearch
!echo "*** hmmsearch starting"
!time hmmsearch --incE 10 --incdomE 10 --cpu $Cpu \
--domtblout $Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
-o /dev/null -A $Tag.ssu.out/$Tag.qc.$Gene.sto \
$Hmm $Seqfile
!echo "*** hmmsearch finished"
!python $Script_dir/get-seq-from-hmmout.py \
$Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
$Tag.ssu.out/$Tag.qc.$Gene.sto \
$Tag.ssu.out/$Tag.qc.$Gene
!echo "*** Starting mothur align"
!cat $Gene_model_org $Tag.ssu.out/$Tag.qc.$Gene > $Tag.ssu.out/$Tag.qc.$Gene.RFadded
# mothur does not allow tab between its flags, thus no indents here
!time mothur "#align.seqs(candidate=$Tag.ssu.out/$Tag.qc.$Gene.RFadded, template=$Ali_template, threshold=0.5, flip=t, processors=$Cpu)"
!rm -f mothur.*.logfile
!python $Script_dir/mothur-align-report-parser-cutoff.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.report \
$Tag.ssu.out/$Tag.qc.$Gene.align \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter \
0.5
!python $Script_dir/remove-gap.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa
!python $Script_dir/region-cut.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Start $End $Len_cutoff
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter."$Start"to"$End".cut.lenscreen $Tag.ssu.out/$Tag.forclust
!rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy
!mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db, taxonomy=$Gene_tax, cutoff=50, processors=$Cpu)"
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy.count
!rm -f mothur.*.logfile
!rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy
!mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db_cc, taxonomy=$Gene_tax_cc, cutoff=50, processors=$Cpu)"
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy.count
!rm -f mothur.*.logfile
# check the output directory
!ls $Tag.ssu.out
!echo "*** pipeline runs successsfully :)"
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: README
Step2: Other parameters to set
Step3: Pass hits to mothur aligner
Step4: Get aligned seqs that have > 50% matched to references
Step5: Search is done here (the computationally intensive part). Hooray!
Step6: Classify SSU rRNA gene seqs using SILVA
Step7: Classify SSU rRNA gene seqs with Greengene for copy correction later
Step8: This part of the pipeline (working with one sequence file) finishes here. Next we will combine samples for community analysis (see unsupervised analysis).
|
15,124 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
from adjustText import adjust_text
import numpy as np
import pandas as pd
np.random.seed(0)
x, y = np.random.random((2,30))
fig, ax = plt.subplots()
plt.plot(x, y, 'bo')
texts = [plt.text(x[i], y[i], 'Text%s' %i) for i in range(len(x))]
np.random.seed(0)
x, y = np.random.random((2,30))
fig, ax = plt.subplots()
plt.plot(x, y, 'bo')
texts = [plt.text(x[i], y[i], 'Text%s' %i) for i in range(len(x))]
adjust_text(texts)
fig, ax = plt.subplots()
ax.plot(x, y, 'bo')
texts = [ax.text(x[i], y[i], 'Text%s' %i) for i in range(len(x))]
adjust_text(texts, arrowprops=dict(arrowstyle='->', color='red'))
def plot_scatter(adjust=False):
np.random.seed(100)
x, y = np.random.random((2,400))
fig, ax = plt.subplots()
ax.plot(x, y, 'bo')
texts = [plt.text(x[0], y[0], 'Something')]
if adjust:
plt.title(str(
adjust_text(texts, x, y, arrowprops=dict(arrowstyle='->', color='red'))
)+' iterations')
_ = plot_scatter()
plot_scatter(adjust=True)
together = [(0, 1.0, 0.4), (25, 1.0127692669427917, 0.41), (50, 1.016404709797609, 0.41), (75, 1.1043426359673716, 0.42), (100, 1.1610446924342996, 0.44), (125, 1.1685687930691457, 0.43), (150, 1.3486407784550272, 0.45), (250, 1.4013999168008104, 0.45)]
together.sort()
text = [x for (x,y,z) in together]
eucs = [y for (x,y,z) in together]
covers = [z for (x,y,z) in together]
def plot_eucs_covers():
plt.plot(eucs,covers,color="black", alpha=0.5)
texts = []
for xt, yt, s in zip(eucs, covers, text):
texts.append(plt.text(xt, yt, s))
return texts
_ = plot_eucs_covers()
texts = plot_eucs_covers()
adjust_text(texts, arrowprops=dict(arrowstyle="->", color='r', lw=0.5))
from scipy import interpolate
texts = plot_eucs_covers()
f = interpolate.interp1d(eucs, covers)
x = np.linspace(min(eucs), max(eucs), 500)
y = f(x)
adjust_text(texts, x, y, arrowprops=dict(arrowstyle="->", color='r', lw=0.5))
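# The interpolated (x, y) points are passed to adjust_text as extra positions to avoid, so the
# labels are pushed away from the whole curve rather than only from the original eight data points.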
texts = plot_eucs_covers()
adjust_text(texts, x, y, arrowprops=dict(arrowstyle="->", color='r', lw=0.5),
autoalign='', only_move={'points':'y', 'text':'y'})
data = pd.read_csv('../../figures/volcano_data.csv')
def plot_volcano(adjust=False, **kwargs):
plt.figure(figsize=(7, 10))
threshold = 0.05
xns, yns = data['log2FoldChange'][data['padj']>=threshold], -np.log10(data['pvalue'][data['padj']>=threshold])
plt.scatter(xns, yns, c='grey', edgecolor=(1,1,1,0), label='Not Sig')
xs, ys = data['log2FoldChange'][data['padj']<threshold], -np.log10(data['pvalue'][data['padj']<threshold])
plt.scatter(xs, ys, c='r', edgecolor=(1,1,1,0), label='FDR<5%')
texts = []
for x, y, l in zip(xs, ys, data['Gene'][data['padj']<threshold]):
texts.append(plt.text(x, y, l, size=8))
plt.legend()
plt.xlabel('$log_2(Fold Change)$')
plt.ylabel('$-log_{10}(pvalue)$')
if adjust:
plt.title('%s iterations' % adjust_text(texts,
arrowprops=dict(arrowstyle="-", color='k', lw=0.5), **kwargs))
_ = plot_volcano()
plot_volcano(adjust=True)
mtcars = pd.read_csv('../../figures/mtcars.csv')
def plot_mtcars(adjust=False, *args, **kwargs):
plt.figure(figsize=(9, 6))
plt.scatter(mtcars['wt'], mtcars['mpg'], s=15, c='r', edgecolors=(1,1,1,0))
texts = []
for x, y, s in zip(mtcars['wt'], mtcars['mpg'], mtcars['Car']):
texts.append(plt.text(x, y, s, size=9))
plt.xlabel('wt')
plt.ylabel('mpg')
if adjust:
plt.title('%s iterations' % adjust_text(texts, arrowprops=dict(arrowstyle="-", color='k', lw=0.5),
save_steps=False, save_prefix='/home/ilya/adjustText/examples/mtcars', **kwargs))
else:
plt.title('Original')
plot_mtcars()
plot_mtcars(adjust=True)
import matplotlib.dates as mdates
import locale
locale.setlocale(locale.LC_ALL,'en_GB.utf8') #I have it set to Russian, without this dates don't work
days, impressions = np.loadtxt("../../figures/page-impressions.csv", unpack=True,
converters={ 0: mdates.bytespdate2num('%Y-%m-%d')})
def plot_dates(adjust=False):
plt.figure(figsize=(10, 7))
plt.plot_date(x=days, y=impressions)
texts = []
for x, y in zip(days, impressions):
texts.append(plt.text(x, y, int(y)))
if adjust:
plt.title(str(adjust_text(texts, arrowprops=dict(arrowstyle="->", color='r', lw=0.5)))+' iterations')
plot_dates(adjust=False)
plot_dates(adjust=True)
data = [{'rf': [[10, 0.682312925170068], [20, 0.714904143475572], [30, 0.729107400535972], [40, 0.718944547515976], [50, 0.7372706658420943], [60, 0.7291486291486291], [70, 0.7332302618016904], [80, 0.7291486291486291], [90, 0.7291486291486291]], 'besttree': 0.7372706658420943}, {'rf': [[10, 0.6576994434137291], [20, 0.6760874046588332], [30, 0.7086786229643371], [40, 0.6943929086786229], [50, 0.6984951556380127], [60, 0.6903731189445474], [70, 0.7025974025974027], [80, 0.7128014842300556], [90, 0.7086786229643373]], 'besttree': 0.7128014842300556}, {'rf': [[10, 0.5763347763347764], [20, 0.5783962069676354], [30, 0.5946402803545661], [40, 0.5988455988455988], [50, 0.6028653885796743], [60, 0.6089466089466089], [70, 0.6171098742527313], [80, 0.6130488559059988], [90, 0.6130488559059988]], 'besttree': 0.6171098742527313}, {'rf': [[10, 0.6741084312512883], [20, 0.7025767882910741], [30, 0.6964337250051535], [40, 0.7127396413110699], [50, 0.7167594310451453], [60, 0.712677798392084], [70, 0.7269635126777982], [80, 0.7351061636775922], [90, 0.7350855493712636]], 'besttree': 0.7351061636775922}, {'rf': [[10, 0.6719645433931148], [20, 0.7006184291898577], [30, 0.7066378066378066], [40, 0.7107606679035251], [50, 0.7086580086580087], [60, 0.7269841269841271], [70, 0.718841475984333], [80, 0.7249433106575964], [90, 0.7188827045969903]], 'besttree': 0.7269841269841271}, {'rf': [[10, 0.5722119150690579], [20, 0.5641723356009071], [30, 0.5845186559472274], [40, 0.5947227375798805], [50, 0.6048649762935477], [60, 0.6049062049062048], [70, 0.6048443619872191], [80, 0.6007833436404865], [90, 0.6048855905998763]], 'besttree': 0.6049062049062048}, {'rf': [[10, 0.54582560296846], [20, 0.5478664192949907], [30, 0.5499278499278499], [40, 0.564172335600907], [50, 0.5621109049680477], [60, 0.5621109049680478], [70, 0.5600700886415172], [80, 0.5580705009276438], [90, 0.5600907029478458]], 'besttree': 0.564172335600907}, {'rf': [[10, 0.6171304885590599], [20, 0.6435992578849722], [30, 0.6354566068851784], [40, 0.6577819006390435], [50, 0.6618429189857762], [60, 0.6557410843125129], [70, 0.6638425066996495], [80, 0.65578231292517], [90, 0.6618841475984334]], 'besttree': 0.6638425066996495}, {'rf': [[10, 0.6578643578643579], [20, 0.6944135229849515], [30, 0.69853638425067], [40, 0.7005565862708719], [50, 0.6985569985569986], [60, 0.6985363842506699], [70, 0.6964955679241392], [80, 0.6923933209647496], [90, 0.7005565862708719]], 'besttree': 0.7005565862708719}]
fig = plt.figure(facecolor="w",figsize=(15,15))
for i,result in enumerate(data):
# Let's plot the bars
ax=fig.add_subplot(3,3,i+1)
x = [item[0] for item in result['rf']]
y = [item[1] for item in result['rf']]
ax.axis(ymin=0.5,ymax=0.8,xmin=4,xmax=100)
bars=ax.bar(x,y,color='green',tick_label=x,width=2)
# And add the labels, while changing the colour of some of the bars
texts=[]
for j,rect in enumerate(bars):
left = rect.get_x()+1
top = rect.get_y()+rect.get_height()
texts.append(ax.text(left,top,'%.3f'%y[j]))
if y[j] == result['besttree']:
rect.set_facecolor('red')
    # Now adjust the text. We don't want to move in the x direction and we want the labels to stay vertically aligned with the bars.
    # I also noticed that having a weaker force to repel texts from each other makes the figure nicer.
ax.set_ylabel('CA')
ax.set_title('%s iterations' % adjust_text(texts, add_objects=bars,
autoalign=False, only_move={'points':'y', 'text':'y', 'objects':'y'},
ha='center', va='bottom'))
from matplotlib import gridspec
d1={'Afrikaans': 1.35, 'Amharic': 9.51, 'AncientGreek': 11.62, 'Arabic': 9.22, 'Armenian': 2.92, 'Bambara': 0.1, 'Basque': 8.46, 'Belarusian': 2.48, 'Breton': 24.0, 'Bulgarian': 5.12, 'Buryat': 0.0, 'Cantonese': 4.5, 'Catalan': 1.97, 'Chinese': 0.05, 'Coptic': 4.41, 'Croatian': 4.17, 'Czech': 8.74, 'Danish': 14.58, 'Dutch': 15.3, 'English': 0.79, 'Erzya': 21.74, 'Estonian': 17.1, 'Faroese': 8.92, 'Finnish': 5.82, 'French': 2.44, 'Galician': 7.0, 'German': 20.6, 'Gothic': 11.11, 'Greek': 4.88, 'Hebrew': 1.48, 'Hindi': 0.16, 'Hungarian': 7.46, 'Indonesian': 1.12, 'Irish': 98.16, 'Italian': 6.8, 'Japanese': 0.0, 'Kazakh': 0.46, 'Komi': 17.24, 'Korean': 0.04, 'Kurmanji': 0.37, 'Latin': 6.14, 'Latvian': 3.34, 'Lithuanian': 0.98, 'Maltese': 0.0, 'Marathi': 1.95, 'Naija': 0.11, 'NorthSami': 4.48, 'Norwegian': 12.59, 'OldChurchSlavonic': 13.42, 'OldFrench': 10.6, 'Persian': 2.45, 'Polish': 15.49, 'Portuguese': 3.1, 'Romanian': 12.75, 'Russian': 5.9, 'Sanskrit': 9.46, 'Serbian': 9.7, 'Slovak': 11.67, 'Slovenian': 12.08, 'Spanish': 3.41, 'Swedish': 13.36, 'SwedishSign': 18.89, 'Tagalog': 100.0, 'Tamil': 5.61, 'Telugu': 0.0, 'Thai': 0.0, 'Turkish': 9.95, 'Ukrainian': 5.39, 'UpperSorbian': 5.66, 'Urdu': 0.21, 'Uyghur': 1.96, 'Vietnamese': 0}
d2={'Afrikaans': 2.63, 'Amharic': 0.59, 'AncientGreek': 41.61, 'Arabic': 73.29, 'Armenian': 20.6, 'Bambara': 0.0, 'Basque': 18.53, 'Belarusian': 33.54, 'Breton': 53.99, 'Bulgarian': 30.08, 'Buryat': 0.38, 'Cantonese': 5.31, 'Catalan': 23.57, 'Chinese': 0.24, 'Coptic': 28.02, 'Croatian': 28.64, 'Czech': 37.94, 'Danish': 14.95, 'Dutch': 21.98, 'English': 9.93, 'Erzya': 42.54, 'Estonian': 38.92, 'Faroese': 16.07, 'Finnish': 23.02, 'French': 5.85, 'Galician': 19.7, 'German': 19.77, 'Gothic': 49.52, 'Greek': 35.74, 'Hebrew': 35.52, 'Hindi': 0.39, 'Hungarian': 28.8, 'Indonesian': 4.5, 'Irish': 98.64, 'Italian': 25.96, 'Japanese': 0.0, 'Kazakh': 0.44, 'Komi': 20.17, 'Korean': 0.04, 'Kurmanji': 0.46, 'Latin': 32.51, 'Latvian': 37.48, 'Lithuanian': 39.38, 'Maltese': 10.34, 'Marathi': 2.78, 'Naija': 4.44, 'NorthSami': 32.38, 'Norwegian': 19.04, 'OldChurchSlavonic': 53.81, 'OldFrench': 35.13, 'Persian': 0.73, 'Polish': 36.67, 'Portuguese': 13.93, 'Romanian': 30.23, 'Russian': 33.52, 'Sanskrit': 31.1, 'Serbian': 25.7, 'Slovak': 39.69, 'Slovenian': 31.77, 'Spanish': 22.06, 'Swedish': 19.8, 'SwedishSign': 18.69, 'Tagalog': 97.92, 'Tamil': 0.55, 'Telugu': 0.95, 'Thai': 0.15, 'Turkish': 4.67, 'Ukrainian': 32.81, 'UpperSorbian': 23.85, 'Urdu': 0.18, 'Uyghur': 4.06, 'Vietnamese': 1.62}
langnameGroup={"AncientGreek":"Indo-European", "Arabic":"Semitic", "Basque":"isolate", "Belarusian":"Indo-European-Baltoslavic", "Bulgarian":"Indo-European-Baltoslavic", "Cantonese":"Sino-Austronesian", "Catalan":"Indo-European-Romance", "Chinese":"Sino-Austronesian", "Coptic":"Afroasiatic", "Croatian":"Indo-European-Baltoslavic", "Czech":"Indo-European-Baltoslavic", "Danish":"Indo-European-Germanic", "Dutch":"Indo-European-Germanic", "English":"Indo-European-Germanic", "Estonian":"Agglutinating", "Finnish":"Agglutinating", "French":"Indo-European-Romance", "Galician":"Indo-European-Romance", "German":"Indo-European-Germanic", "Gothic":"Indo-European-Germanic", "Greek":"Indo-European", "Hebrew":"Semitic", "Hindi":"Indo-European", "Hungarian":"Agglutinating", "Indonesian":"Sino-Austronesian", "Irish":"Indo-European", "Italian":"Indo-European-Romance", "Japanese":"Agglutinating", "Kazakh":"Agglutinating", "Korean":"Agglutinating", "Latin":"Indo-European-Romance", "Latvian":"Indo-European-Baltoslavic", "Lithuanian":"Indo-European-Baltoslavic", "Norwegian":"Indo-European-Germanic", "OldChurchSlavonic":"Indo-European-Baltoslavic", "Persian":"Indo-European", "Polish":"Indo-European-Baltoslavic", "Portuguese":"Indo-European-Romance", "Romanian":"Indo-European-Romance", "Russian":"Indo-European-Baltoslavic", "Sanskrit":"Indo-European", "Slovak":"Indo-European-Baltoslavic", "Slovenian":"Indo-European-Baltoslavic", "Spanish":"Indo-European-Romance", "Swedish":"Indo-European-Germanic", "Tamil":"Dravidian", "Turkish":"Agglutinating", "Ukrainian":"Indo-European-Baltoslavic", "Urdu":"Indo-European", "Uyghur":"Agglutinating", "Vietnamese":"Sino-Austronesian",'Afrikaans':'Indo-European-Germanic', 'SwedishSign':'Indo-European-Germanic', 'Kurmanji':'Indo-European', 'NorthSami':'Agglutinating', 'UpperSorbian':"Indo-European-Baltoslavic", 'Buryat':'Agglutinating', 'Telugu':'Dravidian', 'Serbian':"Indo-European-Baltoslavic", 'Marathi':'Indo-European','Naija':"Indo-European-Germanic", "OldFrench":"Indo-European-Romance", "Maltese":"Semitic", "Thai":"Sino-Austronesian","Amharic":"Afroasiatic", 'Erzya': 'Agglutinating', 'Faroese':"Indo-European-Germanic", 'Tagalog':"Sino-Austronesian", 'Bambara':'Niger-Congo', 'Breton':"Indo-European", 'Armenian':"Indo-European", 'Komi': 'Agglutinating'}
groupColors={"Indo-European-Romance":'brown',"Indo-European-Baltoslavic":'purple',"Indo-European-Germanic":'olive',"Indo-European":'royalBlue',"Sino-Austronesian":'limeGreen', "Agglutinating":'red'}
groupMarkers={"Indo-European-Romance":'<',"Indo-European-Baltoslavic":'^',"Indo-European-Germanic":'v',"Indo-European":'>',"Sino-Austronesian":'s', "Agglutinating":'+'}
col1 = pd.Series(d1)
col2 = pd.Series(d2)
c=[groupColors.get(langnameGroup[label],'k') for label in col1.index]
m=[groupMarkers.get(langnameGroup[label],'o') for label in col1.index]
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 2, width_ratios=[1, 25], height_ratios=[25, 1])
aa = plt.subplot(gs[0])
ax = plt.subplot(gs[1])
bb = plt.subplot(gs[3])
li,la = (-15,100)
plt.xlim(li,la)
plt.ylim(li,la)
ax.set_xlim([li,la])
ax.set_ylim([li,la])
aa.set_xlim([0, 1])
aa.set_ylim([li,la])
bb.set_ylim([0, 1])
ax.set_xticks([0, 50,100], minor=False) # only the 50% is major
ax.set_xticks([0,25,50,75,100], minor=True) # all 10th are minor
ax.set_yticks([0, 50,100], minor=False) # only the 50% is major
ax.set_yticks([0,25,50,75,100], minor=True) # all 10th are minor
ax.grid(which='both', axis='both',alpha=.5) # draw grid
ax.plot([0, 1], [0, 1], transform=ax.transAxes, alpha=.5, color="gray") # diagonal
aa.set_xticks([], minor=False)
aa.set_yticks([], minor=False)
bb.set_xticks([], minor=False)
bb.set_yticks([], minor=False)
for xx, yy, cc, mm in zip(col1, col2, c, m):
ax.scatter(xx, yy, marker=mm, c=cc)
aa.scatter([0.5 for _ in col1], col2, c=c, alpha=0.5)
bb.scatter(col1, [0.5 for _ in col2], c=c, alpha=0.5)
texts=[]
for label, x, y in zip(col1.index, col1, col2):
texts+=[ax.text(x, y, label, color=groupColors.get(langnameGroup[label],'k'), fontsize=8)] # for adjustText
adjust_text(texts, col1.values, col2.values, ax=ax, precision=0.001,
expand_text=(1.01, 1.05), expand_points=(1.01, 1.05),
force_text=(0.01, 0.25), force_points=(0.01, 0.25),
arrowprops=dict(arrowstyle='-', color='gray', alpha=.5))
from matplotlib import patches
d={'Afrikaans': 1.93, 'Amharic': 44.56, 'AncientGreek': 33.06, 'Arabic': 65.9, 'Armenian': 20.16,
'Bambara': 0.13, 'Basque': 20.4, 'Belarusian': 26.28, 'Breton': 53.21, 'Bulgarian': 25.77, 'Buryat': 0.4,
'Cantonese': 4.4, 'Catalan': 19.14, 'Chinese': 0.19, 'Coptic': 11.67, 'Croatian': 24.72, 'Czech': 36.6,
'Danish': 16.38, 'Dutch': 21.72, 'English': 4.9, 'Erzya': 40.76, 'Estonian': 36.45,
'Faroese': 14.19, 'Finnish': 17.88, 'French': 4.67,
'Galician': 17.52, 'German': 21.45, 'Gothic': 34.23, 'Greek': 34.27,
'Hebrew': 28.75, 'Hindi': 1.4, 'Hungarian': 27.91, 'Indonesian': 2.6,
'Irish': 87.93, 'Italian': 22.75, 'Japanese': 0.0, 'Kazakh': 0.89, 'Komi': 19.34, 'Korean': 0.35, 'Kurmanji': 0.61,
'Latin': 27.5, 'Latvian': 24.22, 'Lithuanian': 28.8, 'Maltese': 7.26, 'Marathi': 2.64,
'Naija': 2.29, 'NorthSami': 21.18, 'Norwegian': 19.43, 'OldChurchSlavonic': 37.51, 'OldFrench': 20.14,
'Persian': 0.99, 'Polish': 30.55, 'Portuguese': 12.84, 'Romanian': 29.0, 'Russian': 29.15,
'Sanskrit': 20.09, 'Serbian': 24.1, 'Slovak': 33.18, 'Slovenian': 31.72, 'Spanish': 19.09, 'Swedish': 18.84, 'SwedishSign': 19.23,
'Tagalog': 98.18, 'Tamil': 2.95, 'Telugu': 0.85, 'Thai': 0.06, 'Turkish': 6.38,
'Ukrainian': 26.38, 'UpperSorbian': 22.03, 'Urdu': 0.74, 'Uyghur': 3.58, 'Vietnamese': 1.78}
langnameGroup={"AncientGreek":"Indo-European", "Arabic":"Semitic", "Basque":"isolate", "Belarusian":"Indo-European-Baltoslavic", "Bulgarian":"Indo-European-Baltoslavic", "Cantonese":"Sino-Austronesian", "Catalan":"Indo-European-Romance", "Chinese":"Sino-Austronesian", "Coptic":"Afroasiatic", "Croatian":"Indo-European-Baltoslavic", "Czech":"Indo-European-Baltoslavic", "Danish":"Indo-European-Germanic", "Dutch":"Indo-European-Germanic", "English":"Indo-European-Germanic", "Estonian":"Agglutinating", "Finnish":"Agglutinating", "French":"Indo-European-Romance", "Galician":"Indo-European-Romance", "German":"Indo-European-Germanic", "Gothic":"Indo-European-Germanic", "Greek":"Indo-European", "Hebrew":"Semitic", "Hindi":"Indo-European", "Hungarian":"Agglutinating", "Indonesian":"Sino-Austronesian", "Irish":"Indo-European", "Italian":"Indo-European-Romance", "Japanese":"Agglutinating", "Kazakh":"Agglutinating", "Korean":"Agglutinating", "Latin":"Indo-European-Romance", "Latvian":"Indo-European-Baltoslavic", "Lithuanian":"Indo-European-Baltoslavic", "Norwegian":"Indo-European-Germanic", "OldChurchSlavonic":"Indo-European-Baltoslavic", "Persian":"Indo-European", "Polish":"Indo-European-Baltoslavic", "Portuguese":"Indo-European-Romance", "Romanian":"Indo-European-Romance", "Russian":"Indo-European-Baltoslavic", "Sanskrit":"Indo-European", "Slovak":"Indo-European-Baltoslavic", "Slovenian":"Indo-European-Baltoslavic", "Spanish":"Indo-European-Romance", "Swedish":"Indo-European-Germanic", "Tamil":"Dravidian", "Turkish":"Agglutinating", "Ukrainian":"Indo-European-Baltoslavic", "Urdu":"Indo-European", "Uyghur":"Agglutinating", "Vietnamese":"Sino-Austronesian",'Afrikaans':'Indo-European-Germanic', 'SwedishSign':'Indo-European-Germanic', 'Kurmanji':'Indo-European', 'NorthSami':'Agglutinating', 'UpperSorbian':"Indo-European-Baltoslavic", 'Buryat':'Agglutinating', 'Telugu':'Dravidian', 'Serbian':"Indo-European-Baltoslavic", 'Marathi':'Indo-European','Naija':"Indo-European-Germanic", "OldFrench":"Indo-European-Romance", "Maltese":"Semitic", "Thai":"Sino-Austronesian","Amharic":"Afroasiatic", 'Erzya': 'Agglutinating', 'Faroese':"Indo-European-Germanic", 'Tagalog':"Sino-Austronesian", 'Bambara':'Niger-Congo', 'Breton':"Indo-European", 'Armenian':"Indo-European", 'Komi': 'Agglutinating'}
groupColors={"Indo-European-Romance":'brown',"Indo-European-Baltoslavic":'purple',"Indo-European-Germanic":'olive',"Indo-European":'royalBlue',"Sino-Austronesian":'limeGreen', "Agglutinating":'red'}
df = pd.Series(d)
c=[groupColors.get(langnameGroup[label],'k') for label in df.index]
fig, aa = plt.subplots(figsize=(7, 9))
aa.axes.get_xaxis().set_visible(False)
aa.set_xlim(-10, 0.1)
aa.set_ylim(-1, 101)
aa.scatter([0 for _ in df], df, c=c, alpha=0.5, edgecolors='none')
aa.spines['left'].set_visible(False)
aa.spines['top'].set_visible(False)
aa.spines['bottom'].set_visible(False)
aa.yaxis.set_label_position('right')
aa.yaxis.set_ticks_position('right')
plt.tight_layout()
patch = patches.Rectangle((-0.1, 0), 0.2, 100, fill=False, alpha=0) # We add a rectangle to make sure the labels don't move to the right
aa.add_patch(patch)
texts=[]
np.random.seed(0)
for label, y in zip(df.index, df):
texts+=[aa.text(-.1+np.random.random()/1000, y, label, # A small random shift prevents labels which have exactly the same coordinates from being stuck together
# (not a problem in this example really, but wanted to mention this possibility)
color=groupColors.get(langnameGroup[label],'k'), fontsize=8)]
adjust_text(texts, [0 for _ in df], df, ha='right', va='center', add_objects=[patch],
expand_text=(1.05, 1), # We want them to be quite compact, so reducing expansion makes sense
force_text=(0.75, 0), force_objects=(1, 0), #With default forces it takes a very long time to converge, but higher values still produce very nice output
autoalign=False, only_move={'points':'x', 'text':'x', 'objects':'x'})
import cartopy.crs as ccrs
fig = plt.figure(figsize=[10, 8])
ax = plt.subplot(1, 1, 1, projection=ccrs.NorthPolarStereo())
ax.coastlines(resolution='50m')
ax.set_extent([-180,180,53,90],crs=ccrs.PlateCarree())
np.random.seed(0)
loclon = np.random.randint(-180, 180, 25)
loclat = np.random.randint(53, 90, 25)
ax.scatter(loclon, loclat,s=75,
linewidth=2, marker='o',alpha=0.3,edgecolors='black',c='red',
transform=ccrs.PlateCarree() )
texts = [ax.text(loclon[i], loclat[i], 'Location%s' %i, ha='center', color='r',
va='center',transform=ccrs.PlateCarree()) for i in range(len(loclon))]
adjust_text(texts, arrowprops=dict(arrowstyle='->', color='blue'), ax=ax)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A very simple example also shown in the wiki to simply introduce the call signature of adjust_text
Step2: First a very simple example with labelling one point out of many
    Step3: As you can see, the label overlaps many points here and it is difficult to understand what it says. It is also not clear which of the points it corresponds to. Let's try to adjust its location with default parameters while adding an arrow to connect it to the original point.
    Step4: Here the number we see in the output is the number of iterations the algorithm used to adjust the location of the text. The result is much better! The text doesn't overlap anything and is legible while clearly corresponding to one particular point.
Step5: Now let's just apply the text adjustment with all default parameters and define the arrow properties we like.
Step6: Now one of the labels overlays the line and two are very close to it, that's not pretty. We'll create some virtual points along the lines using linear interpolation to repel the labels from the graph itself.
Step7: Nice! But what if we don't want the labels to move along the x axis? Let's allow movement only along y axis. (Looks like it's currently broken?)
    Step8: Now this is quite good! Getting the distance of the labels to the line exactly right is difficult because of the artificial way we are repelling them and the different shape of the line near each label.
Step9: We can't read half of the labels in the left half! Let's adjust them
Step10: And this is very nice now!
Step11: Aaa everything is overlapping!!! Crazy. Let's fix it.
Step12: Now this is pretty good!
Step13: Labelling bars above them
Step14: Looks nice! I guess it can be quite a common use case.
Step15: We can also look at distribution of languages across just one of these features, and we can use adjust_text to nicely arrange the labels along one axis.
Step16: adjust_text also works with cartopy axes!
|
15,125 | <ASSISTANT_TASK:>
Python Code::
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error, mean_absolute_error, max_error, explained_variance_score, mean_absolute_percentage_error
# initialise & fit Lasso regression model with alpha set to 0.5
model = Lasso(alpha=0.5)
model.fit(X_train, y_train)
# create dictionary that contains the feature coefficients
coef = dict(zip(X_train.columns, model.coef_.T))
print(coef)
# make prediction for test data & evaluate performance
y_pred = model.predict(X_test)
print('RMSE:',mean_squared_error(y_test, y_pred, squared = False))
print('MAE:',mean_absolute_error(y_test, y_pred))
print('MAPE:',mean_absolute_percentage_error(y_test, y_pred))
print('Max Error:',max_error(y_test, y_pred))
print('Explained Variance Score:',explained_variance_score(y_test, y_pred))
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,126 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pygslib
pygslib.version.__version__
# we read the file header
# use >>> print. pygslib.gslib.read_header.__doc__ for help
#the data file is in the working directory
mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
#adding elevation and a dummy BHID
mydata['Zlocation']=0
mydata['bhid']=1
print (' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5))
#define a direction:
import numpy as np
import matplotlib.pyplot as plt
def hdir(r=1, ndir=18, refdir=0):
theta = np.linspace(0, np.pi, ndir+1)[:-1]
ax = plt.subplot(111, polar=True)
# make clockwise like maps
ax.set_theta_direction(-1)
# make the plot to point north
ax.set_theta_zero_location("N")
for t in theta:
ax.plot([0,t], [0,r], color='r', linewidth=1)
ax.plot([0,t+np.pi], [0,r], color='r', linewidth=1)
ax.grid(True)
ax.invert_yaxis()
ax.set_title("A line plot on a polar axis", va='bottom')
ax.plot([0,refdir], [0,r], color='b', linewidth=3)
ax.plot([0,refdir+np.pi], [0,r], color='b', linewidth=3)
plt.show()
return np.rad2deg(theta) #the last direction
azm= hdir(r=1, ndir=18, refdir=0)
plt.show()
print (len (azm), azm)
ndir=18
azm= hdir(r=1, ndir=ndir, refdir=0) #this produces a plot + directions
atol= np.ones(ndir)*180/ndir/2
dip = np.zeros(ndir)
bandwh = np.ones(ndir)*5000
bandwd = np.ones(ndir)*5000
dtol = np.ones(ndir)*5000
sills = [np.var(mydata['Primary'])]
ivtail = np.ones(ndir)
ivhead = np.ones(ndir)
ivtype = np.ones(ndir)*7
parameters = {
'x' : mydata['Xlocation'] , # X coordinates, array('f') with bounds (nd), nd is number of data points
'y' : mydata['Ylocation'], # Y coordinates, array('f') with bounds (nd)
'z' : mydata['Zlocation'], # Z coordinates, array('f') with bounds (nd)
'bhid' : mydata['bhid'], # bhid for downhole variogram, array('i') with bounds (nd)
'vr' : mydata['Primary'], # Variables, array('f') with bounds (nd,nv), nv is number of variables
'tmin' : -1.0e21, # trimming limits, float
'tmax' : 1.0e21, # trimming limits, float
'nlag' : 8, # number of lags, int
'xlag' : 6, # lag separation distance, float
'xltol' : 3, # lag tolerance, float
'azm' : azm, # azimuth, array('f') with bounds (ndir)
'atol' : atol, # azimuth tolerance, array('f') with bounds (ndir)
'bandwh' : bandwh, # bandwith h, array('f') with bounds (ndir)
'dip' : dip, # dip, array('f') with bounds (ndir)
'dtol' : dtol, # dip tolerance, array('f') with bounds (ndir)
'bandwd' : bandwd, # bandwith dip, array('f') with bounds (ndir)
'isill' : 0, # standardize sills? (0=no, 1=yes), int
'sills' : sills, # variance used to std the sills, array('f') with bounds (nv)
'ivtail' : ivtail, # tail var., array('i') with bounds (nvarg), nvarg is number of variograms
'ivhead' : ivhead, # head var., array('i') with bounds (nvarg)
'ivtype' : ivtype, # variogram type, array('i') with bounds (nvarg)
'maxclp' : 50000} # maximum number of variogram point cloud to use, input int
#Now we are ready to calculate the variogram
pdis,pgam, phm,ptm,phv,ptv,pnump, cldi, cldj, cldg, cldh = pygslib.gslib.gamv(parameters)
#get parameters
print (type(pdis), pdis.shape)
# for example
nvrg = pdis.shape[0]
ndir = pdis.shape[1]
nlag = pdis.shape[2]-2
%matplotlib inline
import matplotlib.pyplot as plt
# knowing that this data is stored in a 3D matrix (nvarg, ndir, nlag+2)
#plot variogram 1 , direction 1,2,3
v=1
ndir
for d in range(ndir):
dip=parameters['dip'][d]
azm=parameters['azm'][d]
plt.plot (pdis[v, d, 1:], pgam[v, d, 1:], '-o', label=str(dip) + '-->' + str(azm))
plt.legend()
plt.grid(True)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
#get an array of direction + complementary direction = direction + 180
ndir2= len(parameters['azm'])*2
azimuths = np.zeros (ndir2 + 1) #this is to repeat the first direction
zeniths = pdis[0, 0, 1:]
values = np.zeros ((ndir2 +1 ,len(pdis[0, 0, 1:])))
# use integer division so these values can be used as array indices (works in Python 2 and 3)
for i in range (ndir2//2):
    azimuths[i] = parameters['azm'][i]
    azimuths[i+ndir2//2] = parameters['azm'][i]+180
    for ii in range(len(pdis[0, 0, 1:])):
        values[i,ii] = pgam[0, i, ii+1]
        values[i+ndir2//2,ii] = pgam[0, i, ii+1]
azimuths[ndir2] = azimuths[ndir2//2]+180
values[ndir2,:] = values[0,:]
#prepare grid
r, theta = np.meshgrid(zeniths, azimuths)
#-- Plot... ------------------------------------------------
ax = plt.subplot(111, polar=True)
# make clockwise like maps
ax.set_theta_direction(-1)
# make the plot to point north
ax.set_theta_zero_location("N")
cont=ax.contourf(np.deg2rad(theta), r, values)
ax.contour(np.deg2rad(theta), r, values, colors='k')
ax.plot(np.deg2rad(theta.flatten()), r.flatten(), color= '0.6', linewidth=0.2)
ax.set_rmax(50)
plt.colorbar(cont)
plt.show()
#-- Plot... with zoom ------------------------------------------------
ax = plt.subplot(111, polar=True)
# make clockwise like maps
ax.set_theta_direction(-1)
# make the plot to point north
ax.set_theta_zero_location("N")
cont=ax.contourf(np.deg2rad(theta), r, values)
ax.contour(np.deg2rad(theta), r, values, colors='k')
ax.plot(np.deg2rad(theta.flatten()), r.flatten(), color= '0.6', linewidth=0.2)
ax.set_rmax(10)
plt.colorbar(cont)
plt.show()
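# --- Added check (optional sketch, not part of the original notebook) ---
# Collect the directional variograms into a tidy pandas DataFrame; a table of
# (azimuth, lag distance, gamma) is often easier to inspect for anisotropy than
# the raw 3D arrays returned by gamv.
import pandas as pd
rows = []
for d in range(ndir):
    for l in range(1, nlag + 2):
        rows.append({'azimuth': parameters['azm'][d],
                     'lag_dist': pdis[0, d, l],
                     'gamma': pgam[0, d, l]})
vario_df = pd.DataFrame(rows)
print(vario_df.head())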
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining and plotting direction over a plane
Step2: Variogram map
|
15,127 | <ASSISTANT_TASK:>
Python Code:
%time
fba_result = simulation.fba(model)
%time
pfba_result = simulation.pfba(model)
model.reactions.PGI
model.reactions.PGI.knock_out()
model.reactions.PGI
%time
fba_knockout_result = simulation.fba(model)
fba_knockout_result[model.objective]
pfba_knockout_result = simulation.pfba(model)
pfba_knockout_result[model.objective]
lmoma_result["2 * EX_glc_lp_e_rp_"]
%time
lmoma_result = simulation.lmoma(model, reference=pfba_result.fluxes)
lmoma_result[model.objective]
%time
room_result = simulation.room(model, reference=pfba_result.fluxes)
room_result[model.objective]
room_result
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setp 2
Step2: Simulate using different methods
Step3: MOMA and ROOM relly on a reference (wild-type) flux distribution and we can use the one previously computed.
|
15,128 | <ASSISTANT_TASK:>
Python Code:
-- Campainha IoT - LHC - v1.1
-- ESP Inicializa pinos, Configura e Conecta no Wifi, Cria conexão TCP
-- e na resposta de um "Tocou" coloca o ESP em modo DeepSleep para economizar bateria.
-- Se nenhuma resposta for recebida em 15 segundos coloca o ESP em DeepSleep.
led_pin = 3
status_led = gpio.LOW
ip_servidor = "192.168.1.10"
ip_campainha = "192.168.1.20"
voltagem=3333
function desliga_circuito()
print("Colocando ESP em Deep Sleep")
node.dsleep(0)
end
function read_voltage()
    -- Disconnect from WiFi so the ESP supply voltage can be read.
wifi.sta.disconnect()
voltagem = adc.readvdd33()
print("Voltagem: "..voltagem)
    -- Initialize WiFi and connect to the server
print("Inicializando WiFi")
init_wifi()
end
function pisca_led()
gpio.write(led_pin, status_led)
if status_led == gpio.LOW then
status_led = gpio.HIGH
else
status_led = gpio.LOW
end
end
function init_pins()
gpio.mode(led_pin, gpio.OUTPUT)
gpio.write(led_pin, status_led)
end
function init_wifi()
wifi.setmode(wifi.STATION)
wifi.sta.config("SSID", "password")
wifi.sta.connect()
wifi.sta.setip({ip=ip_campainha,netmask="255.255.255.0",gateway="192.168.1.1"})
    -- Wait for the WiFi connection before sending the request.
function try_connect()
if (wifi.sta.status() == 5) then
tmr.stop(0)
print("Conectado, mandando request")
manda_request()
            -- If no confirmation is received within 15 seconds, shut the ESP down.
tmr.alarm(2,15000,0, desliga_circuito)
else
print("Conectando...")
end
end
tmr.alarm(0,1000,1, function() try_connect() end )
end
function manda_request()
tmr.alarm(1, 200, 1, pisca_led)
print("Request enviado")
    -- Create the TCP connection
conn=net.createConnection(net.TCP,false)
    -- Send the doorbell press and the battery voltage to the server
conn:on("connection", function(conn)
conn:send("GET /?bateria=" ..voltagem.. " HTTP/1.0\r\n\r\n")
end)
    -- If "Tocou" is received from the server, shut the ESP down.
conn:on("receive", function(conn, data)
if data:find("Tocou") ~= nil then
desliga_circuito()
end
end)
    -- Connect to the server
conn:connect(9999,ip_servidor)
end
print("Inicializando pinos")
init_pins()
print ("Lendo voltagem")
read_voltage()
# !/usr/bin/python2
import time
import BaseHTTPServer
import os
import random
import string
import requests
from urlparse import parse_qs, urlparse
HOST_NAME = '0.0.0.0'
PORT_NUMBER = 9999
# MP3_DIR is built from the user's HOME directory + Music/Campainha
# (e.g.: /home/usuario/Music/Campainha)
MP3_DIR = os.path.join(os.getenv('HOME'), 'Music', 'Campainha')
VALID_CHARS = set(string.ascii_letters + string.digits + '_.')
CHAVE_THINGSPEAK = 'XYZ11ZYX99XYZ1XX'
# Save the log file in the user's home directory (e.g.: /home/usuario/campainha.log)
ARQUIVO_LOG = os.path.join(os.getenv('HOME'), 'campainha.log')
def filtra(mp3):
if not mp3.endswith('.mp3'):
return False
for c in mp3:
if not c in VALID_CHARS:
return False
return True
def log(msg, output_file=None):
if output_file is None:
output_file = open(ARQUIVO_LOG, 'a')
output_file.write('%s: %s\n' % (time.asctime(), msg))
output_file.flush()
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(s):
s.send_header("Content-type", "text/plain")
query = urlparse(s.path).query
if not query:
s.send_response(404)
s.end_headers()
s.wfile.write('Not found')
return
components = dict(qc.split('=') for qc in query.split('&'))
if not 'bateria' in components:
s.send_response(404)
s.end_headers()
s.wfile.write('Not found')
return
s.send_response(200)
s.end_headers()
s.wfile.write('Tocou')
s.wfile.flush()
log("Atualizando thingspeak")
r = requests.post('https://api.thingspeak.com/update',
data={'api_key': CHAVE_THINGSPEAK, 'field1': components['bateria']})
log("Thingspeak retornou: %d" % r.status_code)
log("Tocando MP3")
mp3s = [f for f in os.listdir(MP3_DIR) if filtra(f)]
mp3 = random.choice(mp3s)
os.system("mpv " + os.path.join(MP3_DIR, mp3))
if __name__ == '__main__':
server_class = BaseHTTPServer.HTTPServer
httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
log("Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER))
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
log("Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER))
import numpy as np
import csv
with open('database.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in spamreader:
print ', '.join(row)
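# --- Added analysis sketch (not part of the original project) ---
# Assuming 'database.csv' is the ThingSpeak channel export, which normally contains a
# 'created_at' timestamp and 'field1' (the battery voltage in mV sent by the ESP),
# pandas gives a quick view of how the battery is draining over time:
import pandas as pd
import matplotlib.pyplot as plt
export = pd.read_csv('database.csv', parse_dates=['created_at'])
export.plot(x='created_at', y='field1', title='Doorbell battery voltage (mV)')
plt.show()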
References
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: Local server
    Step2: Export the database from the IoT device dashboard
Step3: Method
|
15,129 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
!head -n 30 open_exoplanet_catalogue.txt
data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',')
assert data.shape==(1993,24)
plt.hist(data[:,2], bins=24, range=(0,12))
plt.xlabel('M_JUP')
plt.ylabel('Number of Planets')
assert True # leave for grading
plt.scatter(data[:,5],data[:,6])
plt.xscale('symlog', subsx=[1,2,3,4,5])
plt.xlim(0,1)
plt.xlabel('Semi-major axis (AU)')
plt.ylim(0,1)
plt.ylabel('Eccentricity')
assert True # leave for grading
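# --- Added sanity check (optional; not required for grading) ---
# Several catalogue columns contain NaN for missing measurements, so it can be useful
# to know how many planets actually have a measured mass before interpreting the plots.
mass = data[:, 2]
print("Planets with a measured mass:", int(np.sum(~np.isnan(mass))))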
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exoplanet properties
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
|
15,130 | <ASSISTANT_TASK:>
Python Code:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
import os
from google.cloud import bigquery
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR BUCKET NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
%%bash
## Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w # TODO: Add dataset name)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:# TODO: Add dataset name
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
# TODO: Add selected raw features and preprocessed features
FROM
publicdata.samples.natality
WHERE
# TODO: Add filters
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
# TODO: Replace is_male and plurality as indicated above
FROM
babyweight.babyweight_data
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
# TODO: Modulo hashmonth to be approximately 75% of the data
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
# TODO: Modulo hashmonth to be approximately 25% of the data
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = # TODO: Add dataset name
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in [# TODO: Loop over train and eval]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import necessary libraries.
Step2: Lab Task #1
Step3: The source dataset
Step4: Create the training and evaluation data tables
Step5: Lab Task #3
Step6: Lab Task #4
Step7: Split augmented dataset into eval dataset
Step8: Verify table creation
Step9: Lab Task #5
Step10: Verify CSV creation
|
15,131 | <ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,sklearn
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases'
'/breast-cancer-wisconsin/wdbc.data', header=None)
df.shape
df = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wdbc/wdbc.data', header=None)
df.head()
from sklearn.preprocessing import LabelEncoder
X = df.loc[:, 2:].values
y = df.loc[:, 1].values
le = LabelEncoder()
y = le.fit_transform(y)
le.transform(['M', 'B'])
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.20, random_state=1)
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
pipe_lr = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(random_state=1))])
pipe_lr.fit(X_train, y_train)
print('Test Accuracy: %.3f' % pipe_lr.score(X_test, y_test))
y_pred = pipe_lr.predict(X_test)
Image(filename='./images/06_01.png', width=500)
Image(filename='./images/06_02.png', width=500)
Image(filename='./images/06_03.png', width=500)
import numpy as np
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
else:
from sklearn.model_selection import StratifiedKFold
if Version(sklearn_version) < '0.18':
kfold = StratifiedKFold(y=y_train,
n_folds=10,
random_state=1)
else:
kfold = StratifiedKFold(n_splits=10,
random_state=1).split(X_train, y_train)
scores = []
for k, (train, test) in enumerate(kfold):
pipe_lr.fit(X_train[train], y_train[train])
score = pipe_lr.score(X_train[test], y_train[test])
scores.append(score)
print('Fold: %s, Class dist.: %s, Acc: %.3f' % (k+1,
np.bincount(y_train[train]), score))
print('\nCV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import cross_val_score
scores = cross_val_score(estimator=pipe_lr,
X=X_train,
y=y_train,
cv=10,
n_jobs=1)
print('CV accuracy scores: %s' % scores)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
Image(filename='./images/06_04.png', width=600)
import matplotlib.pyplot as plt
if Version(sklearn_version) < '0.18':
from sklearn.learning_curve import learning_curve
else:
from sklearn.model_selection import learning_curve
pipe_lr = Pipeline([('scl', StandardScaler()),
('clf', LogisticRegression(penalty='l2', random_state=0))])
train_sizes, train_scores, test_scores =\
learning_curve(estimator=pipe_lr,
X=X_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=1)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='blue', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='green', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.tight_layout()
# plt.savefig('./figures/learning_curve.png', dpi=300)
plt.show()
if Version(sklearn_version) < '0.18':
from sklearn.learning_curve import validation_curve
else:
from sklearn.model_selection import validation_curve
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
train_scores, test_scores = validation_curve(
estimator=pipe_lr,
X=X_train,
y=y_train,
param_name='clf__C',
param_range=param_range,
cv=10)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='blue', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='blue')
plt.plot(param_range, test_mean,
color='green', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xscale('log')
plt.legend(loc='lower right')
plt.xlabel('Parameter C')
plt.ylabel('Accuracy')
plt.ylim([0.8, 1.0])
plt.tight_layout()
# plt.savefig('./figures/validation_curve.png', dpi=300)
plt.show()
from sklearn.svm import SVC
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
pipe_svc = Pipeline([('scl', StandardScaler()),
('clf', SVC(random_state=1))])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range,
'clf__kernel': ['linear']},
{'clf__C': param_range,
'clf__gamma': param_range,
'clf__kernel': ['rbf']}]
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring='accuracy',
cv=10,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
clf = gs.best_estimator_
clf.fit(X_train, y_train)
print('Test accuracy: %.3f' % clf.score(X_test, y_test))
Image(filename='./images/06_07.png', width=500)
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring='accuracy',
cv=2)
# Note: Optionally, you could use cv=2
# in the GridSearchCV above to produce
# the 5 x 2 nested CV that is shown in the figure.
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
from sklearn.tree import DecisionTreeClassifier
gs = GridSearchCV(estimator=DecisionTreeClassifier(random_state=0),
param_grid=[{'max_depth': [1, 2, 3, 4, 5, 6, 7, None]}],
scoring='accuracy',
cv=2)
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
Image(filename='./images/06_08.png', width=300)
from sklearn.metrics import confusion_matrix
pipe_svc.fit(X_train, y_train)
y_pred = pipe_svc.predict(X_test)
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
fig, ax = plt.subplots(figsize=(2.5, 2.5))
ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.3)
for i in range(confmat.shape[0]):
for j in range(confmat.shape[1]):
ax.text(x=j, y=i, s=confmat[i, j], va='center', ha='center')
plt.xlabel('predicted label')
plt.ylabel('true label')
plt.tight_layout()
# plt.savefig('./figures/confusion_matrix.png', dpi=300)
plt.show()
le.transform(['M', 'B'])
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred, labels=[1, 0])
print(confmat)
from sklearn.metrics import precision_score, recall_score, f1_score
print('Precision: %.3f' % precision_score(y_true=y_test, y_pred=y_pred))
print('Recall: %.3f' % recall_score(y_true=y_test, y_pred=y_pred))
print('F1: %.3f' % f1_score(y_true=y_test, y_pred=y_pred))
from sklearn.metrics import make_scorer
scorer = make_scorer(f1_score, pos_label=0)
c_gamma_range = [0.01, 0.1, 1.0, 10.0]
param_grid = [{'clf__C': c_gamma_range,
'clf__kernel': ['linear']},
{'clf__C': c_gamma_range,
'clf__gamma': c_gamma_range,
'clf__kernel': ['rbf']}]
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring=scorer,
cv=10,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
from sklearn.metrics import roc_curve, auc
from scipy import interp
pipe_lr = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(penalty='l2',
random_state=0,
C=100.0))])
X_train2 = X_train[:, [4, 14]]
if Version(sklearn_version) < '0.18':
cv = StratifiedKFold(y_train,
n_folds=3,
random_state=1)
else:
cv = list(StratifiedKFold(n_splits=3,
random_state=1).split(X_train, y_train))
fig = plt.figure(figsize=(7, 5))
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
probas = pipe_lr.fit(X_train2[train],
y_train[train]).predict_proba(X_train2[test])
fpr, tpr, thresholds = roc_curve(y_train[test],
probas[:, 1],
pos_label=1)
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
tpr,
lw=1,
label='ROC fold %d (area = %0.2f)'
% (i+1, roc_auc))
plt.plot([0, 1],
[0, 1],
linestyle='--',
color=(0.6, 0.6, 0.6),
label='random guessing')
mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
label='mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 0, 1],
[0, 1, 1],
lw=2,
linestyle=':',
color='black',
label='perfect performance')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.title('Receiver Operator Characteristic')
plt.legend(loc="lower right")
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
pipe_lr = pipe_lr.fit(X_train2, y_train)
y_pred2 = pipe_lr.predict(X_test[:, [4, 14]])
from sklearn.metrics import roc_auc_score, accuracy_score
print('ROC AUC: %.3f' % roc_auc_score(y_true=y_test, y_score=y_pred2))
print('Accuracy: %.3f' % accuracy_score(y_true=y_test, y_pred=y_pred2))
pre_scorer = make_scorer(score_func=precision_score,
pos_label=1,
greater_is_better=True,
average='micro')
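# A minimal follow-up sketch (not from the original chapter code): the custom scorer
# defined above can be plugged straight into a grid search via the `scoring` argument.
gs_pre = GridSearchCV(estimator=pipe_svc,
                      param_grid=param_grid,
                      scoring=pre_scorer,
                      cv=10,
                      n_jobs=-1)
gs_pre = gs_pre.fit(X_train, y_train)
print(gs_pre.best_score_)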
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: Streamlining workflows with pipelines
Step3: <hr>
Step4: <hr>
Step5: <br>
Step6: <br>
Step7: <br>
Step8: <br>
Step9: <br>
Step10: <br>
Step11: <br>
Step12: <br>
Step13: Additional Note
Step14: Next, we printed the confusion matrix like so
Step15: Note that the (true) class 0 samples that are correctly predicted as class 0 (true negatives) are now in the upper left corner of the matrix (index 0, 0). In order to change the ordering so that the true negatives are in the lower right corner (index 1,1) and the true positves are in the upper left, we can use the labels argument like shown below
Step16: We conclude
Step17: <br>
Step18: <br>
|
15,132 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import bird_data as data
import wav_utils
from IPython.core.display import HTML
# Plot in the notebook
%pylab inline
data.call_df.head(10)
HTML(data.label_df.head(10).to_html())
data.call_df.type.unique()
wavs = data.get_wav_dict()
three_call_clipnames = data.label_df.index[data.label_df.sum(axis=1)==3]
plot(wavs[three_call_clipnames[1]])
wav_utils.view_clip(three_call_clipnames[4])
wav_utils.view_clip(three_call_clipnames[1])
wav_utils.view_clip(data.label_df.index[32])
wav_utils.view_clip(data.label_df.index[8])
wav_utils.view_clip(data.label_df.index[31])
wav_utils.view_clip(data.label_df.index[34])
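# --- Added illustration (assumption: wav_utils.view_clip already renders something
# similar; this only shows the underlying matplotlib call on the raw samples) ---
clip = wavs[three_call_clipnames[1]]
plt.figure(figsize=(12, 4))
plt.specgram(clip, NFFT=512, noverlap=256, Fs=16000)  # Fs is a guess; use the clip's real sample rate
plt.xlabel('time (s)')
plt.ylabel('frequency (Hz)')
plt.show()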
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore the data
Step2: How do we know the calls in a particular training audio clip?
Step3: Exercises
Step4: What about the sound clips?
Step5: Find all the clips with three calls
    Step6: Plot the audio for one of these three-bird clips
Step7: That wasn't too helpful, maybe a spectrogram will be more illuminating
Step8: Much better, how about some more clips
Step9: Is that a car in the background?
Step10: Uhhhhh, anyone there?
Step11: That's an uncommonly interesting call, Firecrest
|
15,133 | <ASSISTANT_TASK:>
Python Code:
##Some code to run at the beginning of the file, to be able to show images in the notebook
##Don't worry about this cell
#Print the plots in this screen
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
#Usual imports
import pandas as pd
import numpy as np
import pylab as plt
#Dictionary
this_is_a_dict = {"Javier": "garcia@uva.nl", "Friend1": "f1@uva.nl", "Friend2": "f2@uva.nl"}
print(this_is_a_dict)
#Get an element
print(this_is_a_dict["Friend2"])
print(this_is_a_dict.get("Friend2"))
#The difference between the two is that the first line raises an error if the key
#is not in the dictionary, while the second one returns None
print(this_is_a_dict.get("Friend5")) #not enough friends
#Create an element
this_is_a_dict["Friend3"] = "f3@uva.nl"
this_is_a_dict
#Print the keys
print(this_is_a_dict.keys())
#Print the values
print(this_is_a_dict.values())
del this_is_a_dict["Friend3"]
print(this_is_a_dict)
#Creating dictionary using two lists
list_names = ["Javier", "Friend1", "Friend2"]
list_numbers = ["garcia@uva.nl","f1@uva.nl","f2@uva.nl"]
#Put both together using zip
this_is_a_dict = dict(zip(list_names,list_numbers))
print(this_is_a_dict)
#The zip object is another strange data structure that we cannot see (like range)
print(zip(list_names,list_numbers))
#But we can convert it to a list to see how it looks (like range)
print(list(zip(list_names,list_numbers)))
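#An added mini-demonstration (not in the original notebook) of why dictionaries are useful:
#finding a key in a dict takes roughly constant time, while finding an element in a list
#means scanning through it one item at a time.
big_list = list(range(100000))
big_dict = dict(zip(big_list, big_list))
%timeit 99999 in big_list #slow: walks the whole list
%timeit 99999 in big_dict #fast: a single hash lookup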
## Our own functions
def mean_ours(list_numbers): #list_numbers is the arguments
    """
    This is called the docstring, it is a comment describing the function. In this case the function calculates the mean of a list of numbers.
    input
    list_numbers: a list of numbers
    output: the mean of the input
    """
#what gives back
return sum(list_numbers)/len(list_numbers)
##INDENTATION!!
##Two points after the "def"
mean_ours?
aList = [2,3,4]
print(mean_ours(aList)) #this is how you call the function
def f():
local_var1 = 2
local_var2 = 3
local_var = local_var1*local_var2
print(local_var)
#Call the function
f()
def f():
local_var1 = 2
local_var2 = 2
local_var = local_var1*local_var2
#Call the function
f()
#We haven't created local_var
print(local_var)
def f():
local_var1 = 2
local_var2 = 2
local_var = local_var1*local_var2
return local_var
#Call the function
local_var = f()
#Now we have local_var (but generally it is not a good idea to use the same name)
print(local_var)
local_var = "python"
def f():
print(local_var) #this can read the variable outside, but NOT CHANGE IT (except .pop() and .append())
#it's okay for functions not to return anything, by default they return None
#Call the function
f()
#We can also see it from outside the function
print(local_var)
#Imagine we want to find what some articles are talking about, we could do it like this,
#but it's unfeasible when you have more than a dozen articles
import numpy as np
list_articles = ["article 1: blah python",
"article 2: blah Trump",
"article 3: blah Trump",
"article 4: blah Trump"]#many article
print("python" in list_articles[0])
print("python" in list_articles[1])
print("python" in list_articles[2])
print("python" in list_articles[3])
#...
#but we can use for loops
for article in list_articles:
print("python" in article)
#this is very common as well (especially in other programming languages)
for index in [0,1,2,3]:
print("python" in list_articles[index])
#this is sometimes useful when we want both the article and the index
for index,article in enumerate(list_articles):
print(index, "python" in article)
for index,article in enumerate(list_articles):
if index == 2: break
print(index, "python" in article)
for index,article in enumerate(list_articles):
if index%2 == 0: continue #this skips the rest of the code below if the number is even
print(index, "python" in article)
article = "article 2: blah Trump"
if "python" in article:
print("Article refering to Python")
elif "Trump" in article:
print("Article refering to Trump")
else:
print("Article not refering to Python or Trump")
def python_or_trump(article):
    """
    prints if an article is related to python or trump
    input
    article: string with words
    """
if "python" in article:
print("Article refering to Python")
elif "Trump" in article:
print("Article refering to Trump")
else:
print("Article not refering to Python or Trump")
article = "article 2: blah Trump"
print(article)
#this is how you call the function
python_or_trump(article)
#stops when python is found, never check for trump
article = "article 2: blah Trump python"
print(article)
python_or_trump(article)
article = "article 2: blah blah"
print(article)
python_or_trump(article)
list_articles = ["article 1: blah python",
"article 2: blah Trump",
"article 3: blah Trump",
"article 4: blah Trump"]#many articles
for article in list_articles:
python_or_trump(article)
def count_words(list_articles):
    """
    input: list of articles
    output: number of articles with the word trump and with the word python
    """
count_trump = 0
count_python = 0
for article in list_articles:
if "python" in article:
count_python = count_python + 1 #count_python += 1
if "Trump" in article:
            count_trump = count_trump + 1 #count_trump += 1
return count_trump,count_python
import numpy as np
list_articles = ["article 1: blah python",
"article 2: blah Trump",
"article 3: blah Trump",
"article 4: blah Trump"]#many articles
count_trump,count_python = count_words(list_articles)
print("python articles: ", count_python)
print("trump_articles: ", count_trump)
#Let's use a list of numbers instead of two separate variables for the counter
list_articles = ["article 1: blah python",
"article 2: blah Trump",
"article 3: blah Trump",
"article 4: blah Trump"]#many articles
def count_words(list_articles):
counters = [0]*2
for article in list_articles:
if "python" in article:
counters[0] += 1 #count_python += 1
if "Trump" in article:
counters[1] += 1 #count_python += 1
return counters
counters = count_words(list_articles)
print("python articles: ", counters[0])
print("trump_articles: ", counters[1])
# And allow for any two words, not just python or Trump
list_articles = ["article 1: blah python",
"article 2: blah Trump",
"article 3: blah Trump",
"article 4: blah Trump"]#many articles
def count_words(list_articles,words):
counters = [0]*2
for article in list_articles:
if words[0] in article:
counters[0] += 1 #count_python += 1
elif words[1] in article:
counters[1] += 1 #count_python += 1
return counters
counters = count_words(list_articles,words=["python","blah"])
print("python articles: ", counters[0])
print("blah_articles: ", counters[1])
# And allow for any number of words, not just two
list_articles = ["article 1: blah python",
"article 2: blah Trump",
"article 3: blah Trump",
"article 4: blah Trump"]#many articles
def count_words(list_articles,words):
counters = [0] * len(words)
for index in range(len(list_articles)):
article = list_articles[index]
for i in range(len(words)):
if words[i] in article:
counters[i] += 1
return counters
words = ["python","Trump","blah"]
counters = count_words(list_articles,words)
print(words)
print(counters)
#We can make a dictionary out of it
d_word2counter = dict(zip(words,counters))
d_word2counter["Trump"]
#For instance this fails, because we don't have more than 2 friends
this_is_a_dict = {"Javier": "garcia@uva.nl", "Friend1": "f1@uva.nl", "Friend2": "f2@uva.nl"}
this_is_a_dict["Friend5"]
#example how to fix it
#the indents are important, as well as the colons
try:
print(this_is_a_dict["Friend5"])
except KeyError:
print("Not enough friends")
#but this one is very common and we have a function that does it for us
print(this_is_a_dict.get("Friend5"))
with open("data/file_to_write.csv","w+") as f:
f.write("I'm line number {}".format(0))
f.write("I'm line number {}".format(1))
f.write("I'm line number {}".format(2))
f.write("I'm line number {}".format(3))
f.write("I'm line number {}".format(4))
with open("data/file_to_write.csv","w+") as f:
f.write("I'm line number {}\n".format(0))
f.write("I'm line number {}\n".format(1))
f.write("I'm line number {}\n".format(2))
f.write("I'm line number {}\n".format(3))
f.write("I'm line number {}\n".format(4))
#Ways to read files
with open("data/file_to_write.csv") as f:
#way 1
all_file = f.read()
print(all_file)
with open("data/file_to_write.csv") as f:
#way 2
all_file_by_line = f.readlines()
print(all_file_by_line)
with open("data/file_to_write.csv") as f:
#way 3
for line in f:
print(line)
with open("data/file_to_write.csv") as f:
#way 3
for line in f:
print(line.rstrip())
#A character is a special type of number
ord("ñ")
#A string is very similar to a list of characters
"abdc"[3]
#A boolean is a number
print(True == 1)
#A numpy array is a special type of list
#A pandas dataframe is a list of numpy arrays
#A set is a dictionary without values {"d":1,"e":3} vs {"d","e"}
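#A small added example of that last point: a set keeps only "keys"
emails_dict = {"d": 1, "e": 3} #dictionary: keys with values
emails_set = {"d", "e"} #set: just the keys
print("d" in emails_dict, "d" in emails_set) #membership tests look the same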
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PYTHON
Step2: OPERATIONS IN DICT
Step3: Add
Step4: Remove
Step5: Creating a dictionary from two lists
    Step7: Why use a dict? Because it's much faster than a list: finding an element in a dict takes roughly the same time no matter how big it is, which is not the case for a list
Step8: How the arguments of a function work
Step9: Variables created inside functions are only seen within the function
Step10: Variables created outside functions are seen by all the code (be careful!)
Step11: 1.4 For-Loops
Step12: what if we want to stop a loop?
Step13: what if we want to skip some rounds? Then we use continue
Step14: 1.5 Control flow = if-else statements
Step16: 1.x1 Let's combine all we learned so far
Step17: Now we do it for many articles
Step19: 1.x2 Let's combine all we learned so far
Step20: Let's make it a bit more flexible
Step21: what if we want a loop but we don't know when we need to stop?
Step22: 2. Writing and reading from disk
Step23: But remember to add a "return character" (\n)
Step24: There are 3 ways to read a file
Step25: Read it breaking in the "\n"
Step26: Read it line by line
Step27: you can delete the "\n" at the end of the string with .rstrip()
Step28: In-class exercises
|
15,134 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
model.evaluate(test_dataset)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load NumPy data with tf.data
Step2: Loading from an .npz file
Step3: Load NumPy arrays with tf.data.Dataset
Step4: Using the datasets
Step5: Building and training the model
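As an added illustration of Steps 3-4 (not part of the original tutorial), one way to peek at a single shuffled batch from the pipeline built above, reusing the train_dataset variable from the code in this entry:
for images, labels in train_dataset.take(1):
    # one batch of (BATCH_SIZE, 28, 28) images with matching integer labels
    print(images.shape, labels.shape)
    print(labels[:10].numpy())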
|
15,135 | <ASSISTANT_TASK:>
Python Code:
# # Dash packages installation
# !conda install -c conda-forge dash-renderer -y
# !conda install -c conda-forge dash -y
# !conda install -c conda-forge dash-html-components -y
# !conda install -c conda-forge dash-core-components -y
# !conda install -c conda-forge plotly -y
import dash
import dash_core_components as dcc
import dash_html_components as html
import flask
from dash.dependencies import Input, Output
import plotly.graph_objs as go
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', 999)
import pandas.io.sql as psql
d_lab = pd.read_csv("data/D_LABITEMS.csv")
d_lab.columns = map(str.lower, d_lab.columns)
d_lab.drop(columns = ['row_id'], inplace = True)
lab = pd.read_csv("data/LABEVENTS.csv")
lab.columns = map(str.lower, lab.columns)
lab = lab[lab['subject_id'] == 41976]
lab.drop(columns = ['row_id'], inplace = True)
lab = pd.merge(d_lab, lab, on = 'itemid', how = 'inner')
print(lab.columns)
lab[['subject_id', 'hadm_id', 'itemid', 'label', 'value']].head()
presc = pd.read_csv("data/PRESCRIPTIONS.csv")
presc.columns = map(str.lower, presc.columns)
presc = presc[presc['subject_id'] == 41976]
presc.drop(columns = ['row_id'], inplace = True)
print(presc.columns)
presc[['subject_id', 'hadm_id', 'icustay_id', 'drug']].head()
lab['charttime'] = pd.to_datetime(lab['charttime'], errors = 'coerce')
lab.sort_values(by='charttime', inplace=True)
lab.set_index('charttime', inplace = True)
lab.head(1)
presc['dose_val_rx'] = pd.to_numeric(presc['dose_val_rx'], errors = 'coerce')
presc = presc[presc['dose_unit_rx']=='mg']
presc = presc[presc['drug'].isin(['Vancomycin','Meropenem','Levofloxacin'])]
temp_df = pd.DataFrame()
for item in presc.drug.unique():
temp = presc[presc['drug'].str.contains(item)]
temp['norm_size'] = temp['dose_val_rx'] / temp['dose_val_rx'].max()
temp_df = temp_df.append(temp)
presc = pd.merge(presc, temp_df, on=list(presc.columns))
presc['startdate'] = pd.to_datetime(presc['startdate'], errors = 'coerce')
presc.sort_values(by='startdate', inplace=True)
presc.set_index('startdate', inplace = True)
presc.head(1)
list_patient = ['41976']
list_biomarker = ['White Blood Cells', 'Neutrophils']
list_drug = ['Vancomycin','Meropenem','Levofloxacin']
# stylesheets = ['./resources/bWLwgP.css']
app = dash.Dash()
app.layout = html.Div([
dcc.Dropdown(
id = 'patient',
value = '41976',
multi = False,
options = [{'label': i, 'value': i} for i in list_patient],
),
dcc.Dropdown(
id = 'biomarker',
value = 'White Blood Cells',
multi = False,
options = [{'label': i, 'value': i} for i in list_biomarker],
),
dcc.Dropdown(
id = 'drug',
value = ['Vancomycin'],
multi = True,
options = [{'label': i, 'value': i} for i in list_drug],
),
dcc.Graph(id = 'graph'),
])
@app.callback(Output('graph', 'figure'),
[Input('patient', 'value'),
Input('biomarker', 'value'),
Input('drug', 'value')])
def update_graph(patient, biomarker, drug):
traces = []
temp_l = lab[lab['subject_id'].astype(str) == patient]
temp_p = presc[presc['subject_id'].astype(str) == patient]
temp_min = 0
item = biomarker
temp = temp_l[temp_l['label'] == item]
temp_min = float(temp.value.astype(float).min())
trace = go.Scatter(
x = temp.index,
y = temp.value,
name = item,
mode = 'lines+markers',
)
traces.append(trace)
for i, item in enumerate(drug):
temp = temp_p[ temp_p['drug'] == item]
trace = go.Scatter(
x = temp.index,
y = np.ones((1, len(temp)))[0] * temp_min - i - 1,
name = item,
mode = 'markers',
marker = {
'size': temp.norm_size * 10
}
)
traces.append(trace)
layout = go.Layout(
legend = {'x': 0.5, 'y': -0.1, 'orientation': 'h', 'xanchor': 'center'},
margin = {'l': 300, 'b': 10, 't': 10, 'r': 300},
hovermode = 'closest',
)
return {'data': traces, 'layout': layout}
app.run_server(port = 8050)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data collection (2/6)
Step2: Data preparation for labevents table (3/6)
Step3: Data preparation for prescriptions table (4/6)
Step4: Create a structure and presentation of your web with HTML and CSS (5/6)
Step5: Define the reactive behavior with Python (6/6)
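The per-drug dose normalisation in the prescriptions preparation (Step3) is written above as an explicit loop over presc.drug.unique(); a more idiomatic pandas sketch of the same idea uses groupby/transform. This is only an added illustration of that design choice, reusing the column names from the code in this entry.
presc['norm_size'] = (
    presc['dose_val_rx'] / presc.groupby('drug')['dose_val_rx'].transform('max')
)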
|
15,136 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
idx, B = load_data()
C = B.index_select(1, idx)
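# Added illustration (not part of the original snippet): index_select(1, idx) gathers
# whole columns of B in the order given by idx. A tiny self-contained demo with
# made-up tensors, since load_data() is not defined here:
B_demo = torch.arange(12).reshape(3, 4)      # 3x4 matrix holding 0..11
idx_demo = torch.tensor([3, 0])              # take column 3, then column 0
C_demo = B_demo.index_select(1, idx_demo)    # result has shape (3, 2)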
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,137 | <ASSISTANT_TASK:>
Python Code:
df = pd.read_csv('../resource/final_df3.csv')
sample = df.title
y = df['rating(y)'].values
real_X = df[['avg_rating']].values
cat_X = df.text.fillna("").values
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, TfidfTransformer
count_vect = CountVectorizer()
X_counts = count_vect.fit_transform(cat_X)
tfidf_vect = TfidfVectorizer()
X_tfidf = tfidf_vect.fit_transform(cat_X)
from sklearn.cross_validation import StratifiedKFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import mean_absolute_error
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 2, 0.01):
mnb = MultinomialNB(alpha = a)
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = X_counts[train_idx]
y_train = y[train_idx]
X_test = X_counts[test_idx]
y_test = y[test_idx]
mnb.fit(X_train, y_train)
y_pred = mnb.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 2)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 1, 0.01):
mnb = MultinomialNB(alpha = a)
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = X_tfidf[train_idx]
y_train = y[train_idx]
X_test = X_tfidf[test_idx]
y_test = y[test_idx]
mnb.fit(X_train, y_train)
y_pred = mnb.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 1)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
from sklearn.grid_search import GridSearchCV
parameters = {
'vect__ngram_range': [(1, 1), (1, 2), (1, 3), (1, 4), ],
'tfidf__use_idf' : [True, False],
'clf__alpha' : np.arange(0, 1, 0.01),
}
gs_clf = GridSearchCV(text_clf, parameters, cv=5, scoring='mean_absolute_error', n_jobs=-1)
gs_clf = gs_clf.fit(cat_X, y)
best_parameters, score, _ = max(gs_clf.grid_scores_, key=lambda x: x[1])
for param_name in sorted(parameters.keys()):
print("{name}: {best}".format(
name=param_name, best=best_parameters[param_name]
))
print("="*25)
print('score :', score)
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 0.45, 0.01):
text_clf = Pipeline([
('vect', CountVectorizer(ngram_range=(1, 2))),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB(alpha=a)),
])
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = cat_X[train_idx]
y_train = y[train_idx]
X_test = cat_X[test_idx]
y_test = y[test_idx]
text_clf.fit(X_train, y_train)
y_pred = text_clf.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 0.45)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
from sklearn.naive_bayes import GaussianNB, MultinomialNB
gnb = GaussianNB()
mnb = Pipeline([
('vect', CountVectorizer(ngram_range=(1, 2),)),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB(alpha=0.3)),
])
gnb.fit(real_X, y)
gnb_pred = gnb.predict(real_X)
gnb_prob = gnb.predict_proba(real_X)
mnb.fit(cat_X, y)
mnb_pred = mnb.predict(cat_X)
mnb_prob = mnb.predict_proba(cat_X)
mix_prob = np.multiply(gnb_prob, mnb_prob)
mix_prob.shape
def softmax(w, t=1.0):
    """Calculate the softmax of a list of numbers w.

    Parameters
    ----------
    w : list of numbers
    t : float

    Return
    ------
    a list of the same length as w of non-negative numbers

    Examples
    --------
    >>> softmax([0.1, 0.2])
    array([ 0.47502081, 0.52497919])
    >>> softmax([-0.1, 0.2])
    array([ 0.42555748, 0.57444252])
    >>> softmax([0.9, -10])
    array([ 9.99981542e-01, 1.84578933e-05])
    >>> softmax([0, 10])
    array([ 4.53978687e-05, 9.99954602e-01])
    """
e = np.exp(np.array(w) / t)
dist = e / np.sum(e)
return dist
mix_prob_softmax = np.zeros((544, 5))
for i in range(544):
mix_prob_softmax[i] = softmax(mix_prob[i])
mix_prob_softmax
np.sum(mix_prob_softmax[0])
mix_pred = np.zeros(544, )
for i in range(544):
mix_pred[i] = np.argmax(mix_prob_softmax[i])
mix_pred += 1 # star ratings run from 1 to 5 (int)
mix_pred
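# Added note: the row-wise softmax/argmax loop above can be written in one vectorised
# line, and because softmax is monotonic the argmax is the same with or without it.
mix_pred_vec = mix_prob_softmax.argmax(axis=1) + 1  # same values as mix_pred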
test_df = pd.read_excel('../resource/test_df.xlsx')
test_sample = test_df.title
test_y = test_df['my_rating'].values
test_real_X = test_df[['avg_rating']].values
test_cat_X = test_df.text
test_watcha_y = test_df['watcha_rating'].values
gnb_test_pred = gnb.predict(test_real_X)
gnb_test_prob = gnb.predict_proba(test_real_X)
mnb_test_pred = mnb.predict(test_cat_X)
mnb_test_prob = mnb.predict_proba(test_cat_X)
mix_test_prob = np.multiply(gnb_test_prob, mnb_test_prob)
mix_test_prob_softmax = np.zeros((12, 5))
for i in range(12):
mix_test_prob_softmax[i] = softmax(mix_test_prob[i])
mix_test_prob_softmax
np.sum(mix_test_prob_softmax[0])
mix_test_pred = np.zeros(12, )
for i in range(12):
mix_test_pred[i] = np.argmax(mix_test_prob_softmax[i])
mix_test_pred += 1 # star ratings run from 1 to 5 (int)
mix_test_pred
test_df['predict'] = mix_test_pred
test_df
mix_score = mean_absolute_error(mix_test_pred, test_y)
watcha_score = mean_absolute_error(test_watcha_y, test_y)
print('mix_score :', mix_score)
print('watcha_score :', watcha_score)
# round watcha_rating to the nearest integer and measure the score again
test_watchar_round_y = np.round(test_watcha_y,)
mean_absolute_error(test_watchar_round_y, test_y)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: training MultiNB & parameter tuning
Step2: CV with X_counts
Step3: CV with X_tfidf
Step4: Tuning & Improvement
Step5: Retraining with new parameters & 1sigma rule
Step7: Gaussian & Multinomial NB fitting
Step8: Final Test
Step9: score
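The "1sigma rule" mentioned in Step5 is, in one common formulation: among all smoothing values whose mean CV error stays within one standard deviation of the best score, prefer the strongest smoothing. The sketch below restates that idea with stand-in arrays; it is an added illustration, not the notebook's exact selection code (which picks the score nearest to best + sigma via find_nearest).
import numpy as np
alphas = np.arange(0.0, 1.0, 0.01)
cv_mean = np.random.rand(len(alphas))      # stand-in for mean CV error per alpha
cv_std = np.full(len(alphas), 0.05)        # stand-in for per-alpha std deviation
best = np.argmin(cv_mean)
threshold = cv_mean[best] + cv_std[best]
within_one_sigma = np.where(cv_mean <= threshold)[0]
sub_opt_alpha = alphas[within_one_sigma.max()]  # largest alpha still within one sigma
print(sub_opt_alpha)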
|
15,138 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc6', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
DOC.set_value("Other: ice")
DOC.set_value("bare soil")
DOC.set_value("lake")
DOC.set_value("vegetated")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(1)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("ISLSCP Initiative I (FAO, GISS, U. Arizona, NASA/GSFC)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("ISLSCP Initiative I (FAO, GISS, U. Arizona, NASA/GSFC)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("ISLSCP Initiative I (ERBE)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("N/A")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(6)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
DOC.set_value("Explicit diffusion")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(6)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Thermo dynamics")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
DOC.set_value("Explicit diffusion")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
DOC.set_value("soil moisture freeze-thaw")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(3)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
DOC.set_value("constant")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("diagnostic")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
DOC.set_value("ground snow fraction")
DOC.set_value("vegetation snow fraction")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
DOC.set_value("Other: snow refreezing")
DOC.set_value("snow interception")
DOC.set_value("snow melting")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
DOC.set_value("prognostic")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
DOC.set_value("aerosol deposition")
DOC.set_value("snow age")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
DOC.set_value("vegetation types")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
DOC.set_value("C3 grass")
DOC.set_value("C4 grass")
DOC.set_value("broadleaf tree")
DOC.set_value("needleleaf tree")
DOC.set_value("vegetated")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
DOC.set_value("prescribed (varying from files)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prescribed")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
DOC.set_value("CO2")
DOC.set_value("light")
DOC.set_value("temperature")
DOC.set_value("water availability")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(2)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
DOC.set_value("transpiration")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(2)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
DOC.set_value("present day")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
DOC.set_value("direct (large rivers)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
DOC.set_value("water")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
DOC.set_value("water")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("diagnostic")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
15,139 | <ASSISTANT_TASK:>
Python Code:
import nltk
import random
from nltk.corpus import movie_reviews
import pickle
from nltk.classify import ClassifierI
from statistics import mode
## defining the VoteClassifier class
class VoteClassifier(ClassifierI):
def __init__(self, *classifiers):
self._classifiers = classifiers
def classify(self, features):
votes = []
for c in self._classifiers:
v = c.classify(features)
votes.append(v)
return mode(votes)
def confidence(self, features):
votes = []
for c in self._classifiers:
v = c.classify(features)
votes.append(v)
choice_votes = votes.count(mode(votes))
conf = choice_votes / len(votes)
return conf
# pickle_obj = open("documents.pickle", "wb")
# documents = [(list(movie_reviews.words(fileid)), category)
# for category in movie_reviews.categories()
# for fileid in movie_reviews.fileids(category)]
# pickle.dump(documents, pickle_obj)
# pickle_obj.close()
pickle_obj = open("documents.pickle", "rb")
documents = pickle.load(pickle_obj)
pickle_obj.close()
random.shuffle(documents)
all_words = []
for w in movie_reviews.words():
all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]
def find_features(document):
words = set(document)
features = {}
for w in word_features:
features[w] = (w in words)
return features
#print((find_features(movie_reviews.words('neg/cv000_29416.txt'))))
featuresets = [(find_features(rev), category) for (rev, category) in documents]
training_set = featuresets[:1900]
testing_set = featuresets[1900:]
original_nb = open("naive_bayes.pickle", "rb")
naive_bayes_classifier = pickle.load(original_nb)
original_nb.close()
pickle_file = open("MNB_pickle.pickle", "rb")
MNB_classifier = pickle.load(pickle_file)
pickle_file.close()
pickle_file = open("BNB_pickle.pickle", "rb")
BernoulliNB_classifier = pickle.load(pickle_file)
pickle_file.close()
pickle_file = open("LogisticRegression.pickle", "rb")
LogisticRegression_classifier = pickle.load(pickle_file)
pickle_file.close()
pickle_file = open("SGDClassifier.pickle", "rb")
SGDClassifier_classifier = pickle.load(pickle_file)
pickle_file.close()
pickle_file = open("LinearSVC.pickle", "rb")
LinearSVC_classifier = pickle.load(pickle_file)
pickle_file.close()
pickle_file = open("NuSVC_classifier.pickle", "rb")
NuSVC_classifier = pickle.load(pickle_file)
pickle_file.close()
print("naive bayes: ", (nltk.classify.accuracy(naive_bayes_classifier, testing_set))*100)
print("MNB_classifier: ", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)
print("BernoulliNB_classifier: ", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)
print("LogisticRegression_classifier: ", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)
print("SGDClassifier_classifier: ", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)
print("LinearSVC_classifier: ", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)
print("NuSVC_classifier: ", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100)
voted_classifier = VoteClassifier(
naive_bayes_classifier,
MNB_classifier,
BernoulliNB_classifier,
LogisticRegression_classifier,
SGDClassifier_classifier,
LinearSVC_classifier,
NuSVC_classifier
)
print("Voted classifier accuracy : ", (nltk.classify.accuracy(voted_classifier, testing_set))*100)
print("Classification:", voted_classifier.classify(testing_set[0][0]), "Confidence %:",voted_classifier.confidence(testing_set[0][0])*100)
print("Classification:", voted_classifier.classify(testing_set[1][0]), "Confidence %:",voted_classifier.confidence(testing_set[1][0])*100)
print("Classification:", voted_classifier.classify(testing_set[2][0]), "Confidence %:",voted_classifier.confidence(testing_set[2][0])*100)
print("Classification:", voted_classifier.classify(testing_set[3][0]), "Confidence %:",voted_classifier.confidence(testing_set[3][0])*100)
print("Classification:", voted_classifier.classify(testing_set[4][0]), "Confidence %:",voted_classifier.confidence(testing_set[4][0])*100)
print("Classification:", voted_classifier.classify(testing_set[5][0]), "Confidence %:",voted_classifier.confidence(testing_set[5][0])*100)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading all the classifiers from their respective pickle files
Step2: Passing the classifiers to the voting classifier
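A short added usage sketch of the VoteClassifier defined in the code above: it wraps any number of already-trained NLTK-style classifiers, labels a feature dict by majority vote, and reports the vote fraction as confidence. The classifier and helper names are the ones loaded or defined in this entry.
voted = VoteClassifier(naive_bayes_classifier, MNB_classifier, BernoulliNB_classifier)
feats = find_features(movie_reviews.words('neg/cv000_29416.txt'))
print(voted.classify(feats), voted.confidence(feats))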
|
15,140 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
peaks = []
for i in range(len(a)):
if i==0:
if a[i]>a[i+1]:
peaks.append(i)
elif i!=0 and i!=len(a)-1:
if a[i]>a[i-1] and a[i]>a[i+1]:
peaks.append(i)
elif i==len(a)-1:
if a[i]>a[i-1]:
peaks.append(i)
return peaks
a = [2,0,1,0,2,0,1]
p1 = find_peaks(a)
print(p1)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
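# An equivalent vectorized sketch of the same boundary-inclusive peak rule, included only
# as a cross-check against the loop version above (assumes numpy is imported as np):
def find_peaks_vec(a):
    a = np.asarray(a)
    rises = np.r_[True, a[1:] > a[:-1]]  # strictly greater than the left neighbour, or first element
    falls = np.r_[a[:-1] > a[1:], True]  # strictly greater than the right neighbour, or last element
    return np.nonzero(rises & falls)[0]
assert np.allclose(find_peaks_vec([2,0,1,0,2,0,1]), np.array([0,2,4,6]))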
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
pi_list = []
for i in range(len(pi_digits_str)):
pi_list.append(int(pi_digits_str[i]))
pi_array = np.array(pi_list)
pi_peaks = find_peaks(pi_array)
pi_diff = np.diff(pi_peaks)
max(pi_diff)
list(np.arange(2,11))
g = plt.figure(figsize=(6,6))
plt.hist(pi_diff, bins=max(pi_diff)+1, range=(.5,max(pi_diff)+1.5))
plt.xlim(1.5,12.5)
plt.xticks(np.arange(2,13))
plt.xlabel('Distance Between Peaks')
plt.ylabel('Count')
plt.title('Distance Between Maxima for the First 10,000 Digits of Pi');
assert True # use this for grading the pi digits histogram
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Peak finding
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following: convert the digits to an array of integers, find its peaks, compute the spacing between consecutive peaks with np.diff, and plot a histogram of those spacings.
|
15,141 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format='retina'
# %config InlineBackend.figure_format='svg'
import matplotlib as mpl
import matplotlib.pyplot as plt
import os
import time
from glob import glob
import numpy as np
brick = 23
STARFISH = os.getenv("STARFISH")
isoc_dir = "b23ir3_isoc"
lib_dir = "b23ir3_lib"
synth_dir = "b23ir3_synth"
fit_dir = "b23ir3_fit"
wfc3_bands = ['F110W', 'F160W']
from astropy.coordinates import Distance
import astropy.units as u
from padova import AgeGridRequest, IsochroneRequest
from starfisher import LibraryBuilder
z_grid = [0.015, 0.019, 0.024]
delta_gyr = 0.5
late_ages = np.log10(np.arange(1e9 + delta_gyr, 13e9, delta_gyr * 1e9))
if not os.path.exists(os.path.join(STARFISH, isoc_dir)):
for z in z_grid:
# Young ages in log-grid
r = AgeGridRequest(z,
min_log_age=6.6,
max_log_age=8.9,
delta_log_age=0.05,
phot='wfc3', photsys_version='odfnew')
for isoc in r.isochrone_set:
isoc.export_for_starfish(os.path.join(STARFISH, isoc_dir),
bands=wfc3_bands)
# Old ages in linear grid
for logage in late_ages:
r = IsochroneRequest(z, logage,
phot='wfc3', photsys_version='odfnew')
r.isochrone.export_for_starfish(os.path.join(STARFISH, isoc_dir),
bands=wfc3_bands)
d = Distance(785 * u.kpc)
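# 785 kpc is the adopted distance to M31; the corresponding distance modulus d.distmod is ~24.47 mag.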
builder = LibraryBuilder(isoc_dir, lib_dir,
nmag=len(wfc3_bands),
dmod=d.distmod.value,
iverb=3)
if not os.path.exists(builder.full_isofile_path):
builder.install()
from collections import namedtuple
from starfisher import Lockfile
from starfisher import Synth
from starfisher import ExtinctionDistribution
from starfisher import ExtantCrowdingTable
from starfisher import ColorPlane
from m31hst.phatast import PhatAstTable
if not os.path.exists(os.path.join(STARFISH, synth_dir)):
os.makedirs(os.path.join(STARFISH, synth_dir))
# No binning in our lockfile
lockfile = Lockfile(builder.read_isofile(), synth_dir, unbinned=False)
# Bin young isochrones
young_grid = np.linspace(6.5, 8.9, 5)
for i, logage0 in enumerate(young_grid[:-1]):
logage1 = young_grid[i + 1] - 0.05
z_str = "0019"
mean_age = (logage0 + logage1) / 0.2
name = "z{0}_{1:05.2f}".format(z_str, mean_age)
lockfile.lock_box(name, (logage0, logage1), (0.014, 0.025))
# Bin old isochrones
old_grid = np.arange(1e9, 14 * 1e9, 1e9)
for i, age0 in enumerate(old_grid[:-1]):
logage0 = np.log10(age0 - 0.05 * 1e9)
logage1 = np.log10(old_grid[i + 1])
z_str = "0019"
mean_age = (logage0 + logage1) / 0.2
name = "z{0}_{1:05.2f}".format(z_str, mean_age)
lockfile.lock_box(name, (logage0, logage1), (0.014, 0.025))
from starfisher.plots import plot_lock_polygons, plot_isochrone_logage_logzsol
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
plot_isochrone_logage_logzsol(ax, builder, c='k', s=8)
plot_lock_polygons(ax, lockfile, facecolor='None', edgecolor='r')
ax.set_xlim(6, 10.2)
ax.set_ylim(-0.2, 0.2)
ax.set_xlabel(r"$\log(A)$")
ax.set_ylabel(r"$\log(Z/Z_\odot)$")
fig.show()
# No extinction, yet
young_av = ExtinctionDistribution()
old_av = ExtinctionDistribution()
rel_extinction = np.ones(len(wfc3_bands), dtype=float)
for av in (young_av, old_av):
av.set_uniform(0.)
# Use PHAT AST from the outer field
crowd_path = os.path.join(synth_dir, "crowding.dat")
full_crowd_path = os.path.join(STARFISH, crowd_path)
tbl = PhatAstTable()
tbl.write_crowdfile_for_field(full_crowd_path, 0,
bands=('f110w', 'f160w'))
crowd = ExtantCrowdingTable(crowd_path)
# Define CMD planes
Lim = namedtuple('Lim', 'x y')
ir_lim = Lim(x=(0.3, 1.3), y=(24, 16.5))
ir_cmd = ColorPlane((wfc3_bands.index('F110W'),
wfc3_bands.index('F160W')),
wfc3_bands.index('F160W'),
ir_lim.x,
(min(ir_lim.y), max(ir_lim.y)),
28.,
suffix='f110f160',
x_label=r'$\mathrm{F110W}-\mathrm{F160W}$',
y_label=r'$\mathrm{F110W}$',
dpix=0.05)
ir_cmd.mask_region((-1., 0.), (22., 16))
ir_cmd.mask_region((0, 0.3), (22., 16))
ir_cmd.mask_region((0.3, 0.7), (20., 16))
ir_cmd.mask_region((0.7, 0.8), (19., 16))
ir_cmd.mask_region((0.8, 0.9), (18., 16))
ir_cmd.mask_region((1.1, 1.5), (28, 21))
colour_planes = [ir_cmd]
synth = Synth(synth_dir, builder, lockfile, crowd,
rel_extinction,
young_extinction=young_av,
old_extinction=old_av,
planes=colour_planes,
mass_span=(0.08, 150.),
nstars=10000000)
if len(glob(os.path.join(STARFISH, synth_dir, "z*"))) == 0:
synth.run_synth(n_cpu=4)
synth.plot_all_hess(os.path.join(STARFISH, synth_dir, 'hess'))
from astropy.table import Table
from m31hst import phat_v2_phot_path
if not os.path.exists(os.path.join(STARFISH, fit_dir)):
os.makedirs(os.path.join(STARFISH, fit_dir))
data_root = os.path.join(fit_dir, "b23ir.")
full_data_path = os.path.join(STARFISH, '{0}f110f160'.format(data_root))
brick_table = Table.read(phat_v2_phot_path(brick), format='fits')
# Only use stars within the fitting box
c = brick_table['f110w_vega'] - brick_table['f160w_vega']
m = brick_table['f160w_vega']
sel = np.where((c > min(ir_lim.x)) & (c < max(ir_lim.x)) &
(m > min(ir_lim.y)) & (m < max(ir_lim.y)))[0]
brick_table = brick_table[sel]
print("Fitting {0:d} stars".format(len(brick_table)))
if not os.path.exists(full_data_path):
phot_dtype = np.dtype([('x', np.float), ('y', np.float)])
photdata = np.empty(len(brick_table), dtype=phot_dtype)
photdata['x'][:] = brick_table['f110w_vega'] - brick_table['f160w_vega']
photdata['y'][:] = brick_table['f160w_vega']
np.savetxt(full_data_path, photdata, delimiter=' ', fmt='%.4f')
from androcmd.plot import contour_hess
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
contour_hess(ax, brick_table['f110w_vega'] - brick_table['f160w_vega'],
brick_table['f160w_vega'], ir_lim.x, (max(ir_lim.y), min(ir_lim.y)),
plot_args={'ms': 3})
ir_cmd.plot_mask(ax)
ax.set_xlabel(r'$\mathrm{F110W}-\mathrm{F160W}$')
ax.set_ylabel(r'$\mathrm{F110W}$')
ax.set_xlim(ir_lim.x)
ax.set_ylim(ir_lim.y)
fig.show()
from starfisher import SFH, Mask
mask = Mask(colour_planes)
sfh = SFH(data_root, synth, mask, fit_dir)
if not os.path.exists(sfh.full_outfile_path):
sfh.run_sfh()
sfh_table = sfh.solution_table()
from starfisher.sfhplot import LinearSFHCirclePlot, SFHCirclePlot
fig = plt.figure(figsize=(9, 5))
ax_log = fig.add_subplot(121)
ax_lin = fig.add_subplot(122)
cp = SFHCirclePlot(sfh_table)
cp.plot_in_ax(ax_log, max_area=800)
for logage in np.log10(np.arange(1, 13, 1) * 1e9):
ax_log.axvline(logage, c='0.8', zorder=-1)
ax_log.set_ylim(-0.2, 0.2)
cp = LinearSFHCirclePlot(sfh_table)
cp.plot_in_ax(ax_lin, max_area=800)
for tl in ax_lin.get_ymajorticklabels():
tl.set_visible(False)
ax_lin.set_ylabel("")
ax_lin.set_ylim(-0.2, 0.2)
fig.show()
import cubehelix
cmapper = lambda: cubehelix.cmap(startHue=240,endHue=-300,minSat=1,maxSat=2.5,
minLight=.3,maxLight=.8,gamma=.9)
from starfisher.sfhplot import ChiTriptykPlot
fig = plt.figure(figsize=(10, 6))
ctp = ChiTriptykPlot(sfh.full_chi_path, 1, ir_cmd.x_span, ir_cmd.y_span,
ir_cmd.dpix, ir_cmd.x_label, ir_cmd.y_label,
flipy=True)
ax_obs, ax_mod, ax_chi = ctp.setup_axes(fig)
ctp.plot_obs_in_ax(ax_obs, cmap=cmapper())
ctp.plot_mod_in_ax(ax_mod, cmap=cmapper())
ctp.plot_chi_in_ax(ax_chi, cmap=cubehelix.cmap())
ax_obs.text(0.0, 1.01, "Observed", transform=ax_obs.transAxes, size=8, ha='left')
ax_mod.text(0.0, 1.01, "Model", transform=ax_mod.transAxes, size=8, ha='left')
ax_chi.text(0.0, 1.01, r"$\log \chi^2$", transform=ax_chi.transAxes, size=8, ha='left')
fig.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download Isochrones
Step2: Build the Isochrone Library and Synthesize CMD planes
Step3: Here we visualize the isochrone bins in $\log(\mathrm{age})$ space.
Step4: Export the dataset for StarFISH
Step5: Run StarFISH SFH
Step6: Visualization of the SFH
Step7: Comparison of Observed and Modelled CMDs
|
15,142 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import importlib
import os
import sys
from elasticsearch import Elasticsearch
from skopt.plots import plot_objective
# project library
sys.path.insert(0, os.path.abspath('..'))
import qopt
importlib.reload(qopt)
from qopt.notebooks import evaluate_mrr100_dev, optimize_query_mrr100
from qopt.optimize import Config
# use a local Elasticsearch or Cloud instance (https://cloud.elastic.co/)
# es = Elasticsearch('http://localhost:9200')
es = Elasticsearch('http://35.234.93.126:9200')
# set the parallelization parameter `max_concurrent_searches` for the Rank Evaluation API calls
# max_concurrent_searches = 10
max_concurrent_searches = 30
index = 'msmarco-document'
template_id = 'cross_fields'
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id,
params={
'operator': 'OR',
'minimum_should_match': 50, # in percent/%
'tie_breaker': 0.0,
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
})
%%time
_ = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'grid',
'space': {
'operator': ['OR', 'AND'],
'minimum_should_match': [30, 40, 50, 60, 70],
},
'default': {
'tie_breaker': 0.0,
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
}
}))
%%time
_, _, final_params_msmtb, _ = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'grid',
'space': {
'minimum_should_match': [30, 40, 50, 60, 70],
'tie_breaker': [0.0, 0.25, 0.5, 0.75, 1.0],
},
'default': {
'operator': 'OR',
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
}
}))
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_msmtb)
final_params_msmtb
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 50,
'num_initial_points': 25,
'space': {
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': final_params_msmtb,
}))
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
_ = plot_objective(metadata_boosts, sample_source='result')
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 75,
'num_initial_points': 30,
'space': {
'minimum_should_match': { 'low': 30, 'high': 70 },
'tie_breaker': { 'low': 0.0, 'high': 1.0 },
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': {
'operator': 'OR',
}
}),
verbose=False)
_ = plot_objective(metadata_boosts, sample_source='result')
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 100,
'num_initial_points': 40,
'space': {
'minimum_should_match': { 'low': 40, 'high': 60 }, # 50 +/- 10
'tie_breaker': { 'low': 0.1, 'high': 0.4 }, # 0.25 +/- 0.15
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': {
'operator': 'OR',
}
}),
verbose=False)
_ = plot_objective(metadata_boosts, sample_source='result')
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'random',
'num_iterations': 75,
'space': {
'minimum_should_match': { 'low': 30, 'high': 70 },
'tie_breaker': { 'low': 0.0, 'high': 1.0 },
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': {
'operator': 'OR',
}
}),
verbose=False)
_ = plot_objective(metadata_boosts, sample_source='result')
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Baseline evaluation
Step2: Now we have a baseline score that we'll iterate towards improving.
Step3: The first thing we notice is that there's really not much difference between the variants of minimum_should_match. There is, however, a very big difference between the OR and AND operators. It's pretty clear that with the kinds of queries we get in this dataset (long-form, natural language questions), OR is always better than AND. Based on this we're going to just assume that OR is always the better option and we'll continue to look for a good minimum_should_match. Let's do that in combination with tie_breaker now since those two parameters can have an impact on each other. We'll start simple again with a grid search over a limited number of parameter values for each, all of which are discrete values. With two dimensions and five parameter values each, we have a parameter space of size 25 and can test every possible value in a reasonable amount of time.
Step4: Well that looks pretty good and we see some improvements on the training set. Let's evaluate on the development dataset now using the best parameters we've found so far. This will show us where we are relative to the baseline query.
Step5: Definitely a good improvement and all we've done is optimize a few basic query parameters!
Step6: Great! It looks like we made an improvement over both the baseline and the previous best parameters found. This example shows how important it is not to tune field boosts manually, as there is no intuitive relationship between the boost values of fields.
Step7: Experiment
Step8: Ok, so not a big difference to the step-wise method we used above, but maybe it was a bit simpler to just throw in a huge parameter space.
Step9: Looks like we did about the same as the other methods in terms of MRR@100. In terms of simplicity though, this approach definitely wins as we can throw all the parameters in at once and not have to think too much about order and parameter dependencies.
|
15,143 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import rbf_kernel
line = np.linspace(-3, 3, 100)[:, np.newaxis]
kernel_value = rbf_kernel(line, [[0]], gamma=1)
plt.plot(line, kernel_value)
from figures import plot_svm_interactive
plot_svm_interactive()
from sklearn import datasets
digits = datasets.load_digits()
X, y = digits.data, digits.target
# split the dataset, apply grid-search
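# One possible solution sketch for the exercise above. The parameter grid is hypothetical,
# and it assumes a scikit-learn version where these utilities live in sklearn.model_selection:
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.0001, 0.001, 0.01, 0.1]}
grid = GridSearchCV(SVC(kernel='rbf'), param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))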
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The rbf kernel has an inverse bandwidth parameter gamma, where a large gamma means a very localized influence for each data point, and a small gamma means a much smoother, more global influence.
Step2: Exercise
|
15,144 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os, sys, time, gzip
import pickle as pkl
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix, issparse
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from tools import calc_metrics, diversity, pairwise_distance_hamming, softmax
np.seterr(all='raise')
TOPs = [5, 10, 20, 30, 50, 100, 200, 300, 500, 700, 1000]
datasets = ['aotm2011', '30music']
dix = 1
dataset_name = datasets[dix]
dataset_name
data_dir = 'data/%s/coldstart/setting1' % dataset_name
X_trndev = pkl.load(gzip.open(os.path.join(data_dir, 'X_trndev.pkl.gz'), 'rb'))
Y_trndev = pkl.load(gzip.open(os.path.join(data_dir, 'Y_trndev.pkl.gz'), 'rb'))
X_test = pkl.load(gzip.open(os.path.join(data_dir, 'X_test.pkl.gz'), 'rb'))
Y_test = pkl.load(gzip.open(os.path.join(data_dir, 'Y_test.pkl.gz'), 'rb'))
songs1 = pkl.load(gzip.open(os.path.join(data_dir, 'songs_train_dev_test_s1.pkl.gz'), 'rb'))
train_songs = songs1['train_song_set']
dev_songs = songs1['dev_song_set']
test_songs = songs1['test_song_set']
song2index_trndev = {sid: ix for ix, (sid, _) in enumerate(train_songs + dev_songs)}
song2index_test = {sid: ix for ix, (sid, _) in enumerate(test_songs)}
index2song_test = {ix: sid for ix, (sid, _) in enumerate(test_songs)}
_song2artist = pkl.load(gzip.open('data/msd/song2artist.pkl.gz', 'rb'))
song2artist = {sid: _song2artist[sid] for sid, _ in train_songs + dev_songs + test_songs if sid in _song2artist}
all_playlists = pkl.load(gzip.open(os.path.join(data_dir, 'playlists_s1.pkl.gz'), 'rb'))
artist2pop = dict()
test_songset = set(test_songs)
for pl, _ in all_playlists:
for sid in [sid for sid in pl if sid not in test_songset]:
if sid in song2artist:
aid = song2artist[sid]
try:
artist2pop[aid] += 1
except KeyError:
artist2pop[aid] = 1
song2genre = pkl.load(gzip.open('data/msd/song2genre.pkl.gz', 'rb'))
cliques_all = pkl.load(gzip.open(os.path.join(data_dir, 'cliques_trndev.pkl.gz'), 'rb'))
U = len(cliques_all)
pl2u = np.zeros(Y_test.shape[1], dtype=np.int32)
for u in range(U):
clq = cliques_all[u]
pl2u[clq] = u
song2pop = pkl.load(gzip.open(os.path.join(data_dir, 'song2pop.pkl.gz'), 'rb'))
Y_test.shape
X_trndev.shape
Y_trndev.shape
M, N = Y_trndev.shape
D = 80
C = 1
n_sweeps = 200
np.random.seed(0)
S = np.random.rand(M, D)
P = np.random.rand(N, D)
# alternating least squares
for sweep in range(n_sweeps):
# fix S, optimise P
SS = np.dot(S.T, S) # D by D
np.fill_diagonal(SS, C + SS.diagonal())
P_new = np.dot(Y_trndev.transpose().dot(S), np.linalg.inv(SS).T) # N by D
pdiff = (P_new - P).ravel()
P = P_new
# fix P, optimise S
PP = np.dot(P.T, P) # D by D
np.fill_diagonal(PP, C + PP.diagonal())
S_new = np.dot(Y_trndev.dot(P), np.linalg.inv(PP).T) # M by D
sdiff = (S_new - S).ravel()
S = S_new
print('P diff: {:8.6f}, S diff: {:8.6f}'.format(np.sqrt(pdiff.dot(pdiff)), np.sqrt(sdiff.dot(sdiff))))
Y_trndev_coo = Y_trndev.tocoo()
loss = 0.
for row, col in tqdm(zip(Y_trndev_coo.row, Y_trndev_coo.col)):
diff = S[row, :].dot(P[col, :]) - 1
loss += diff * diff
loss /= Y_trndev_coo.nnz
print('RMSE:', np.sqrt(loss))
rps = []
hitrates = {top: [] for top in TOPs}
aucs = []
spreads = []
novelties = {top: dict() for top in TOPs}
artist_diversities = {top: [] for top in TOPs}
genre_diversities = {top: [] for top in TOPs}
np.random.seed(0)
npos = Y_test.sum(axis=0).A.reshape(-1)
assert Y_test.shape[0] == len(test_songs)
for j in range(Y_test.shape[1]):
if (j+1) % 100 == 0:
sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1]))
sys.stdout.flush()
if npos[j] < 1:
continue
y_true = Y_test[:, j].A.reshape(-1)
y_pred = np.zeros(len(test_songs))
for ix in range(len(test_songs)):
sid = index2song_test[ix]
# map song feature to song latent factor
# score (song, playlist) pair by the dot product of their latent factors
rp, hr_dict, auc = calc_metrics(y_true, y_pred, tops=TOPs)
rps.append(rp)
for top in TOPs:
hitrates[top].append(hr_dict[top])
aucs.append(auc)
# spread
y_pred_prob = softmax(y_pred)
spreads.append(-np.dot(y_pred_prob, np.log(y_pred_prob)))
# novelty
sortix = np.argsort(-y_pred)
u = pl2u[j]
for top in TOPs:
nov = np.mean([-np.log2(song2pop[index2song_test[ix]]) for ix in sortix[:top]])
try:
novelties[top][u].append(nov)
except KeyError:
novelties[top][u] = [nov]
# artist/genre diversity
for top in TOPs:
artist_vec = np.array([song2artist[index2song_test[ix]] for ix in sortix[:top]])
genre_vec = np.array([song2genre[index2song_test[ix]] if index2song_test[ix] in song2genre \
else str(np.random.rand()) for ix in sortix[:top]])
artist_diversities[top].append( diversity(artist_vec) )
genre_diversities[top].append( diversity(genre_vec) )
print('\n%d / %d' % (len(rps), Y_test.shape[1]))
perf = {dataset_name: {'Test': {'R-Precision': np.mean(rps),
'Hit-Rate': {top: np.mean(hitrates[top]) for top in TOPs},
'AUC': np.mean(aucs),
'Spread': np.mean(spreads),
'Novelty': {t: np.mean([np.mean(novelties[t][u]) for u in novelties[t]])
for t in TOPs},
'Artist-Diversity': {top: np.mean(artist_diversities[top]) for top in TOPs},
'Genre-Diversity': {top: np.mean(genre_diversities[top]) for top in TOPs}},
'Test_All': {'R-Precision': rps,
'Hit-Rate': {top: hitrates[top] for top in TOPs},
'AUC': aucs,
'Spread': spreads,
'Novelty': novelties,
'Artist-Diversity': artist_diversities,
'Genre-Diversity': genre_diversities}}}
perf[dataset_name]['Test']
fperf = os.path.join(data_dir, 'perf-mfcnn.pkl')
print(fperf)
pkl.dump(perf, open(fperf, 'wb'))
pkl.load(open(fperf, 'rb'))[dataset_name]['Test']
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matrix Factorisation
Step2: Let $S \in \mathbb{R}^{M \times D}$ and $P \in \mathbb{R}^{N \times D}$ be the latent factors of songs and playlists, respectively, and let $Y \in \mathbb{R}^{M \times N}$ be the song-playlist membership matrix.
Step3: Sanity check, RMSE
Step4: Map song features to song latent factors
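The feature-to-factor mapping in Step4 is left as comments in the prediction loop above. A minimal sketch, assuming dense feature matrices and a simple ridge-regression map from song features to the ALS song factors (the actual notebook may use a CNN for this step, as the mfcnn name suggests):
lam = 1.0  # assumed regularisation strength
W_map = np.linalg.solve(X_trndev.T @ X_trndev + lam * np.eye(X_trndev.shape[1]),
                        X_trndev.T @ S)  # (n_features, D)
S_test = X_test @ W_map                  # latent factors for the test songs
scores = S_test @ P.T                    # (test song, playlist) scores via dot products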
|
15,145 | <ASSISTANT_TASK:>
Python Code:
import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.global_variables_initializer())
y = tf.matmul(x,W) + b
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
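# Note: applying softmax and then log explicitly like this can be numerically unstable;
# the tf.nn.softmax_cross_entropy_with_logits form used for the first model is the safer equivalent.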
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.global_variables_initializer())
for i in range(20000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g" % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
# print ("test accuracy %g" % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
# Tensorflow throw OOM error if evaluate accuracy at once and memory is not enough
cross_accuracy = 0
for i in range(100):
testSet = mnist.test.next_batch(50)
each_accuracy = accuracy.eval(feed_dict={ x: testSet[0], y_: testSet[1], keep_prob: 1.0})
cross_accuracy += each_accuracy
print("test %d accuracy %g" % (i,each_accuracy))
print("test average accuracy %g" % (cross_accuracy/100,))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here, mnist is a lightweight class. It stores the training, validation and test sets as NumPy arrays, and it also provides a function for iterating over the data in mini-batches, which we will use below.
Step2: The computation graph
Step3: Here x and y_ are not specific values; they are placeholders, and we supply concrete values for them when we ask TensorFlow to run a computation.
Step4: We pass initial values when calling tf.Variable. In this example we initialise both W and b to zeros. W is a 784x10 matrix (because we have 784 input features and 10 outputs) and b is a 10-dimensional vector (because we have 10 classes).
Step5: Class prediction and loss function
Step6: We can specify a loss function to indicate how inaccurate the model's prediction of an instance is; we want to minimise it over the whole training process. Here our loss function is the cross-entropy between the target class and the predicted class. As in the beginner tutorial, we use the numerically stable formulation:
Step7: Note that tf.nn.softmax_cross_entropy_with_logits implicitly applies softmax to the model's unnormalised predictions and sums across all classes, and tf.reduce_mean takes the average over these sums.
Step8: This line actually adds new operations to the computation graph, including operations to compute gradients, compute the update step for each parameter, and apply the updated values to the parameters.
Step9: In each training iteration we load 50 training examples and run train_step once, using feed_dict to replace the placeholder tensors x and y_ with the training data.
Step10: This returns an array of booleans. To compute the classification accuracy we cast the booleans to floats representing correct/incorrect and take the mean. For example, [True, False, True, True] becomes [1,0,1,1], whose mean is 0.75.
Step11: Finally, we can compute the accuracy on the test data, which is roughly 92%.
Step12: Build a multi-layer convolutional network
Step13: Convolution and pooling
Step14: First convolutional layer
Step15: To use this layer, we reshape x into a 4-D tensor whose second and third dimensions correspond to image width and height, and whose last dimension is the number of colour channels (1 here because the images are greyscale; it would be 3 for RGB).
Step16: We convolve x_image with the weight tensor, add the bias, apply the ReLU activation, and finally max-pool. The max_pool_2x2 operation reduces the image to 14x14.
Step17: Second convolutional layer
Step18: Densely connected layer
Step19: Dropout
Step20: Output layer
Step21: Train and evaluate the model
|
15,146 | <ASSISTANT_TASK:>
Python Code:
words = 'The quick brown fox jumps over the lazy dog'.split()
print words
stuff = []
for w in words:
stuff.append([w.upper(), w.lower(), len(w)])
for i in stuff:
print i
stuff = map(lambda w: [w.upper(), w.lower(), len(w)],words)
for i in stuff: print i
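# The same `stuff` can be built with a list comprehension, which is usually considered
# more idiomatic than map + lambda; the output is identical, shown here for comparison:
stuff = [[w.upper(), w.lower(), len(w)] for w in words]
for i in stuff: print i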
sentence = "It's a myth that there are no words in English without vowels."
vowels = 'aeiou'
result = filter(lambda x: x not in vowels, sentence)
print result
print reduce(lambda x,y: x*y, [47,11,42,102,13])
# note that you can improve the speed of the calculation using built-in functions
# or better still: using the numpy module
from operator import mul
import numpy as np
a = range(1, 101)
print "\nreduce(lambda x, y: x * y, a)"
%timeit reduce(lambda x, y: x * y, a) # (1)
print "\nreduce(mul, a)"
%timeit reduce(mul, a) # (2)
print "\nnp.prod(a)"
a = np.array(a)
%timeit np.prod(a) # (3)
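# Caveat on (3): np.prod works with fixed-width integers, so for a = 1..100 it silently
# overflows; 100! contains 2**97 as a factor, so the wrapped result is 0 rather than the
# exact Python long returned by the two reduce() variants.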
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use list comprehension and lambda/map function to define <b>stuff</b>.
Step2: <u>Problem 2</u>
Step3: <u>Problem 3</u>
|
15,147 | <ASSISTANT_TASK:>
Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "linear" # "linear", "dnn", "dnn_dropout", or "cnn"
# do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "1.13" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
rm -rf mnistmodel.tar.gz mnist_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
-- \
--output_dir=${PWD}/mnist_trained \
--train_steps=100 \
--learning_rate=0.01 \
--model=$MODEL_TYPE
%%bash
OUTDIR=gs://${BUCKET}/mnist/trained_${MODEL_TYPE}
JOBNAME=mnist_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE --batch_norm
from google.datalab.ml import TensorBoard
TensorBoard().start("gs://{}/mnist/trained_{}".format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()["pid"]:
TensorBoard().stop(pid)
print("Stopped TensorBoard with pid {}".format(pid))
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/mnist/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
import json, codecs
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
# Get mnist data
mnist = tf.keras.datasets.mnist
(_, _), (x_test, _) = mnist.load_data()
# Scale our features between 0 and 1
x_test = x_test / 255.0
IMGNO = 5 # CHANGE THIS to get different images
jsondata = {"image": x_test[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open("test.json", 'w', encoding = "utf-8"))
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
%%bash
gcloud ml-engine predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run as a Python module
Step2: Now, let's do it on Cloud ML Engine so we can train on GPU
Step3: Monitoring training with TensorBoard
Step4: Here are my results
Step5: To predict with the model, let's take one of the example images.
Step6: Send it to the prediction service
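Step6 uses the gcloud CLI; an equivalent online-prediction call from Python is sketched below, assuming the googleapiclient package is installed and the environment is already authenticated for the project:
from googleapiclient import discovery

service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT, 'mnist', MODEL_TYPE)
response = service.projects().predict(name=name, body={'instances': [jsondata]}).execute()
print(response)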
|
15,148 | <ASSISTANT_TASK:>
Python Code:
n = int(input('请输入你想取的整数的个数'))
m = int(input('请输入取整范围的下限'))
k = int(input('请输入取整范围的上限'))
import random,math
def get_num():
number = random.randint(m,k)
return number
i = 0
total = 0
while i < n:
total = total + get_num()
i = i + 1
print(get_num())
average = total/n
math.sqrt(average)
n = int(input('请输入随机整数个数'))
m = int(input('请输入取整范围的下限,不为1'))
k = int(input('请输入取整范围的上限'))
import random,math
def get_num():
number = random.randint(m,k)
return number
i = 0
total = 0
while i < n:
total = total + math.log(get_num())
i = i + 1
j = 0
total_n = 0
while j < n:
total_n = total_n + 1/math.log(get_num())
j = j + 1
print(total)
print(total_n)
import math,random
n = int(input('请输入相加数字的个数,回车结束'))
i = 0
total = 0
b = random.randint(1,9)
a = b
c = b
print('a=',a)
while i < n:
total= total + a
i = i + 1
a = a + c*10**i
print('a + aa + aaa + aaaa....+a...a的值是:',total)
import math,random
def win():
print('Congratulations!! You win')
def lose():
print('I am sorry.~~ You lose. You can try again')
def over():
print('Game over!')
def show_team():
print('Cindy.Zhang')
def show_instruction():
print('请玩家任意输入一个数字,由计算机来猜测。每猜测一次,玩家来判断数字大或小或等于输入数字,直到猜测次数用尽(猜测次数不定)。')
def menu():
print('''=====游戏菜单=====
1. 游戏说明
2. 开始游戏
3. 退出游戏
4. 制作团队
=====游戏菜单=====''')
def inner_game1():
while judge == '大了':
print('计算机继续猜测')
guess = random.randint(0,guess)
print(guess)
def inner_game2():
while judge == '小了':
print('计算机继续猜测')
guess = random.randint(guess,n)
print(guess)
def guess_game():
n = int(input('请输入一个大于零的整数作为神秘数字的上界,以回车结束。'))
number = int(input('请输入一个整数,位于零到上界内,作为神秘数字,回车结束。'))
max_times = math.ceil(math.log(n,2))
guess_times = 0
while guess_times <= max_times:
guess = random.randint(0,n)
guess_times += 1
print('一共可以猜', max_times, '次')
print('计算机已经猜了', guess_times, '次')
print(guess)
judge = input('请玩家输入判断,如:大了、小了、正确')
if judge == '大了':
guess = random.randint(0,guess)
print(guess)
judge = input('请玩家输入判断,如:大了、小了、正确')
elif judge == '小了':
guess = random.randint(guess,n)
print(guess)
judge = input('请玩家输入判断,如:大了、小了、正确')
if judge == '正确':
win()
print('神秘数字是:', guess)
print('你比标准次数少', max_times-guess_times, '次')
break
else:
print('神秘数字是:', number)
lose()
def main():
while True:
menu()
choice = int(input('请输入你的选择'))
if choice == 1:
show_instruction()
elif choice == 2:
guess_game()
elif choice == 3:
over()
break
else:
show_team()
if __name__ == '__main__':
main()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2: Draw n random integers in the range m to k, and compute the sum of log(random integer) and the sum of 1/log(random integer).
Step2: Exercise 3: Write a function that computes s = a + aa + aaa + aaaa + ... + aa...a, where a is a random integer in [1, 9], e.g. 2 + 22 + 222 + 2222 + 22222 (five terms here); the number of terms is entered from the keyboard.
Step3: Challenge exercise: Following task5, turn the guessing game around so that the user picks an integer and the computer does the guessing, in a way similar to the human-guessing version of task5 but with the roles swapped: the human judges whether each guess is too high, too low, or correct. Write the complete guessing game.
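For comparison only, a compact sketch of the role-reversed game from Step3 that uses true bisection instead of random guessing; it reuses the 大了/小了/正确 responses from the code above, and the I/O flow is an illustration rather than the author's solution:
def computer_guess(n):
    low, high = 0, n
    while low <= high:
        guess = (low + high) // 2
        answer = input('I guess %d. Respond 大了/小了/正确: ' % guess)
        if answer == '正确':
            print('Got it!')
            return guess
        elif answer == '大了':
            high = guess - 1
        else:
            low = guess + 1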
|
15,149 | <ASSISTANT_TASK:>
Python Code:
#obj = ["3C 454.3", 343.49062, 16.14821, 1.0]
obj = ["PKS J0006-0623", 1.55789, -6.39315, 1.0]
#obj = ["M87", 187.705930, 12.391123, 1.0]
#### name, ra, dec, radius of cone
obj_name = obj[0]
obj_ra = obj[1]
obj_dec = obj[2]
cone_radius = obj[3]
obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs")
# Query data
data_2mass = Irsa.query_region(obj_coord, catalog="fp_psc", radius=cone_radius * u.deg)
data_wise = Irsa.query_region(obj_coord, catalog="allwise_p3as_psd", radius=cone_radius * u.deg)
__data_galex = Vizier.query_region(obj_coord, catalog='II/335', radius=cone_radius * u.deg)
data_galex = __data_galex[0]
num_2mass = len(data_2mass)
num_wise = len(data_wise)
num_galex = len(data_galex)
print("Number of object in (2MASS, WISE, GALEX): ", num_2mass, num_wise, num_galex)
# use only coordinate columns
ra_2mass = data_2mass['ra']
dec_2mass = data_2mass['dec']
c_2mass = coordinates.SkyCoord(ra=ra_2mass, dec=dec_2mass, unit=(u.deg, u.deg), frame="icrs")
ra_wise = data_wise['ra']
dec_wise = data_wise['dec']
c_wise = coordinates.SkyCoord(ra=ra_wise, dec=dec_wise, unit=(u.deg, u.deg), frame="icrs")
ra_galex = data_galex['RAJ2000']
dec_galex = data_galex['DEJ2000']
c_galex = coordinates.SkyCoord(ra=ra_galex, dec=dec_galex, unit=(u.deg, u.deg), frame="icrs")
####
sep_min = 1.0 * u.arcsec # minimum separation in arcsec
# Only 2MASS and WISE matching
#
idx_2mass, idx_wise, d2d, d3d = c_wise.search_around_sky(c_2mass, sep_min)
# select only one nearest if there are more in the search reagion (minimum seperation parameter)!
print("Only 2MASS and WISE: ", len(idx_2mass))
# from matching of 2 cats (2MASS and WISE) coordinate
data_2mass_matchwith_wise = data_2mass[idx_2mass]
data_wise_matchwith_2mass = data_wise[idx_wise] # WISE dataset
w1 = data_wise_matchwith_2mass['w1mpro']
j = data_2mass_matchwith_wise['j_m']
w1j = w1-j
cutw1j = -1.7 # https://academic.oup.com/mnras/article/448/2/1305/1055284
# WISE galaxy data -> from cut
galaxy = data_wise_matchwith_2mass[w1j < cutw1j]
print("Number of galaxy from cut W1-J:", len(galaxy))
w1j_galaxy = w1j[w1j<cutw1j]
w1_galaxy = w1[w1j<cutw1j]
plt.scatter(w1j, w1, marker='o', color='blue')
plt.scatter(w1j_galaxy, w1_galaxy, marker='.', color="red")
plt.axvline(x=cutw1j) # https://academic.oup.com/mnras/article/448/2/1305/1055284
# GALEX
###
# coord of object in 2mass which match wise (first objet/nearest in sep_min region)
c_2mass_matchwith_wise = c_2mass[idx_2mass]
c_wise_matchwith_2mass = c_wise[idx_wise]
#Check with 2mass cut
idx_2mass_wise_galex, idx_galex1, d2d, d3d = c_galex.search_around_sky(c_2mass_matchwith_wise, sep_min)
num_galex1 = len(idx_galex1)
#Check with wise cut
idx_wise_2mass_galex, idx_galex2, d2d, d3d = c_galex.search_around_sky(c_wise_matchwith_2mass, sep_min)
num_galex2 = len(idx_galex2)
print("Number of GALEX match in 2MASS cut (with WISE): ", num_galex1)
print("Number of GALEX match in WISE cut (with 2MASS): ", num_galex2)
# diff/average
print("Confusion level: ", abs(num_galex1 - num_galex2)/np.mean([num_galex1, num_galex2])*100, "%")
# Choose which one is smaller!
if num_galex1 < num_galex2:
select_from_galex = idx_galex1
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# 2MASS from GALEX_selected
_idx_galex1, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(c_selected_galex, sep_min)
match_2mass = data_2mass[_idx_2mass]
# WISE from 2MASS_selected
_ra_match_2mass = match_2mass['ra']
_dec_match_2mass = match_2mass['dec']
_c_match_2mass = coordinates.SkyCoord(ra=_ra_match_2mass, dec=_dec_match_2mass, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_wise, d2d, d3d = c_wise.search_around_sky(_c_match_2mass, sep_min)
match_wise = data_wise[_idx_wise]
else:
select_from_galex = idx_galex2
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# WISE from GALEX_selected
_idx_galex1, _idx_wise, d2d, d3d = c_wise.search_around_sky(c_selected_galex, sep_min)
match_wise = data_wise[_idx_wise]
# 2MASS from WISE_selected
_ra_match_wise = match_wise['ra']
_dec_match_wise = match_wise['dec']
_c_match_wise = coordinates.SkyCoord(ra=_ra_match_wise, dec=_dec_match_wise, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(_c_match_wise, sep_min)
match_2mass = data_2mass[_idx_2mass]
print("Number of match in GALEX: ", len(match_galex))
print("Number of match in 2MASS: ", len(match_2mass))
print("Number of match in WISE : ", len(match_wise))
joindata = np.array([match_2mass['j_m'], match_2mass['h_m'], match_2mass['k_m'],
match_wise['w1mpro'], match_wise['w2mpro'], match_wise['w3mpro'], match_wise['w4mpro'],
match_galex['NUVmag']])
joindata = joindata.T
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
X = scale(joindata)
pca = PCA(n_components=4)
X_r = pca.fit(X).transform(X)
print(pca.components_)
print(pca.explained_variance_)
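# The fraction of total variance captured by each component is often easier to read:
print(pca.explained_variance_ratio_)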
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC3
plt.scatter(X_r[:,0], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC4
plt.scatter(X_r[:,0], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC2 vs PC3
plt.scatter(X_r[:,1], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC2 vs PC4
plt.scatter(X_r[:,1], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC3 vs PC4
plt.scatter(X_r[:,2], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,2], X_r[i,3], marker=".", color="red")
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
X = scale(joindata)
db = DBSCAN(eps=1, min_samples=3).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
#print(labels)
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
## J vs W1 (scaled columns 0 and 3 of joindata)
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=8)
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.plot(X[i,3], X[i,0], marker="X", markerfacecolor='red', markeredgecolor='none', markersize=8)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
from sklearn.manifold import TSNE
X = scale(joindata)
X_r = TSNE(n_components=2).fit_transform(X)
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color="blue")
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker='.', color="red")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matching coordinates
Step2: Plot $W_1-J$ vs $W_1$
Step3: W1-J < -1.7 => galaxy
Step4: Collect relevant data
Step5: Analysis
Step6: DBSCAN
Step7: Plot $W_1$ vs $J$
Step8: t-SNE
|
15,150 | <ASSISTANT_TASK:>
Python Code:
# change this cell into a Markdown cell. Then type something here and execute it (Shift-Enter)
'''Make sure you are in "edit" mode and that this cell is for Coding ( You should see the In [ ]:)
on the left of the cell. '''
print("Hello world!")
# print your name in this cell.
# Addition
5+3
# Subtraction
8-9
# Multiplication
3*12
# Division
48/12
# Exponentiation. Limited precision though!
16**0.5
# Residue class modulo n
5%2
# Guess the output before executing this cell. Come on, don't cheat!
6%(1+3)
# Assignment
x=1
y=2
x+y
x/y
x=5
x+y-2
# For example
x = x+1
print(x)
# reset x to 5
x=5
x += 1
print(x)
x = 5
#What do you think the values of x will be for x -= 1, x *= 2 or x /= 2?
# Test it out in the space below
print(x)
0.1+0.2
# Noting the difference between printing quoted variables (strings) and printing the variable itself.
x = 5
print(x)
print('x')
my_name = 'Tang U-Liang'
print(my_name)
# String formatting: Using the %
age = 35
print('Hello doctor, my name is %s. I am %d years old. I weigh %.1f kg' % (my_name, age, 70.25))
# or using .format method
print("Hi, I'm {name}. Please register {name} for this conference".format(name=my_name))
fruit = 'Apple'
drink = 'juice'
print(fruit+drink) # concatenation
#Don't like the lack of spacing between words?
print(fruit+' '+drink)
print(fruit[0])
print(fruit[1])
favourite_drink = fruit+' '+drink
print("Printing the first to 3rd letter.")
print(favourite_drink[0:3])
print("\nNow I want to print the second to seventh letter:")
print(favourite_drink[1:7])
print(favourite_drink[0:7:2])
# Here's a trick, try this out
print(favourite_drink[3:0:-1])
# Write your answer here and check it with the output below
x = 5.0
type(x)
type(favourite_drink)
type(True)
type(500)
# Here's a list called staff containing his name, his age and current renumeration
staff = ['Andy', 28, 980.15]
len(staff)
staff[0]
# type your answer here and run the cell
staff[1:3]
nested_list = ['apples', 'banana', [1.50, 0.40]]
# Accesing items from within a nested list structure.
print(nested_list[2])
# Assigning nested_list[2] to a variable. The variable price represents a list
price = nested_list[2]
print(type(price))
# Getting the smaller of the two floats
print(nested_list[2][1])
# append
staff.append('Finance')
print(staff)
# pop away the information about his salary
andys_salary = staff.pop(2)
print(andys_salary)
print(staff)
# oops, made a mistake, I want to reinsert information about his salary
staff.insert(3, andys_salary)
print(staff)
contacts = [99993535, "andy@company.com"]
staff = staff+contacts # reassignment of the concatenated list back to staff
print(staff)
staff = ['Andy', 28, 'Finance', 980.15, 99993535, 'andy@company.com']
staff
# type your answer here
print(staff)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.2 Your first script
Step2: Notice that Hello world! is printed at the bottom of the cell as an output. In general, this is how output of a python code is displayed to you.
Step3: 2.3 Commenting
Step4: Note the floating point answer. In previous versions of Python, / meant floor division. This is no longer the case in Python 3
Step5: In the above 5%2 means return me the remainder after 5 is divided by 2 (which is indeed 1).
Step6: It is interesting to note that the % operator is not distributive.
Step7: Notice that after assignment, I can access the variables in a different cell. However, if you reassign a variable to a different number, the old values for that variable are overwritten.
Step8: Now try clicking back to the cell x+y and re-executing it. What do you think the answer will be?
Step9: So what happened here? Well, if we recall x originally was assigned 5. Therefore x+1 would give us 6. This value is then reassigned to the exact same location in memory represented by the variable x. So now that piece of memory contains the value 6. We then use the print function to display the content of x.
Step10: 3.1.3 Floating point precision
Step11: The following excerpt from the Python documentation explains what is happening quite clearly.
Step12: In the second print function, the text 'x' is printed while in the first print function, it is the contents of x which is printed to the console.
Step13: When using % to indicate string substitution, take note of the common formatting "placeholders"
Step14: Use [] to access specific letters in the string. Python uses 0 indexing. So the first letter is accessed by my_string[0] while my_string[1] accesses the second letter.
Step15: Slicing is a way of getting specific subsets of the string. If you let $x_n$ denote the $n+1$-th letter (note zero indexing) in a string (and by letter this includes whitespace characters as well!) then writing my_string[i:j] returns the substring $x_i x_{i+1} \ldots x_{j-1}$, i.e. everything from position i up to but not including position j.
Step16: Notice the use of \n in the second print function. This is called a newline character, which does exactly what its name says. Also, in the third print function, notice the apparent separation between e and j. They are not actually separated; the sixth letter is a whitespace character ' '.
Step17: So what happened above? Well, [3:0:-1] means start at index 3 and step backwards one letter at a time, stopping before index 0, which is why the letters come out in reverse order.
Step18: Answer
Step19: 4. list, here's where the magic begins
Step20: 4.1 Properties of list objects and indexing
Step21: Perhaps you want to recover that staff's name. It's in the first position of the list.
Step22: Notice that Python still outputs to console even though we did not use the print function. Actually the print function prints a particularly "nice" string representation of the object, which is why Andy is printed without the quotation marks if print was used.
Step23: The same slicing rules for strings apply to lists as well. If we wanted Andy's age and wage, we would type staff[1:3].
Step24: This returns us a sub-list containing Andy's age and remuneration.
Step25: Notice that if I type nested_list[2], Python will return me the list [1.50, .40]. This can be accessed again using indexing (or slicing notation) [ ].
Step26: 4.3 List methods
Step27: 4.3.1 Your first programming challenge
|
15,151 | <ASSISTANT_TASK:>
Python Code:
%%capture
import pandas as pd
pd.set_option("display.max_rows", 5)
from siuba import _
from siuba.data import mtcars
g_cyl = mtcars.groupby("cyl")
## Both snippets below raise an error.... :/
g_cyl.mpg + g_cyl.mpg
g_cyl.add(g_cyl.mpg)
# two ways to do it f_elwise
ser_mpg2 = mtcars.mpg + mtcars.mpg
ser_mpg2 = g_cyl.mpg.obj + g_cyl.mpg.obj
# doing grouped aggregate
g_cyl.mpg.mean()
degroup = lambda ser: getattr(ser, "obj", ser)
f_add = lambda x, y: degroup(x) + degroup(y)
f_add(g_cyl.mpg, f_add(g_cyl.mpg, 1))
from pandas.core import algorithms
def broadcast_agg_result(grouper, result, obj):
# Simplified broadcasting from g_cyl.mpg.transform('mean')
ids, _, ngroup = grouper.group_info
out = algorithms.take_1d(result._values, ids)
return pd.Series(out, index=obj.index, name=obj.name)
f_mean = lambda x: broadcast_agg_result(x.grouper, x.mean(), degroup(x))
f_add(f_mean(g_cyl.mpg), f_mean(g_cyl.hp))
f_add(g_cyl.mpg, f_add(f_mean(g_cyl.mpg), f_mean(g_cyl.hp)))
from pandas.core.groupby import SeriesGroupBy
from pandas.core import algorithms
# Define Agg Result ----
def create_agg_result(ser, orig_object, orig_grouper):
# since pandas groupby method is hard-coded to create a SeriesGroupBy, mock
# AggResult below by making it a SeriesGroupBy whose grouper has 2 extra attributes
obj = ser.groupby(ser.index)
obj.grouper.orig_grouper = orig_grouper
obj.grouper.orig_object = orig_object
return obj
def is_agg_result(x):
return hasattr(x, "grouper") and hasattr(x.grouper, "orig_grouper")
# Handling Grouped Operations ----
def regroup(ser, grouper):
return ser.groupby(grouper)
def degroup(ser):
# returns tuple of (Series or literal, Grouper or None)
# because we can't rely on type checking, use hasattr instead
return getattr(ser, "obj", ser), getattr(ser, "grouper", None)
def f_mean(x):
# SeriesGroupBy -> AggResult
return create_agg_result(x.mean(), x.obj, x.grouper)
def broadcast_agg_result(g_ser, compare=None):
"""Returns a tuple of (Series, final op grouper)."""
if not isinstance(g_ser, SeriesGroupBy):
return g_ser, compare.grouper
# NOTE: now only applying for agg_result
if not is_agg_result(g_ser):
return degroup(g_ser)
if g_ser.grouper.orig_grouper is compare.grouper:
orig = g_ser.grouper.orig_object
grouper = g_ser.grouper.orig_grouper
# Simplified broadcasting from g_cyl.mpg.transform('mean') implementation
ids, _, ngroup = grouper.group_info
out = algorithms.take_1d(g_ser.obj._values, ids)
return pd.Series(out, index=orig.index, name=orig.name), grouper
return degroup(g_ser)
# Define operations ----
def f_add(x, y):
# SeriesGroupBy, SeriesGroupBy -> ""
broad_x, grouper = broadcast_agg_result(x, y)
broad_y, __ = broadcast_agg_result(y, x)
res = broad_x + broad_y
return regroup(res, grouper)
grouped_agg = f_add(f_mean(g_cyl.mpg), f_mean(g_cyl.hp))
# Notice, only 1 result per group
grouped_agg.obj
grouped_mutate = f_add(g_cyl.mpg, grouped_agg)
grouped_mutate.obj
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How are grouped operations currently handled in pandas?
Step2: What about composing f_elwise and f_agg operations?
Step3: Also, as noted in the first section, we are returning a Series here, but functions returning a SeriesGroupBy should also be compatible (so long as we enforce liskov substitution..).
Step4: Notice we can keep going with this, since
Step6: However, there are two problems here...
|
15,152 | <ASSISTANT_TASK:>
Python Code:
# Numeral 1
# Importar librerías necesarias
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Definimos funcion mu
def mu(x, r):
return r*(1-x)
# Definimos conjunto de valores en x
x = np.linspace(0, 1.2, 50)
# Valor del parametro solicitado
r = 1
# Conjunto de valores en y
y = mu(x, r)
# Graficamos
plt.figure(figsize=(6,4))
plt.plot(x, y, 'r')
plt.xlabel('Poblacion $x$')
plt.ylabel('$\mu(x)$')
plt.grid()
plt.show()
# Numeral 2
# Importamos librería para solución numérica de ecuaciones diferenciales
from scipy.integrate import odeint
# Definimos la función que nos pide odeint
def logistica(x, t):
return r*(1-x)*x
x0 = 0.1 # Condición inicial
tt = np.linspace(0, 10) # Vector de tiempo
xx = odeint(logistica, x0, tt) # Solución numérica
# Graficamos solución
plt.figure(figsize=(6,4))
plt.plot(tt, xx, '--y', linewidth = 3)
plt.xlabel('Tiempo $t$')
plt.ylabel('Poblacion $x(t)$')
plt.grid()
plt.show()
# Numeral 3
plt.figure(figsize=(6,4))
for r in np.arange(-1,1.1,0.5):
xx = odeint(logistica, x0, tt)
plt.plot(tt, xx, linewidth = 3, label = 'r=%f'%r)
plt.xlabel('Tiempo $t$')
plt.ylabel('Poblacion $x(t)$')
plt.legend(loc='center left', bbox_to_anchor=(1.05,0.5))
plt.grid()
plt.show()
# importar librerías
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# definimos modelo lotka volterra
def lotka_volterra(x, t):
x1 = x[0]
x2 = x[1]
return [x1*(3-x1-2*x2), x2*(2-x2-x1)]
# primer condición inicial
x0 = [0.5, 1]
tt = np.linspace(0, 10, 100)
# solucion numerica
xx = odeint(lotka_volterra, x0, tt)
xx.shape
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Población')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
# segunda condición inicial
x0 = [1, 0.5]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Población')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
# tercer condición inicial
x0 = [1.5, 1]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Población')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
# cuarta condición inicial
x0 = [1, 1.5]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Población')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
# cuarta condición inicial
x0 = [1, 1]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Población')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From the logistic equation, $\mu(x)$ represents a population growth rate. The plot shows that this rate is at its maximum when the population is small and drops to zero when the population reaches its cap.
Step2: Under these conditions, the numerical solution shows that the population tends to its carrying capacity as $t\to\infty$.
Step3: From the plot we can infer that the population grows to its carrying capacity for $r>0$, goes extinct for $r<0$, and remains constant for $r=0$.
Step4: With these initial conditions, the rabbits go extinct and the sheep reach their maximum population capacity.
Step5: With these initial conditions, the sheep go extinct and the rabbits reach their maximum population capacity.
Step6: With these initial conditions, the sheep go extinct and the rabbits reach their maximum population capacity.
Step7: With these initial conditions, the rabbits go extinct and the sheep reach their maximum population capacity (a short note on the underlying equilibria follows this list).
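A quick way to see why the outcome depends on the starting point, working directly from the equations defined in the code: the fixed points satisfy $x_1(3 - x_1 - 2x_2) = 0$ and $x_2(2 - x_2 - x_1) = 0$, giving $(0,0)$, $(3,0)$, $(0,2)$ and the coexistence point $(1,1)$. The Jacobian at $(1,1)$ has determinant $-1 < 0$, so $(1,1)$ is a saddle; almost every initial condition is therefore pushed toward one of the exclusion equilibria, $(3,0)$ (rabbits only) or $(0,2)$ (sheep only), which is exactly the behaviour reported in Steps 4 to 7.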
|
15,153 | <ASSISTANT_TASK:>
Python Code:
import math
from pomegranate import *
guest = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } )
prize = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } )
monty = ConditionalProbabilityTable(
[[ 'A', 'A', 'A', 0.0 ],
[ 'A', 'A', 'B', 0.5 ],
[ 'A', 'A', 'C', 0.5 ],
[ 'A', 'B', 'A', 0.0 ],
[ 'A', 'B', 'B', 0.0 ],
[ 'A', 'B', 'C', 1.0 ],
[ 'A', 'C', 'A', 0.0 ],
[ 'A', 'C', 'B', 1.0 ],
[ 'A', 'C', 'C', 0.0 ],
[ 'B', 'A', 'A', 0.0 ],
[ 'B', 'A', 'B', 0.0 ],
[ 'B', 'A', 'C', 1.0 ],
[ 'B', 'B', 'A', 0.5 ],
[ 'B', 'B', 'B', 0.0 ],
[ 'B', 'B', 'C', 0.5 ],
[ 'B', 'C', 'A', 1.0 ],
[ 'B', 'C', 'B', 0.0 ],
[ 'B', 'C', 'C', 0.0 ],
[ 'C', 'A', 'A', 0.0 ],
[ 'C', 'A', 'B', 1.0 ],
[ 'C', 'A', 'C', 0.0 ],
[ 'C', 'B', 'A', 1.0 ],
[ 'C', 'B', 'B', 0.0 ],
[ 'C', 'B', 'C', 0.0 ],
[ 'C', 'C', 'A', 0.5 ],
[ 'C', 'C', 'B', 0.5 ],
[ 'C', 'C', 'C', 0.0 ]], [guest, prize] )
s1 = State( guest, name="guest" )
s2 = State( prize, name="prize" )
s3 = State( monty, name="monty" )
network = BayesianNetwork( "test" )
network.add_states( s1, s2, s3 )
network.add_transition( s1, s3 )
network.add_transition( s2, s3 )
network.bake()
print("\t".join([ state.name for state in network.states ]))
observations = { 'guest' : 'A' }
beliefs = map( str, network.predict_proba( observations ) )
print("\n".join( "{}\t{}".format( state.name, belief ) for state, belief in zip( network.states, beliefs ) ))
observations = { 'guest' : 'A', 'monty' : 'B' }
beliefs = map( str, network.predict_proba( observations ) )
print("\n".join( "{}\t{}".format( state.name, belief ) for state, belief in zip( network.states, beliefs ) ))
observations = { 'monty' : 'B' }
beliefs = map( str, network.predict_proba( observations ) )
print("\n".join( "{}\t{}".format( state.name, belief ) for state, belief in zip( network.states, beliefs ) ))
data = [[ 'A', 'A', 'C' ],
[ 'A', 'A', 'C' ],
[ 'A', 'A', 'B' ],
[ 'A', 'A', 'A' ],
[ 'A', 'A', 'C' ],
[ 'B', 'B', 'B' ],
[ 'B', 'B', 'C' ],
[ 'C', 'C', 'A' ],
[ 'C', 'C', 'C' ],
[ 'C', 'C', 'C' ],
[ 'C', 'C', 'C' ],
[ 'C', 'B', 'A' ]]
network.fit( data )
print(monty)
print(prize)
print(guest)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create the distributions for the guest and the prize. Note that both distributions are independent of one another.
Step2: Now let's create the conditional probability table for our Monty. The table is dependent on both the guest and the prize.
Step3: Now let's create the states for the Bayesian network.
Step4: Then the Bayesian network itself, adding the states in afterwards.
Step5: Then the transitions.
Step6: With a "bake" to finalize the structure of our network.
Step7: Now we can check the possible states in our network.
Step8: Now we can see what happens to our network when our Guest chooses 'A'.
Step9: Now our host chooses 'B'. (Note that the prize probability goes to 66% if you switch; the Bayes computation behind this number is sketched in the note after this list.)
Step10: We can also see what happens if our host simply chooses 'B'.
Step11: Now let's train our network on the following set of data.
Step12: Let's see the results! Starting with the Monty.
Step13: Then our Prize.
Step14: Finally our Guest.
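For reference, the 2/3 figure mentioned in Step9 follows directly from Bayes' rule applied to the tables above: with the guest at door A, $P(\text{Monty}=B \mid \text{prize}=A) = 1/2$, $P(\text{Monty}=B \mid \text{prize}=B) = 0$ and $P(\text{Monty}=B \mid \text{prize}=C) = 1$, so the posterior over the prize is proportional to $(\tfrac{1}{3}\cdot\tfrac{1}{2},\, 0,\, \tfrac{1}{3}\cdot 1)$, which normalises to $(\tfrac{1}{3}, 0, \tfrac{2}{3})$ for doors $(A, B, C)$.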
|
15,154 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pylab as plt
!curl -o /tmp/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff http://golden-tile-geotiffs.s3.amazonaws.com/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff
# Set the center of the map to the location of the data
M.set_center(-120.32, 47.84, 7)
from geonotebook.wrappers import RasterData
rd = RasterData('file://data/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff')
rd
M.add_layer(rd[1, 2, 3], opacity=1.0, gamma=2.5)
M.layers
print("Color Min Max")
print("Red: {}, {}".format(rd[1].min, rd[1].max))
print("Green: {}, {}".format(rd[2].min, rd[2].max))
print("Blue: {}, {}".format(rd[3].min, rd[3].max))
M.remove_layer(M.layers[0])
M.add_layer(rd[1, 2, 3], interval=(0,1))
M.remove_layer(M.layers[0])
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5)
M.remove_layer(M.layers[0])
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5, opacity=0.75)
# Remove the layer before moving on to the next section
M.remove_layer(M.layers[0])
M.add_layer(rd[4])
M.remove_layer(M.layers[0])
cmap = plt.get_cmap('winter', 10)
M.add_layer(rd[4], colormap=cmap, opacity=0.8)
from matplotlib.colors import LinearSegmentedColormap
M.remove_layer(M.layers[0])
# Divergent Blue to Beige to Green colormap
cmap =LinearSegmentedColormap.from_list(
'ndvi', ['blue', 'beige', 'green'], 20)
# Add layer with custom colormap
M.add_layer(rd[4], colormap=cmap, opacity=0.8, min=-1.0, max=1.0)
M.set_center(-119.25618502500376, 47.349300631765104, 11)
layer, data = next(M.layers.annotation.rectangles[0].data)
data
import numpy as np
fig, ax = plt.subplots(figsize=(16, 16))
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
# Adapted from the scikit-image segmentation tutorial
# See: http://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html
import numpy as np
from skimage import measure
from skimage.filters import sobel
from skimage.morphology import watershed
from scipy import ndimage as ndi
THRESHOLD = 20
WATER_MIN = 0.2
WATER_MAX = 0.6
fig, ax = plt.subplots(figsize=(16, 16))
edges = sobel(data)
markers = np.zeros_like(data)
markers[data > WATER_MIN] = 2
markers[data > WATER_MAX] = 1
mask = (watershed(edges, markers) - 1).astype(bool)
seg = np.zeros_like(mask, dtype=int)
seg[~mask] = 1
# Fill holes
seg = ndi.binary_fill_holes(seg)
# Ignore entities smaller than THRESHOLD
label_objects, _ = ndi.label(seg)
sizes = np.bincount(label_objects.ravel())
mask_sizes = sizes > THRESHOLD
mask_sizes[0] = 0
clean_segs = mask_sizes[label_objects]
# Find contours of the segmented data
contours = measure.find_contours(clean_segs, 0)
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
ax.axis('tight')
for n, contour in enumerate(contours):
ax.plot(contour[:, 1], contour[:, 0], linewidth=4)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get data from S3
Step2: Adding an RGB layer to the map
Step3: To add the layer we call M.add_layer passing in a subset of the raster data set's bands. In this case we index into rd with the list [1, 2, 3]. This actually returns a new RasterData object with only three bands available (in this case bands 1, 2 and 3 corrispond to Red, Green and Blue). When adding layers you can only add a layer with either 3 bands (R,G,B) or one band (we'll see a one band example in a moment).
Step4: This should have added an RGB dataset to the map for visualization. You can also see what layers are available via the M.layers attribute.
Step5: The dataset may appear alarmingly dark. This is because the data itself is not well formatted. We can see this by looking at band min and max values
Step6: R,G,B values should be between 0 and 1. We can remedy this by changing some of the styling options that are available on the layers including setting an interval for scaling our data, and setting a gamma to brighten the image.
Step7: Then we can re-add the layer with a color interval of 0 to 1.
Step8: We can also brighten this up by changing the gamma.
Step9: Finally, let's add a little opacity to the layer so we can see some of the underlying base map features.
Step10: Adding a single band Layer
Step11: You may find this colormap a little aggressive, in which case you can replace the colormap with any of the built in matplotlib colormaps
Step12: Including custom color maps as in this example. Here we create a linear segmented colormap that transitions from Blue to Beige to Green. When mapped to our NDVI band data -1 will appear blue, 0 will appear beige and 1 will appear green.
Step13: What can I do with this data?
Step14: Go ahead and start a rectangular annotation (Second button to the right of the 'CellToolbar' button - with the square icon).
Step15: As a sanity check we can prove the data is the region we've selected by plotting the data with matplotlib's imshow function
Step16: NDVI Segmentation analysis
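For context on the values being mapped and segmented above (assuming, as the custom colormap and value range suggest, that band 4 of this product stores a precomputed NDVI): NDVI is defined as $(NIR - Red)/(NIR + Red)$ and is therefore bounded in $[-1, 1]$, which is why the layer is added with min=-1.0, max=1.0 and the imshow calls clip to clim=(-1.0, 1.0).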
|
15,155 | <ASSISTANT_TASK:>
Python Code:
# Assumed setup (not shown in this excerpt): numpy, json, the Keras functional
# API, and two helpers from the original keras-js test notebook. The helper
# definitions below are a best guess so the script can run stand-alone;
# adjust them to match your environment if they differ.
import numpy as np
import json
from keras.models import Model
from keras.layers import Input, BatchNormalization

DATA = {}  # collected test fixtures, dumped to JSON at the end

def format_decimal(arr, places=6):
    # assumed helper: round a flat list of floats for compact JSON output
    return [round(x * 10 ** places) / 10 ** places for x in arr]

data_in_shape = (4, 3)
norm = BatchNormalization(epsilon=1e-05, axis=-1, center=True, scale=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = norm(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(1000 + i)
if i == 3:
# variance should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('gamma shape:', weights[0].shape)
print('gamma:', format_decimal(weights[0].ravel().tolist()))
print('beta shape:', weights[1].shape)
print('beta:', format_decimal(weights[1].ravel().tolist()))
print('moving_mean shape:', weights[2].shape)
print('moving_mean:', format_decimal(weights[2].ravel().tolist()))
print('moving_variance shape:', weights[3].shape)
print('moving_variance:', format_decimal(weights[3].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['normalization.BatchNormalization.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 3)
norm = BatchNormalization(epsilon=1e-02, axis=-1, center=True, scale=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = norm(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(1010 + i)
if i == 3:
# variance should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('gamma shape:', weights[0].shape)
print('gamma:', format_decimal(weights[0].ravel().tolist()))
print('beta shape:', weights[1].shape)
print('beta:', format_decimal(weights[1].ravel().tolist()))
print('moving_mean shape:', weights[2].shape)
print('moving_mean:', format_decimal(weights[2].ravel().tolist()))
print('moving_variance shape:', weights[3].shape)
print('moving_variance:', format_decimal(weights[3].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['normalization.BatchNormalization.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 3, 2)
norm = BatchNormalization(epsilon=1e-05, axis=1, center=True, scale=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = norm(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(1020 + i)
if i == 3:
# variance should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('gamma shape:', weights[0].shape)
print('gamma:', format_decimal(weights[0].ravel().tolist()))
print('beta shape:', weights[1].shape)
print('beta:', format_decimal(weights[1].ravel().tolist()))
print('moving_mean shape:', weights[2].shape)
print('moving_mean:', format_decimal(weights[2].ravel().tolist()))
print('moving_variance shape:', weights[3].shape)
print('moving_variance:', format_decimal(weights[3].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['normalization.BatchNormalization.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 3, 2)
norm = BatchNormalization(epsilon=1e-05, axis=2, center=True, scale=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = norm(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(1030 + i)
if i == 3:
# variance should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('gamma shape:', weights[0].shape)
print('gamma:', format_decimal(weights[0].ravel().tolist()))
print('beta shape:', weights[1].shape)
print('beta:', format_decimal(weights[1].ravel().tolist()))
print('moving_mean shape:', weights[2].shape)
print('moving_mean:', format_decimal(weights[2].ravel().tolist()))
print('moving_variance shape:', weights[3].shape)
print('moving_variance:', format_decimal(weights[3].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['normalization.BatchNormalization.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 3, 2)
norm = BatchNormalization(epsilon=1e-05, axis=3, center=True, scale=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = norm(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(1040 + i)
if i == 2:
# variance should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('beta shape:', weights[0].shape)
print('beta:', format_decimal(weights[0].ravel().tolist()))
print('moving_mean shape:', weights[1].shape)
print('moving_mean:', format_decimal(weights[1].ravel().tolist()))
print('moving_variance shape:', weights[2].shape)
print('moving_variance:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['normalization.BatchNormalization.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 3, 2)
norm = BatchNormalization(epsilon=1e-05, axis=-1, center=False, scale=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = norm(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(1050 + i)
if i == 2:
# variance should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('gamma shape:', weights[0].shape)
print('gamma:', format_decimal(weights[0].ravel().tolist()))
print('moving_mean shape:', weights[1].shape)
print('moving_mean:', format_decimal(weights[1].ravel().tolist()))
print('moving_variance shape:', weights[2].shape)
print('moving_variance:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['normalization.BatchNormalization.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 3, 2)
norm = BatchNormalization(epsilon=0.001, axis=-1, center=False, scale=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = norm(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(1060 + i)
if i == 1:
# variance should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('moving_mean shape:', weights[0].shape)
print('moving_mean:', format_decimal(weights[0].ravel().tolist()))
print('moving_variance shape:', weights[1].shape)
print('moving_variance:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['normalization.BatchNormalization.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
import os
filename = '../../../test/data/layers/normalization/BatchNormalization.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: [normalization.BatchNormalization.1] epsilon=1e-02, axis=-1, center=True, scale=True
Step2: [normalization.BatchNormalization.2] epsilon=1e-05, axis=1, center=True, scale=True
Step3: [normalization.BatchNormalization.3] epsilon=1e-05, axis=2, center=True, scale=True
Step4: [normalization.BatchNormalization.4] epsilon=1e-05, axis=3, center=True, scale=False
Step5: [normalization.BatchNormalization.5] epsilon=1e-05, axis=-1, center=False, scale=True
Step6: [normalization.BatchNormalization.6] epsilon=0.001, axis=-1, center=False, scale=False
Step7: export for Keras.js tests
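As a reminder of what the expected outputs encode (standard batch-normalization inference, which is what these fixtures exercise through model.predict): each output value is $\gamma \cdot (x - \mu_{moving}) / \sqrt{\sigma^2_{moving} + \epsilon} + \beta$ along the chosen axis, with $\gamma$ omitted when scale=False and $\beta$ omitted when center=False.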
|
15,156 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tf_nightly
import tensorflow as tf
import numpy as np
from typing import Tuple, List, Mapping, Union, Optional
import tempfile
class TensorGraph(tf.experimental.ExtensionType):
A collection of labeled nodes connected by weighted edges.
edge_weights: tf.Tensor # shape=[num_nodes, num_nodes]
node_labels: Mapping[str, tf.Tensor] # shape=[num_nodes]; dtype=any
class MaskedTensor(tf.experimental.ExtensionType):
A tensor paired with a boolean mask, indicating which values are valid.
values: tf.Tensor
mask: tf.Tensor # shape=values.shape; false for missing/invalid values.
class CSRSparseMatrix(tf.experimental.ExtensionType):
Compressed sparse row matrix (https://en.wikipedia.org/wiki/Sparse_matrix).
values: tf.Tensor # shape=[num_nonzero]; dtype=any
col_index: tf.Tensor # shape=[num_nonzero]; dtype=int64
row_index: tf.Tensor # shape=[num_rows+1]; dtype=int64
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def replace_mask(self, new_mask):
self.values.shape.assert_is_compatible_with(new_mask.shape)
return MaskedTensor(self.values, new_mask)
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
# Constructor takes one parameter for each field.
mt = MaskedTensor(values=[[1, 2, 3], [4, 5, 6]],
mask=[[True, True, False], [True, False, True]])
# Fields are type-checked and converted to the declared types.
# E.g., mt.values is converted to a Tensor.
print(mt.values)
try:
MaskedTensor([1, 2, 3], None)
except TypeError as e:
print(f"Got expected TypeError: {e}")
class Pencil(tf.experimental.ExtensionType):
color: str = "black"
has_erasor: bool = True
length: tf.Tensor = 1.0
Pencil()
Pencil(length=0.5, color="blue")
print(MaskedTensor(values=[1, 2, 3], mask=[True, True, False]))
a = MaskedTensor([1, 2], [True, False])
b = MaskedTensor([[3, 4], [5, 6]], [[False, True], [True, True]])
print(f"a == a: {a==a}")
print(f"a == b: {a==b}")
print(f"a == a.values: {a==a.values}")
class MaskedTensor(tf.experimental.ExtensionType):
A tensor paired with a boolean mask, indicating which values are valid.
values: tf.Tensor
mask: tf.Tensor
def __validate__(self):
self.values.shape.assert_is_compatible_with(self.mask.shape)
assert self.mask.dtype.is_bool, 'mask.dtype must be bool'
try:
MaskedTensor([1, 2, 3], [0, 1, 0]) # wrong dtype for mask.
except AssertionError as e:
print(f"Got expected AssertionError: {e}")
try:
MaskedTensor([1, 2, 3], [True, False]) # shapes don't match.
except ValueError as e:
print(f"Got expected ValueError: {e}")
mt = MaskedTensor([1, 2, 3], [True, False, True])
try:
mt.mask = [True, True, True]
except AttributeError as e:
print(f"Got expected AttributeError: {e}")
try:
mt.mask[0] = False
except TypeError as e:
print(f"Got expected TypeError: {e}")
try:
del mt.mask
except AttributeError as e:
print(f"Got expected AttributeError: {e}")
class Player(tf.experimental.ExtensionType):
name: tf.Tensor
attributes: Mapping[str, tf.Tensor]
anne = Player("Anne", {"height": 8.3, "speed": 28.1})
anne_spec = tf.type_spec_from_value(anne)
print(anne_spec.name) # Records dtype and shape, but not the string value.
print(anne_spec.attributes) # Records keys and TensorSpecs for values.
spec1 = Player.Spec(name=tf.TensorSpec([], tf.float32), attributes={})
spec2 = tf.type_spec_from_value(anne)
@tf.function
def anonymize_player(player):
print("<<TRACING>>")
return Player("<anonymous>", player.attributes)
# Function gets traced (first time the function has been called):
anonymize_player(Player("Anne", {"height": 8.3, "speed": 28.1}))
# Function does NOT get traced (same TypeSpec: just tensor values changed)
anonymize_player(Player("Bart", {"height": 8.1, "speed": 25.3}))
# Function gets traced (new TypeSpec: keys for attributes changed):
anonymize_player(Player("Chuck", {"height": 11.0, "jump": 5.3}))
class MaskedTensor(tf.experimental.ExtensionType):
A tensor paired with a boolean mask, indicating which values are valid.
values: tf.Tensor
mask: tf.Tensor # shape=values.shape; false for invalid values.
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
def masked_tensor_str(values, mask):
if isinstance(values, tf.Tensor):
if hasattr(values, 'numpy') and hasattr(mask, 'numpy'):
return f'<MaskedTensor {masked_tensor_str(values.numpy(), mask.numpy())}>'
else:
return f'MaskedTensor(values={values}, mask={mask})'
if len(values.shape) == 1:
items = [repr(v) if m else '_' for (v, m) in zip(values, mask)]
else:
items = [masked_tensor_str(v, m) for (v, m) in zip(values, mask)]
return '[%s]' % ', '.join(items)
mt = MaskedTensor(values=[[1, 2, 3], [4, 5, 6]],
mask=[[True, True, False], [True, False, True]])
print(mt)
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def with_default(self, default):
return tf.where(self.mask, self.values, default)
MaskedTensor([1, 2, 3], [True, False, True]).with_default(0)
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
@staticmethod
def from_tensor_and_value_to_mask(values, value_to_mask):
return MaskedTensor(values, values == value_to_mask)
x = tf.constant([[1, 0, 2], [3, 0, 0]])
MaskedTensor.from_tensor_and_value_to_mask(x, 0)
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
@property
def dtype(self):
return self.values.dtype
MaskedTensor([1, 2, 3], [True, False, True]).dtype
class Toy(tf.experimental.ExtensionType):
name: str
price: tf.Tensor
def __init__(self, name, price, discount=0):
self.name = name
self.price = price * (1 - discount)
print(Toy("ball", 5.0, discount=0.2)) # On sale -- 20% off!
class Toy(tf.experimental.ExtensionType):
name: str
price: tf.Tensor
@staticmethod
def new_toy_with_discount(name, price, discount):
return Toy(name, price * (1 - discount))
print(Toy.new_toy_with_discount("ball", 5.0, discount=0.2))
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
def __eq__(self, other):
result = tf.math.equal(self.values, other.values)
result = result | ~(self.mask & other.mask)
return tf.reduce_all(result)
x = MaskedTensor([1, 2, 3, 4], [True, True, False, True])
y = MaskedTensor([5, 2, 0, 4], [False, True, False, True])
print(x == y)
class Node(tf.experimental.ExtensionType):
value: tf.Tensor
children: Tuple["Node", ...] = ()
Node(3, [Node(5), Node(2)])
class TensorGraph(tf.experimental.ExtensionType):
num_nodes: tf.Tensor
edge_src: tf.Tensor # edge_src[e] = index of src node for edge e.
edge_dst: tf.Tensor # edge_dst[e] = index of dst node for edge e.
class TensorGraphWithNodeFeature(TensorGraph):
node_features: tf.Tensor # node_features[n] = feature value for node n.
def propagate_features(self, weight=1.0) -> 'TensorGraphWithNodeFeature':
updates = tf.gather(self.node_features, self.edge_src) * weight
new_node_features = tf.tensor_scatter_nd_add(
self.node_features, tf.expand_dims(self.edge_dst, 1), updates)
return TensorGraphWithNodeFeature(
self.num_nodes, self.edge_src, self.edge_dst, new_node_features)
g = TensorGraphWithNodeFeature( # Edges: 0->1, 4->3, 2->2, 2->1
num_nodes=5, edge_src=[0, 4, 2, 2], edge_dst=[1, 3, 2, 1],
node_features=[10.0, 0.0, 2.0, 5.0, -1.0, 0.0])
print("Original features:", g.node_features)
print("After propagating:", g.propagate_features().node_features)
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
def with_values(self, new_values):
return MaskedTensor(new_values, self.mask)
class Spec:
def __init__(self, shape, dtype=tf.float32):
self.values = tf.TensorSpec(shape, dtype)
self.mask = tf.TensorSpec(shape, tf.bool)
def __repr__(self):
return f"MaskedTensor.Spec(shape={self.shape}, dtype={self.dtype})"
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
@tf.experimental.dispatch_for_api(tf.stack)
def masked_stack(values: List[MaskedTensor], axis = 0):
return MaskedTensor(tf.stack([v.values for v in values], axis),
tf.stack([v.mask for v in values], axis))
x = MaskedTensor([1, 2, 3], [True, True, False])
y = MaskedTensor([4, 5, 6], [False, True, True])
tf.stack([x, y])
tf.experimental.unregister_dispatch_for(masked_stack)
def convert_to_masked_tensor(x):
if isinstance(x, MaskedTensor):
return x
else:
return MaskedTensor(x, tf.ones_like(x, tf.bool))
@tf.experimental.dispatch_for_api(tf.stack)
def masked_stack_v2(values: List[Union[MaskedTensor, tf.Tensor]], axis = 0):
values = [convert_to_masked_tensor(v) for v in values]
return MaskedTensor(tf.stack([v.values for v in values], axis),
tf.stack([v.mask for v in values], axis))
x = MaskedTensor([1, 2, 3], [True, True, False])
y = tf.constant([4, 5, 6])
tf.stack([x, y, x])
@tf.experimental.dispatch_for_unary_elementwise_apis(MaskedTensor)
def masked_tensor_unary_elementwise_api_handler(api_func, x):
return MaskedTensor(api_func(x.values), x.mask)
x = MaskedTensor([1, -2, -3], [True, False, True])
print(tf.abs(x))
print(tf.ones_like(x, dtype=tf.float32))
@tf.experimental.dispatch_for_binary_elementwise_apis(MaskedTensor, MaskedTensor)
def masked_tensor_binary_elementwise_api_handler(api_func, x, y):
return MaskedTensor(api_func(x.values, y.values), x.mask & y.mask)
x = MaskedTensor([1, -2, -3], [True, False, True])
y = MaskedTensor([[4], [5]], [[True], [False]])
tf.math.add(x, y)
class Network(tf.experimental.ExtensionType): # This version is not batchable.
work: tf.Tensor # work[n] = work left to do at node n
bandwidth: tf.Tensor # bandwidth[n1, n2] = bandwidth from n1->n2
net1 = Network([5., 3, 8], [[0., 2, 0], [2, 0, 3], [0, 3, 0]])
net2 = Network([3., 4, 2], [[0., 2, 2], [2, 0, 2], [2, 2, 0]])
class Network(tf.experimental.BatchableExtensionType):
shape: tf.TensorShape # batch shape. A single network has shape=[].
work: tf.Tensor # work[*shape, n] = work left to do at node n
bandwidth: tf.Tensor # bandwidth[*shape, n1, n2] = bandwidth from n1->n2
def __init__(self, work, bandwidth):
self.work = tf.convert_to_tensor(work)
self.bandwidth = tf.convert_to_tensor(bandwidth)
work_batch_shape = self.work.shape[:-1]
bandwidth_batch_shape = self.bandwidth.shape[:-2]
self.shape = work_batch_shape.merge_with(bandwidth_batch_shape)
def __repr__(self):
return network_repr(self)
def network_repr(network):
work = network.work
bandwidth = network.bandwidth
if hasattr(work, 'numpy'):
work = ' '.join(str(work.numpy()).split())
if hasattr(bandwidth, 'numpy'):
bandwidth = ' '.join(str(bandwidth.numpy()).split())
return (f"<Network shape={network.shape} work={work} bandwidth={bandwidth}>")
net1 = Network([5., 3, 8], [[0., 2, 0], [2, 0, 3], [0, 3, 0]])
net2 = Network([3., 4, 2], [[0., 2, 2], [2, 0, 2], [2, 2, 0]])
batch_of_networks = Network(
work=tf.stack([net1.work, net2.work]),
bandwidth=tf.stack([net1.bandwidth, net2.bandwidth]))
print(f"net1={net1}")
print(f"net2={net2}")
print(f"batch={batch_of_networks}")
dataset = tf.data.Dataset.from_tensor_slices(batch_of_networks)
for i, network in enumerate(dataset):
print(f"Batch element {i}: {network}")
def balance_work_greedy(network):
delta = (tf.expand_dims(network.work, -1) - tf.expand_dims(network.work, -2))
delta /= 4
delta = tf.maximum(tf.minimum(delta, network.bandwidth), -network.bandwidth)
new_work = network.work + tf.reduce_sum(delta, -1)
return Network(new_work, network.bandwidth)
tf.map_fn(balance_work_greedy, batch_of_networks)
class Pastry(tf.experimental.ExtensionType):
sweetness: tf.Tensor # 2d embedding that encodes sweetness
chewiness: tf.Tensor # 2d embedding that encodes chewiness
@tf.function
def combine_pastry_features(x: Pastry):
return (x.sweetness + x.chewiness) / 2
cookie = Pastry(sweetness=[1.2, 0.4], chewiness=[0.8, 0.2])
combine_pastry_features(cookie)
pastry_spec = Pastry.Spec(tf.TensorSpec([2]), tf.TensorSpec(2))
@tf.function(input_signature=[pastry_spec])
def increase_sweetness(x: Pastry, delta=1.0):
return Pastry(x.sweetness + delta, x.chewiness)
increase_sweetness(cookie)
cf = combine_pastry_features.get_concrete_function(pastry_spec)
cf(cookie)
# Example: using tf.cond to select between two MaskedTensors. Note that the
# two MaskedTensors don't need to have the same shape.
a = MaskedTensor([1., 2, 3], [True, False, True])
b = MaskedTensor([22., 33, 108, 55], [True, True, True, False])
condition = tf.constant(True)
print(tf.cond(condition, lambda: a, lambda: b))
# Example: using tf.while_loop with MaskedTensor.
cond = lambda i, _: i < 10
def body(i, mt):
return i + 1, mt.with_values(mt.values + 3 / 7)
print(tf.while_loop(cond, body, [0, b])[1])
@tf.function
def fn(x, b):
if b:
x = MaskedTensor(x, tf.less(x, 0))
else:
x = MaskedTensor(x, tf.greater(x, 0))
for i in tf.range(5 if b else 7):
x = x.with_values(x.values + 1 / 2)
return x
print(fn(tf.constant([1., -2, 3]), tf.constant(True)))
print(fn(tf.constant([1., -2, 3]), tf.constant(False)))
class Network(tf.experimental.BatchableExtensionType):
shape: tf.TensorShape # batch shape. A single network has shape=[].
work: tf.Tensor # work[*shape, n] = work left to do at node n
bandwidth: tf.Tensor # bandwidth[*shape, n1, n2] = bandwidth from n1->n2
def __init__(self, work, bandwidth):
self.work = tf.convert_to_tensor(work)
self.bandwidth = tf.convert_to_tensor(bandwidth)
work_batch_shape = self.work.shape[:-1]
bandwidth_batch_shape = self.bandwidth.shape[:-2]
self.shape = work_batch_shape.merge_with(bandwidth_batch_shape)
def __repr__(self):
return network_repr(self)
single_network = Network( # A single network w/ 4 nodes.
work=[8.0, 5, 12, 2],
bandwidth=[[0.0, 1, 2, 2], [1, 0, 0, 2], [2, 0, 0, 1], [2, 2, 1, 0]])
batch_of_networks = Network( # Batch of 2 networks, each w/ 2 nodes.
work=[[8.0, 5], [3, 2]],
bandwidth=[[[0.0, 1], [1, 0]], [[0, 2], [2, 0]]])
class BalanceNetworkLayer(tf.keras.layers.Layer):
Layer that balances work between nodes in a network.
Shifts work from more busy nodes to less busy nodes, constrained by bandwidth.
def call(self, inputs):
# This function is defined above, in "Batchable ExtensionTypes" section.
return balance_work_greedy(inputs)
input_spec = Network.Spec(shape=None,
work=tf.TensorSpec(None, tf.float32),
bandwidth=tf.TensorSpec(None, tf.float32))
model = tf.keras.Sequential([
tf.keras.layers.Input(type_spec=input_spec),
BalanceNetworkLayer(),
])
model(single_network)
model(batch_of_networks)
class MaskedTensor(tf.experimental.BatchableExtensionType):
# __name__ is required for serialization in SavedModel; see below for details.
__name__ = 'extension_type_colab.MaskedTensor'
values: tf.Tensor
mask: tf.Tensor
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
def with_default(self, default):
return tf.where(self.mask, self.values, default)
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
class Spec:
def __init__(self, shape, dtype=tf.float32):
self.values = tf.TensorSpec(shape, dtype)
self.mask = tf.TensorSpec(shape, tf.bool)
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
def with_shape(self):
return MaskedTensor.Spec(tf.TensorSpec(shape, self.values.dtype),
tf.TensorSpec(shape, self.mask.dtype))
@tf.experimental.dispatch_for_unary_elementwise_apis(MaskedTensor)
def unary_elementwise_op_handler(op, x):
return MaskedTensor(op(x.values), x.mask)
@tf.experimental.dispatch_for_binary_elementwise_apis(
Union[MaskedTensor, tf.Tensor],
Union[MaskedTensor, tf.Tensor])
def binary_elementwise_op_handler(op, x, y):
x = convert_to_masked_tensor(x)
y = convert_to_masked_tensor(y)
return MaskedTensor(op(x.values, y.values), x.mask & y.mask)
@tf.experimental.dispatch_for_api(tf.matmul)
def masked_matmul(a: MaskedTensor, b,
transpose_a=False, transpose_b=False,
adjoint_a=False, adjoint_b=False,
a_is_sparse=False, b_is_sparse=False,
output_type=None):
if isinstance(a, MaskedTensor):
a = a.with_default(0)
if isinstance(b, MaskedTensor):
b = b.with_default(0)
return tf.matmul(a, b, transpose_a, transpose_b, adjoint_a,
adjoint_b, a_is_sparse, b_is_sparse, output_type)
input_spec = MaskedTensor.Spec([None, 2], tf.float32)
masked_tensor_model = tf.keras.Sequential([
tf.keras.layers.Input(type_spec=input_spec),
tf.keras.layers.Dense(16, activation="relu"),
tf.keras.layers.Dense(1)])
masked_tensor_model.compile(loss='binary_crossentropy', optimizer='rmsprop')
a = MaskedTensor([[1., 2], [3, 4], [5, 6]],
[[True, False], [False, True], [True, True]])
masked_tensor_model.fit(a, tf.constant([[1], [0], [1]]), epochs=3)
print(masked_tensor_model(a))
masked_tensor_model_path = tempfile.mkdtemp()
tf.saved_model.save(masked_tensor_model, masked_tensor_model_path)
imported_model = tf.saved_model.load(masked_tensor_model_path)
imported_model(a)
class CustomModule(tf.Module):
def __init__(self, variable_value):
super().__init__()
self.v = tf.Variable(variable_value)
@tf.function
def grow(self, x: MaskedTensor):
Increase values in `x` by multiplying them by `self.v`.
return MaskedTensor(x.values * self.v, x.mask)
module = CustomModule(100.0)
module.grow.get_concrete_function(MaskedTensor.Spec(shape=None,
dtype=tf.float32))
custom_module_path = tempfile.mkdtemp()
tf.saved_model.save(module, custom_module_path)
imported_model = tf.saved_model.load(custom_module_path)
imported_model.grow(MaskedTensor([1., 2, 3], [False, True, False]))
class CustomModuleWrapper(tf.Module):
def __init__(self, variable_value):
super().__init__()
self.v = tf.Variable(variable_value)
@tf.function
def var_weighted_mean(self, x: MaskedTensor):
Mean value of unmasked values in x, weighted by self.v.
x = MaskedTensor(x.values * self.v, x.mask)
return (tf.reduce_sum(x.with_default(0)) /
tf.reduce_sum(tf.cast(x.mask, x.dtype)))
@tf.function()
def var_weighted_mean_wrapper(self, x_values, x_mask):
Raw tensor wrapper for var_weighted_mean.
return self.var_weighted_mean(MaskedTensor(x_values, x_mask))
module = CustomModuleWrapper([3., 2., 8., 5.])
module.var_weighted_mean_wrapper.get_concrete_function(
tf.TensorSpec(None, tf.float32), tf.TensorSpec(None, tf.bool))
custom_module_path = tempfile.mkdtemp()
tf.saved_model.save(module, custom_module_path)
imported_model = tf.saved_model.load(custom_module_path)
x = MaskedTensor([1., 2., 3., 4.], [False, True, False, True])
imported_model.var_weighted_mean_wrapper(x.values, x.mask)
ds = tf.data.Dataset.from_tensors(Pastry(5, 5))
iter(ds).next()
mt = MaskedTensor(tf.reshape(range(20), [5, 4]), tf.ones([5, 4]))
ds = tf.data.Dataset.from_tensor_slices(mt)
for value in ds:
print(value)
def value_gen():
for i in range(2, 7):
yield MaskedTensor(range(10), [j%i != 0 for j in range(10)])
ds = tf.data.Dataset.from_generator(
value_gen, output_signature=MaskedTensor.Spec(shape=[10], dtype=tf.int32))
for value in ds:
print(value)
batched_ds = ds.batch(2)
for value in batched_ds:
print(value)
unbatched_ds = batched_ds.unbatch()
for value in unbatched_ds:
print(value)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Extension types
Step5: Extension types
Step6: The tf.experimental.ExtensionType base class works similarly to typing.NamedTuple and @dataclasses.dataclass from the standard Python library. In particular, it automatically adds a constructor and special methods (such as __repr__ and __eq__) based on the field type annotations.
Step7: Functionality added by ExtensionType
Step8: The constructor raises an TypeError if a field value can not be converted to its declared type
Step9: The default value for a field can be specified by setting its value at the class level
Step10: Printable representation
Step11: Equality operators
Step13: Note
Step14: Enforced immutability
Step15: Nested TypeSpec
Step16: TypeSpec values can be constructed explicitly, or they can be built from an ExtensionType value using tf.type_spec_from_value
Step17: TypeSpecs are used by TensorFlow to divide values into a static component and a dynamic component
Step19: For more information, see the tf.function Guide.
Step20: Defining methods
Step21: Defining classmethods and staticmethods
Step22: Defining properties
Step23: Overriding the default constructor
Step24: Alternatively, you might consider leaving the default constructor as-is, but adding one or more factory methods. E.g.
Step25: Overriding the default equality operator (__eq__)
Step26: Note
Step27: Defining subclasses
Step28: Defining private fields
Step29: Note
Step30: This overrides the default implementation for tf.stack whenever it is called with a list of MaskedTensor values (since the values argument is annotated with typing.List[MaskedTensor])
Step31: To allow tf.stack to handle lists of mixed MaskedTensor and Tensor values, you can refine the type annotation for the values parameter and update the body of the function appropriately
Step32: For a list of APIs that can be overridden, see the API documentation for tf.experimental.dispatch_for_api.
Step33: This function will now be used whenever a unary elementwise operation is called on a MaskedTensor.
Step34: Dispatch for binary all elementwise APIs
Step35: For a list of the elementwise APIs that are overridden, see the API documentation for tf.experimental.dispatch_for_unary_elementwise_apis and tf.experimental.dispatch_for_binary_elementwise_apis.
Step36: To make this type batchable, change the base type to BatchableExtensionType, and adjust the shape of each field to include optional batch dimensions. The following example also adds a shape field to keept track of the batch shape. This shape field is not required by tf.data.Dataset or tf.map_fn, but it is required by tf.Keras.
Step37: You can then use tf.data.Dataset to iterate through a batch of networks
Step38: And you can also use map_fn to apply a function to each batch element
Step39: TensorFlow APIs that support ExtensionTypes
Step40: If you wish to explicitly specify the input_signature for tf.function, then you can do so using the extension type's TypeSpec.
Step41: Concrete functions
Step42: Control flow operations
Step43: Autograph control flow
Step44: Keras
Step46: You can define a new Keras layer that processes Networks.
Step47: You can then use this layers to create a simple model. To feed an ExtensionType into a model, you can use a tf.keras.layer.Input layer with type_spec set to the extension type's TypeSpec. If the Keras model will be used to process batches, then the type_spec must include the batch dimension.
Step48: Finally, you can apply the model to a single network and to a batch of networks.
Step49: Keras example
Step50: Next, the dispatch decorators are used to override the default behavior of several TensorFlow APIs. Since these APIs are used by standard Keras layers (such as the Dense layer), overriding these will allow us to use those layers with MaskedTensor. For the purposes of this example, matmul for masked tensors is defined to treat the masked values as zeros (i.e., to not include them in the product).
Step51: You can then construct a Keras model that accepts MaskedTensor inputs, using standard Keras layers
Step52: SavedModel
Step54: Example
Step57: Loading a SavedModel when the ExtensionType is unavailable
Step58: Datasets
Step59: Batching and unbatching Datasets with extension types
|
15,157 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import sklearn
df = pd.read_table('https://raw.githubusercontent.com/sinanuozdemir/sfdat22/master/data/sms.tsv', sep='\t', header=None, names=['label', 'msg'])
df
df.label.value_counts()
value_probablity = df.label.value_counts()/len(df)
spam_probability = value_probablity.spam
ham_probability = value_probablity.ham
print('spam probability: {}, ham probability: {}'.format(spam_probability, ham_probability))
spams = df[df.label == 'spam']
sentence = 'send cash now'
spam_words_probability = 1
for word in sentence.split():
word_probability = spams[spams.msg.str.contains(word)].shape[0]/float(spams.shape[0])
print("word {} probability: {}".format(word, word_probability))
spam_words_probability *= word_probability
spam_words_probability *= spam_probability
print('spam words probability: {}'.format(spam_words_probability))
hams = df[df.label == 'ham']
sentence = 'send cash now'
ham_words_probability = 1
for word in sentence.split():
word_probability = hams[hams.msg.str.contains(word)].shape[0]/float(hams.shape[0])
print("word {} probability: {}".format(word, word_probability))
ham_words_probability *= word_probability
ham_words_probability *= ham_probability
print('ham words probability: {}'.format(ham_words_probability))
if spam_words_probability > ham_words_probability:
print('{} is more likely a spam'.format(sentence))
else:
print('{} is more likely NOT a spam'.format(sentence))
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
x_train, x_test, y_train, y_test = train_test_split(df.msg, df.label, random_state=1)
vect = CountVectorizer()
train_dtm = vect.fit_transform(x_train)
test_dtm = vect.transform(x_test)
nb = MultinomialNB()
nb.fit(train_dtm, y_train)
predicts = nb.predict(test_dtm)
predicts
from sklearn import metrics
print('accuracy: {}, confusion matrix: {}'
.format(metrics.accuracy_score(y_test, predicts), metrics.confusion_matrix(y_test, predicts)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use the built-in library methods
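For reference, the manual computation in the code above is the (unnormalised) naive Bayes comparison: the sentence is labelled spam when $P(\text{spam})\prod_i P(w_i \mid \text{spam}) > P(\text{ham})\prod_i P(w_i \mid \text{ham})$, with each $P(w_i \mid \text{class})$ estimated as the fraction of messages of that class containing the word. The MultinomialNB model fitted on the CountVectorizer features applies essentially the same rule to word counts, with additive smoothing.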
|
15,158 | <ASSISTANT_TASK:>
Python Code:
# the Leslie model
L = np.array([[0, 1, 2],
[0.8, 0, 0],
[0, 0.7, 0]])
# the three age classes each start with 100 individuals
poblacion_inicial = [100,100,100]
# lista para contener un vector de tamaños de población por clase para
# cada unidad de tiempo transcurrida en la simulación. Inicializamos con
# la poblacion inicial en el tiempo cero.
historico_de_poblaciones = [ poblacion_inicial, ]
# en otra lista guardaremos las proporciones de cada clase respecto del
# total de la población.
historico_de_proporciones = []
# iterate 25 times
for t in range(25):
    # new class-size vector
vector_de_clases = np.dot(L, historico_de_poblaciones[t])
    # append the new vector to the history
historico_de_poblaciones.append( vector_de_clases )
poblacion_total = vector_de_clases.sum()
vector_de_proporciones = [clase/poblacion_total for clase in vector_de_clases]
historico_de_proporciones.append( vector_de_proporciones )
np.array( historico_de_poblaciones[0:10] )
figura = plt.plot( historico_de_poblaciones[0:10] )
figura = plt.plot( historico_de_poblaciones )
np.array( historico_de_proporciones[10:20] )
figura = plt.plot( historico_de_proporciones )
# lmbd = np.linalg.det(L)
v = np.linalg.eig(L)[1][:,0]
l=sum(v)
v/l
def p(x):
return (x*x*x)-(0.8*x)-1.12
# derivative of p
def pp(x):
return (3*x*x)-0.8
def N(x):
return x - (p(x)/pp(x))
j = []
x = 1
for n in range(15):
x = N(x)
j.append(x)
j
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Over time
Step2: Sizes of the population classes
Step3: Proportions of the population class sizes
Step4: Eigenvector, eigenvalue
Step5: Newton's method (see the note after this list for where the polynomial comes from)
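A note on where the polynomial in the code comes from, derived from the matrix defined above: for this Leslie matrix, $\det(\lambda I - L) = \lambda^{3} - 0.8\,\lambda - 1.12$, which is exactly the $p(x)$ used in the Newton iteration $x \leftarrow x - p(x)/p'(x)$. The iteration therefore converges to the dominant eigenvalue $\lambda \approx 1.29$, the asymptotic growth factor per time step, and the eigenvector taken in the code (assumed to be the one associated with this eigenvalue) is normalised by its sum, v/l, to give the stable age distribution.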
|
15,159 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.optimize import minimize
x0 = [0,0]
x1 = [0, 2209]
d_0_2 = 2047
d_1_2 = 3020
def dist(a,b):
return np.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)
def f(x):
return (dist(x0, x) - d_0_2)**2 + (dist(x1, x) - d_1_2)**2
initial_guess = [2047, 0]
res = minimize(f, initial_guess)
x2 = res.x
reference_points = [x1, x0, x2]
reference_points
n_ant = 24
m = np.zeros((24,3))
m[0,:] = [1563, 855, 2618]
m[1,:] = [1407, 825, 2355]
m[2,:] = [1750, 765, 2644]
m[3,:] = [839, 1373, 2416]
m[4,:] = [1151, 1422, 2986]
m[5,:] = [842, 1410, 2662]
m[6,:] = [2527, 1119, 929]
m[7,:] = [2274, 1200, 915]
m[8,:] = [2715, 1261, 824]
m[9,:] = [1684, 1064, 1457]
m[10,:] = [2238, 546, 1501]
m[11,:] = [1834, 805, 1493]
m[12,:] = [3320, 1111, 2370]
m[13,:] = [3385, 1192, 2131]
m[14,:] = [3446, 1247, 2555]
m[15,:] = [3063, 1048, 1531]
m[16,:] = [2760, 550, 2096]
m[17,:] = [2873, 784, 1689]
m[18,:] = [2342, 934, 2979]
m[19,:] = [2638, 1142, 3179]
m[20,:] = [2186, 993, 3020]
m[21,:] = [3130, 1260, 3140]
m[22,:] = [2545, 565, 2544]
m[23,:] = [2942, 1000, 2891]
import requests
import json
pos_url = "https://tart.elec.ac.nz/signal/api/v1/imaging/antenna_positions"
def get_data(path):
server = "https://tart.elec.ac.nz/signal"
r = requests.get('{}/{}'.format(server, path))
return json.loads(r.text)
def get_pos():
return np.array(get_data('api/v1/imaging/antenna_positions'))
current_pos = get_pos()
current_pos
initial_guess = np.zeros(2*n_ant)
for i in range(n_ant):
initial_guess[2*i:2*i+2] = current_pos[i][0:2]*1000
#print(current_pos[i][0:2]*1000)
initial_guess
pos_i = current_pos*1000
import matplotlib.pyplot as plt
plt.scatter(pos_i[:,0], pos_i[:,1])
plt.xlim(-2000,2000)
plt.ylim(-2000,2000)
plt.show()
def f(x):
ret = 0
for i in range(n_ant):
for j in range(3):
p = [x[2*i],x[2*i+1]]
ret += (dist(reference_points[j], p) - m[i,j])**2
return ret
print(f(initial_guess))
res = minimize(f, initial_guess)
res
pos = res.x.reshape((24,2))
pos
plt.scatter(pos[:,0], pos[:,1], color='red')
plt.scatter(pos_i[:,0], pos_i[:,1], color='blue')
plt.xlim(-2000,2000)
plt.ylim(-2000,2000)
plt.grid(True)
plt.show()
result = np.zeros((n_ant, 3))
result[:,:-1] = np.round(pos/1000.0, 3)
result
json_result = {}
json_result["antenna_positions"] = result.tolist()
print(json.dumps(json_result, indent=4, separators=(',', ': ')))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Locating the third reference point
Step2: Now a least squares estimator is used to work out the x-y coordinates of the third reference point (x2)
Step3: Finding the antennas
Step4: The following are the measured distances from [x1, x0, x2] from the reference points in millimeters. Note that their order must be the same as the order of the variable called 'reference_points'. In this case, they are x1,x0,x2.
Step5: Plot the Initial Guess Points
Step6: Criteria for Optimality (written out in the note after this list)
Step7: The optimized positions are now known. The final value of the function is 32. Far closer to zero than 3 million!
Step8: The API expects 3D coordinates (with a z value which is zero in this case). Therefore we add a column of zeros.
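To spell out the optimality criterion referred to in Step6: the fit minimises the sum of squared range residuals $\sum_{i=1}^{24}\sum_{j=1}^{3}\left(\lVert p_i - r_j\rVert - m_{ij}\right)^2$ over the antenna positions $p_i$, where $r_j$ are the three reference points and $m_{ij}$ are the measured distances in millimetres. This is what the second f(x) in the code evaluates, and the same squared-residual idea is used earlier to locate the third reference point from its two measured distances.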
|
15,160 | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../python')
from voxforge import *
downloadVoxforgeData('../audio')
f=loadFile('../audio/Joel-20080716-qoz.tgz')
print f.props
print f.prompts
print f.data
%xdel f
corp=loadBySpeaker('../audio', limit=30)
addPhonemesSpk(corp,'../data/lex.tgz')
print corp.keys()
spk=corp.keys()[0]
print corp[spk]
%xdel corp
convertCTMToAli('../data/ali.ctm.gz','../data/phones.list','../audio','../data/ali.pklz')
import gzip
import pickle
with gzip.open('../data/ali.pklz') as f:
ali=pickle.load(f)
print 'Number of utterances: {}'.format(len(ali))
print ali[100].spk
print ali[100].phones
print ali[100].ph_lens
print ali[100].archive
print ali[100].audiofile
print ali[100].data
import random
from sets import Set
#make a list of speaker names
spk=set()
for utt in ali:
spk.add(utt.spk)
print 'Number of speakers: {}'.format(len(spk))
#choose 20 random speakers
tst_spk=list(spk)
random.shuffle(tst_spk)
tst_spk=tst_spk[:20]
#save the list for reference - if anyone else wants to use our list (will be saved in the repo)
with open('../data/test_spk.list', 'w') as f:
for spk in tst_spk:
f.write("{}\n".format(spk))
ali_test=filter(lambda x: x.spk in tst_spk, ali)
ali_train=filter(lambda x: not x.spk in tst_spk, ali)
print 'Number of test utterances: {}'.format(len(ali_test))
print 'Number of train utterances: {}'.format(len(ali_train))
#shuffle the utterances, to make them more uniform
random.shuffle(ali_test)
random.shuffle(ali_train)
#save the data for future use
with gzip.open('../data/ali_test.pklz','wb') as f:
pickle.dump(ali_test,f,pickle.HIGHEST_PROTOCOL)
with gzip.open('../data/ali_train.pklz','wb') as f:
pickle.dump(ali_train,f,pickle.HIGHEST_PROTOCOL)
num=int(len(ali_train)*0.05)
ali_small=ali_train[:num]
with gzip.open('../data/ali_train_small.pklz','wb') as f:
pickle.dump(ali_small,f,pickle.HIGHEST_PROTOCOL)
corp=loadAlignedCorpus('../data/ali_train_small.pklz','../audio')
corp_test=loadAlignedCorpus('../data/ali_test.pklz','../audio')
print 'Number of utterances: {}'.format(len(corp))
print 'List of phonemes:\n{}'.format(corp[0].phones)
print 'Lengths of phonemes:\n{}'.format(corp[0].ph_lens)
print 'Audio:\n{}'.format(corp[0].data)
samp_num=0
for utt in corp:
samp_num+=utt.data.size
print 'Length of corpus: {} hours'.format(((samp_num/16000.0)/60.0)/60.0)
import sys
sys.path.append('../PyHTK/python')
import numpy as np
from HTKFeat import MFCC_HTK
import h5py
from tqdm import *
def extract_features(corpus, savefile):
mfcc=MFCC_HTK()
h5f=h5py.File(savefile,'w')
uid=0
for utt in tqdm(corpus):
feat=mfcc.get_feats(utt.data)
delta=mfcc.get_delta(feat)
acc=mfcc.get_delta(delta)
feat=np.hstack((feat,delta,acc))
utt_len=feat.shape[0]
o=[]
for i in range(len(utt.phones)):
num=utt.ph_lens[i]/10
o.extend([utt.phones[i]]*num)
        # here we fix an off-by-one error that happens very infrequently
if utt_len-len(o)==1:
o.append(o[-1])
assert len(o)==utt_len
uid+=1
#instead of a proper name, we simply use a unique identifier: utt00001, utt00002, ..., utt99999
g=h5f.create_group('/utt{:05d}'.format(uid))
g['in']=feat
g['out']=o
h5f.flush()
h5f.close()
extract_features(corp,'../data/mfcc_train_small.hdf5')
extract_features(corp_test,'../data/mfcc_test.hdf5')
def normalize(corp_file):
h5f=h5py.File(corp_file)
b=0
for utt in tqdm(h5f):
f=h5f[utt]['in']
n=f-np.mean(f)
n/=np.std(n)
h5f[utt]['norm']=n
h5f.flush()
h5f.close()
normalize('../data/mfcc_train_small.hdf5')
normalize('../data/mfcc_test.hdf5')
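# Note: normalize() above uses a single scalar mean and standard deviation per
# utterance (np.mean/np.std over the whole frames-by-39 feature matrix), i.e.
# per-utterance normalisation rather than per-coefficient mean/variance
# normalisation.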
!h5ls ../data/mfcc_test.hdf5/utt00001
from data import Corpus
import numpy as np
train=Corpus('../data/mfcc_train_small.hdf5',load_normalized=True)
test=Corpus('../data/mfcc_test.hdf5',load_normalized=True)
g=train.get()
tr_in=np.vstack(g[0])
tr_out=np.concatenate(g[1])
print 'Training input shape: {}'.format(tr_in.shape)
print 'Training output shape: {}'.format(tr_out.shape)
g=test.get()
tst_in=np.vstack(g[0])
tst_out=np.concatenate(g[1])
print 'Test input shape: {}'.format(tst_in.shape)
print 'Test output shape: {}'.format(tst_out.shape)
train.close()
test.close()
import sklearn
print sklearn.__version__
from sklearn.linear_model import SGDClassifier
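# Note: newer scikit-learn releases (0.21 and later) replace the n_iter argument used
# below with max_iter, so adjust the call if your version rejects it.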
model=SGDClassifier(loss='log',n_jobs=-1,verbose=0,n_iter=100)
%time model.fit(tr_in,tr_out)
acc=model.score(tst_in,tst_out)
print 'Accuracy: {:%}'.format(acc)
corp=loadAlignedCorpus('../data/ali_train.pklz','../audio')
extract_features(corp,'../data/mfcc_train.hdf5')
normalize('../data/mfcc_train.hdf5')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ignore any warnings above (I couldn't be bothered to compile audiolab with ALSA). Below you will find the method to download the Voxforge database. You only need to do this once, so you can run it either here, from a console, or with wget. Be warned that it takes a long time (as mentioned earlier), so it's a good idea to leave it running overnight.
Step2: Loading the corpus
Step3: The loadBySpeaker method will load the whole folder and organize its contents by speaker (as a dictionary). Each utterance contains only the data and the prompts. For this demo, only 30 files are read, as this isn't the method we are ultimately going to use.
Step4: The corpus can also be extended by the phonetic transcription of the utterances using a lexicon file. Voxforge does provide such a file on its website and it is downloaded automatically (if it doesn't already exist).
Step5: Aligned corpus
Step6: We store the generated data structure in a gzipped, pickled file so we don't need to perform this step more than once. This file is already included in the repository, so you can skip the step above.
Step7: Here is an example of the structure and its attributes loaded from that file
Step8: Please note that the audio data is not yet loaded at this step (it's set to None).
Step9: To make things more manageable for this demo, we will take 5% of the training set and work with that instead of the whole 80 hours. 5% should give us an amount of data similar to TIMIT. If you wish to re-run the experiments on the whole dataset, go to the bottom of this notebook for further instructions.
Step10: Here we load additional data using the loadAlignedCorpus method. It loads the alignment and the appropriate audio datafile for each utterance (it can take a while for larger corpora)
Step11: We have to do the same for the test data
Step12: Now we can check if we have all the necessary data
Step13: Feature extraction
Step14: Now let's process the small training and test datasets
Step15: Normalization
Step16: To see what's inside, we can run the following command in the terminal (an equivalent Python/h5py check is sketched after these steps)
Step17: Simple classification example
Step18: Here we create the SGD classifier model. Please note that the settings below work with version 0.17 of scikit-learn, so it's recommended to upgrade. If you can't, feel free to adapt the settings to something that works for your version. You can also turn on verbose output to get more information about the training process; it is off here to save space in the notebook.
Step19: Here we train the model. It took 4 minutes for me.
Step20: Here we get about 52% accuracy, which is pretty bad for phoneme recognition. In other notebooks, we will try to improve on that.
Step21: Other data
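A small addition (not part of the original notebook): the h5ls inspection mentioned in Step16 can also be done from Python with h5py, assuming the file layout produced above.
import h5py
with h5py.File('../data/mfcc_test.hdf5', 'r') as h5f:
    print(list(h5f['utt00001'].keys()))  # expected datasets: 'in', 'norm', 'out'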
|
15,161 | <ASSISTANT_TASK:>
Python Code:
import urllib.request
url = 'https://medium.com/tag/machine-learning'
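# Some sites reject the default urllib User-Agent, so we send a browser-like one instead.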
req = urllib.request.Request(url, headers={'User-Agent' : "Magic Browser"})
con = urllib.request.urlopen(req)
print(con.status, con.msg)
con.getheader('Content-Type')
text = con.read()
text[:500]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some sites will block requests from urllib, so we set a custom 'User-Agent' header
Step2: Let's check the HTTP status and the message.
Step3: We can check whether a specific HTTP response header exists
Step4: Now we can load the content from the website
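A small follow-up sketch (an addition, not part of the original notebook): in Python 3, con.read() returns bytes, so decode it if you want to work with the page as text.
html = text.decode('utf-8', errors='replace')
print(html[:200])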
|
15,162 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array( [2,3,4,-1,-2,256, 270] ,'uint8')
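# With dtype uint8 the values wrap around modulo 256: -1 -> 255, -2 -> 254, 256 -> 0 and 270 -> 14
# (recent NumPy versions may warn or raise for such out-of-range literals).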
print ('Dimensões: a.shape=', a.shape)
print ('Tipo dos elementos: a.dtype=', a.dtype)
print ('Imprimindo o array completo:\n a=',a)
m = a.max()
print(m)
print(m.dtype)
b = a//a.max()
print (b)
b = np.array( [[1.5, 2.3, 5.2],
               [4.2, 5.6, 4.4]] )
print ('Um array bidimensional, dimensões: b.shape=', b.shape)
print ('Tipo dos elementos: b.dtype', b.dtype)
print ('Número de colunas:', b.shape[-1])
print ('Número de linhas:', b.shape[-2])
print ('Elementos, b=\n', b)
d = np.zeros( (2,4) )
print ('Array de 0s: \n', d)
d = np.ones( (3,2,5), dtype='int16' )
print ('\n\nArray de 1s: \n', d)
d = np.empty( (2,3), 'int' )
print ('Array não inicializado (valor é indeterminado):\n', d)
%timeit a = np.zeros((1000000,))
%timeit a = np.empty((1000000,))
print('np.arange( 10) = ', np.arange(10))
print('np.arange( 3, 8) = ', np.arange(3,8))
print('np.arange( 0, 2, 0.5) = ', np.arange(0, 2, 0.5))
print('np.linspace( 0, 2, 5 ) = ', np.linspace( 0, 2, 5 ))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below we have a two-dimensional array of floating-point data with 2 rows and 3 columns. Note that the shape tuple grows to the left: a vector of 3 elements has shape (3,), and when a new dimension is added, for example 2 rows and 3 columns, the shape becomes (2, 3). What used to be shape[0] in the one-dimensional vector (the number of columns) becomes the number of rows in the two-dimensional matrix. So the last element of the shape tuple always gives the number of columns and the second-to-last gives the number of rows; if we always want the number of columns regardless of the number of dimensions, shape[-1] gives the number of columns and shape[-2] the number of rows (a short example is given after these steps).
Step2: Array manipulation
Step3: Note that NumPy allows n-dimensional arrays. For grayscale images we will work with two-dimensional matrices, but if the image is in color we represent it with 3 channels (R, G and B), i.e. a structure with 3 dimensions. For a video, that is, a sequence of images, we have to add one more dimension. A tomography scan can also be represented in 3 dimensions.
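A minimal illustration of the shape convention described in Step1 (added example, not part of the original notebook):
v = np.zeros(3)        # shape (3,)
m = np.zeros((2, 3))   # 2 rows, 3 columns -> shape (2, 3)
print(m.shape[-1])     # 3, the number of columns
print(m.shape[-2])     # 2, the number of rows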
|
15,163 | <ASSISTANT_TASK:>
Python Code:
M1450 = linspace(-30,-22,20)
zz = arange(0.7,3.5,0.5)
ple = bossqsos.BOSS_DR9_PLE()
lede = bossqsos.BOSS_DR9_LEDE()
for z in zz:
if z<2.2:
qlf = ple if z<2.2 else lede
plot(M1450,qlf(M1450,z),label='z=%.1f'%z)
legend(loc='lower left')
xlim(-21.8,-30.2)
xlabel("$M_{1450}$")
ylabel("log Phi")
_ = bossqsos.qsoSimulation(bossqsos.simParams,saveSpectra=True)
wave,qsos = load_sim_output('boss_dr9qlf_sim','.')
qsos[::40]
_ = hist(qsos['obsMag'][:,1],linspace(17,22,20),log=True)
scatter(qsos['obsMag'][:,0]-qsos['obsMag'][:,1],qsos['obsMag'][:,1]-qsos['obsMag'][:,2],
c=qsos['z'],cmap=cm.autumn_r,alpha=0.7)
colorbar()
xlabel('u-g')
ylabel('g-r')
xlim(-0.75,3)
ylim(-0.5,1.5)
qsodatahdr = fits.getheader('boss_dr9qlf_sim.fits',1)
for i,n in enumerate(qsodatahdr['LINENAME'].split(',')):
print('%d:%s, '% (i,n,),end=" ")
print()
scatter(qsos['absMag'],qsos['emLines'][:,13,1],c=qsos['z'],cmap=cm.autumn_r)
colorbar()
xlabel("$M_{1450}$")
ylabel("CIV equivalent width $\AA$")
figure(figsize=(14,4))
plot(wave/1e4,qsos['spec'][0])
yscale('log')
xlabel('wave [micron]')
figure(figsize=(14,4))
plot(wave,qsos['spec'][20])
xlim(3500,7500)
title('$z=%.3f$'%qsos['z'][20])
# XXX WARNING -- an ugly hack is needed here. Internally, a table of Voigt profiles is generated
# at startup in order to speed the forest spectra generation. This table is defined in terms of
# the wave dispersion the first time a simulation is run. Here we are changing the wavelength
# model, and thus before executing the next cells you must restart the kernel and execute only
# the first cell.
np.random.seed(12345)
wave = buildWaveGrid(dict(waveRange=(3500,4800),SpecDispersion=30000))
forest = hiforest.IGMTransmissionGrid(wave,WP11_model,1)
T = forest.next_spec(0,2.9)
figure(figsize=(14,4))
plot(wave,T)
figure(figsize=(14,4))
plot(wave,T)
xlim(4300,4800)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the simulation, save the spectra
Step2: Simulation outputs
Step3: the table of simulated quasars, including redshift, luminosity, synthetic flux/mags in nine bands, and "observed" photometry with errors included.
Step4: the distribution in g-band magnitude
Step5: color-color diagram from observed magnitudes, including errors
Step6: the list of emission lines in the model
Step7: broad CIV equivalent width, displaying the Baldwin Effect
Step8: Example spectra
Step9: zoom in on the lyman alpha - CIV region
Step10: IGM absorption model (simqso.hiforest)
|
15,164 | <ASSISTANT_TASK:>
Python Code:
# Create some data
scores = [23,453,54,235,74,234]
# Try to:
try:
# Add a list of integers and a string
scores + 'A string of characters.'
# If you get an error, set the error as 'e',
except Exception as e:
# print the error, e
print('Error:', e)
# Then,
finally:
# print end program
print('End Program')
# Try to:
try:
# Print scores
print('Worked!', scores)
# If you get an error, set the error as 'e',
except Exception as e:
# print the error, e
print('Error:', e)
# Then,
finally:
# print end program
print('End program')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Try something that doesn't work
Step2: Try something that works
|
15,165 | <ASSISTANT_TASK:>
Python Code:
import getfem as gf
import numpy as np
file_msh = './mesh/tripod.mesh'
E = 1e3
Nu = 0.3
Lambda = E*Nu/((1+Nu)*(1-2*Nu))
Mu = E/(2*(1+Nu))
m = gf.Mesh('load',file_msh)
m.set('optimize_structure')
m.export_to_pos('./pos/m.pos')
%%writefile gscript
Print "./png/m.png";
Exit;
!gmsh ./pos/m.pos gscript
from IPython.core.display import Image
Image('./png/m.png')
mfu = gf.MeshFem(m,3) # displacement
mfd = gf.MeshFem(m,1) # data
mfu.set_fem(gf.Fem('FEM_PK(3,1)'))
mfd.set_fem(gf.Fem('FEM_PK(3,0)'))
mim = gf.MeshIm(m,gf.Integ('IM_TETRAHEDRON(5)'))
P = m.pts()
print P
print 'Ymax = ', P[1,:].max()
print 'Ymin = ', P[1,:].min()
ctop = (abs(P[1,:] - 13) < 1e-6)
print 'ctop = ', ctop
cbot = (abs(P[1,:] + 10) < 1e-6)
print 'cbot = ', cbot
pidtop = np.compress(ctop,range(0,m.nbpts()))
pidbot = np.compress(cbot,range(0,m.nbpts()))
print 'pidtop = ', pidtop
print 'pidbot = ', pidbot
ftop = m.faces_from_pid(pidtop)
fbot = m.faces_from_pid(pidbot)
print 'ftop = ', ftop
print 'fbot = ', fbot
NEUMANN_BOUNDARY = 1
DIRICHLET_BOUNDARY = 2
m.set_region(NEUMANN_BOUNDARY,ftop)
m.set_region(DIRICHLET_BOUNDARY,fbot)
nbd = mfd.nbdof()
F = gf.asm_boundary_source(NEUMANN_BOUNDARY, mim, mfu, mfd, np.repeat([[0],[-100],[0]],nbd,1))
K = gf.asm_linear_elasticity(mim, mfu, mfd, np.repeat([Lambda], nbd), np.repeat([Mu], nbd))
sl = gf.Slice(('boundary',), mfu, 1)
sl.export_to_pos('./pos/F.pos',mfu,F,'F')
%%writefile gscript
Print "./png/F.png";
Exit;
!gmsh ./pos/F.pos gscript
from IPython.core.display import Image
Image('./png/F.png')
(H,R) = gf.asm_dirichlet(DIRICHLET_BOUNDARY, mim, mfu, mfd, mfd.eval('[[1,0,0],[0,1,0],[0,0,1]]'), mfd.eval('[0,0,0]'))
(N,U0) = H.dirichlet_nullspace(R)
Nt = gf.Spmat('copy',N)
Nt.transpose()
KK = Nt*K*N
FF = Nt*(F-K*U0)
P = gf.Precond('ildlt',KK)
UU = gf.linsolve_cg(KK,FF,P)
U = N*UU+U0
sl = gf.Slice(('boundary',), mfu, 1)
sl.export_to_pos('./pos/m_result.pos', mfu, U, 'Displacement')
%%writefile gscript
Print "./png/m_result.png";
Exit;
!gmsh ./pos/m_result.pos gscript
from IPython.core.display import Image
Image('./png/m_result.png')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: パラメータの設定
Step2: メッシュの読み込み
Step3: 次のコマンドで、mに設定したメッシュをgmshのスクリプトでpng画像に打ち出し確認します。
Step4: 以下のようにipythonの機能を使用して、システムのコマンドラインでの操作を行いIpython上に画像を表示してみます。もちろん、gmshで直接posファイルを表示することもできます。
Step5: これは横から見たメッシュ図です。三脚のメッシュであることがわかります。今回はこの頂点部分に荷重(NEUMANN条件)、下端に固定条件(DIRICHLET条件)を与え変位を計算することにします。
Step6: mfuとmfdにはそれぞれLagrange要素$Q_3$と$Q_1$が入ります。$Q_3$は変位用、$Q_1$はMises応力などのデータ用に準備したものです。
Step7: 立体求積法には15積分点・5次のtetrahedronを使用します。
Step8: 領域の定義
Step9: Gmshで先ほど出力したメッシュを確認してみるとY座標が最大になっている部分に三脚の上端が、Y座標が最小になっている部分に三脚の下端があることがわかります。それぞれの座標をPで確認しておきましょう。
Step10: 節点が領域に属しているかの真偽表の配列を作成
Step11: 領域に属している節点のみ節点番号を抜き出す
Step12: 節点番号から要素の面の情報を抜き出す
Step13: 要素番号と面番号の情報を元に領域に番号をつける
Step14: 外力ベクトルと剛性マトリックスの組み立て
Step15: 固定条件の設定
Step16: さて、Getfem++では、はじめにご説明した連立方程式の問題を以下のように言い換えます。
Step17: 変位の計算
Step18: ポスト処理
|
15,166 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
from collections import Counter
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {index: word for index, word in enumerate(set(sorted_vocab))}
vocab_to_int = {word: index for index, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
token_dictionary = {
'.': '<PERIOD>',
',': '<COMMA>',
'"': '<QUOTATION_MARK>',
';': '<SEMICOLON>',
'!': '<EXCLAMATION_MARK>',
'?': '<QUESTION_MARK>',
'(': '<LEFT_PARENTHESES>',
')': '<RIGHT_PARENTHESES>',
'--': '<DASH>',
'\n': '<RETURN/NEWLINE>'
}
return token_dictionary
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
return None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
15,167 | <ASSISTANT_TASK:>
Python Code:
import math
from matplotlib import pylab as plt
%matplotlib inline
def pdf(x, m, variance):
sigma = math.sqrt(variance)
probability density function
return 1 / (sigma * math.sqrt(2 * math.pi)) * math.e ** (-1 * ((x - m)**2 / (2 * variance ** 2)))
pdf(20, 20, 4)
N = range(50)
mean = 15
variance = 3
plt.plot(N, [pdf(i, mean, variance) for i in N])
def cumulative(x, m, stdv):
return 0.5 * (1 + math.erf((x-m)/ (stdv * math.sqrt(2)) ))
[(i, cumulative(i, 20.0, 3.0)) for i in range(1, 30)]
mean = 20
variance = 4
x = 19.5
q1_answer = cumulative(x, mean, math.sqrt(variance))
q1_answer
lower_hour, upper_hour = 20.0, 22.0
q2_answer = cumulative(upper_hour, mean, math.sqrt(variance)) - cumulative(lower_hour, mean, math.sqrt(variance))
q2_answer
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Day 5
Step2: Example
Step3: Question 1
Step4: Question 2
|
15,168 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (10, 20)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: idx for idx, word in enumerate(sorted_vocab)}
int_to_vocab = {idx: word for idx, word in enumerate(sorted_vocab)}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
punctuation = {}
punctuation['.'] = '<PERIOD>'
punctuation[','] = '<COMMA>'
punctuation['"'] = '<QUOTATION_MARK>'
punctuation[';'] = '<SEMICOLON>'
punctuation['!'] = '<EXCLAMATION_MARK>'
punctuation['?'] = '<QUESTION_MARK>'
punctuation['('] = '<LEFT_PAREN>'
punctuation[')'] = '<RIGHT_PAREN>'
punctuation['--'] = '<DASH>'
punctuation['\n'] = '<NEW_LINE>'
return punctuation
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.int32)
return (inputs, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
lstm_layers = 1
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm]*lstm_layers)
initial_state = cell.zero_state(batch_size, tf.int32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
15,169 | <ASSISTANT_TASK:>
Python Code:
import theano
from theano import tensor
import numpy as np
from collections import namedtuple
Gate = namedtuple("Gate", "arity module")
# Theano's to_one_hot converts a numpy array
# to a one-hot encoded version. In order to
# know how big to make the vector, all gates
# take M, the smallest non-representable integer,
# as an argument.
from theano.tensor.extra_ops import to_one_hot
def make_constant_gate(value):
Create a gate that returns a constant distribution.
# Arguments to to_one_hot must be Numpy arrays.
arr = np.asarray([value])
def module(max_int):
Return the one-hot encoded constant.
return to_one_hot(arr, max_int)
arity = 0
return Gate(arity, module)
# Gates for 0, 1, 2: useful constants!
gate_zero = make_constant_gate(0)
gate_one = make_constant_gate(1)
gate_two = make_constant_gate(2)
x = np.ones((2, 3))
y = gate_one.module(3)
print("X =", x)
print("Y =", y.eval())
print("X + Y =", (x + y).eval())
# In Theano, writing
# >>> A = set_subtensor(A[0], x)
# corresponds to standard Python's
# >>> A = A.copy(); A[0] = x
from theano.tensor import set_subtensor
def eq_zero(A, max_int):
Given a Theano vector A, return a vector
of the same shape where the first component is
1 - A[0], the second component is A[0], and the
other components are all zero.
This corresponds to a neural gate that checks for zero.
# Initialize result to be zeros in the same shape as A.
# This should be a list of row vectors of length max_int.
# By operating on all rows at once we allow multiple samples
# to be processed in one call to this function.
result = tensor.zeros_like(A)
result = set_subtensor(result[:, 1], A[:, 0])
result = set_subtensor(result[:, 0], 1 - A[:, 0])
return result
gate_eqz = Gate(1, eq_zero)
# `roll` circularly rotates elements of a vector.
# For example,
# >>> roll([1, 2, 3], -2)
# rotates the elements two spaces to the left, resulting in
# >>> [3, 1, 2]
from theano.tensor import roll
def negate(A, max_int):
Negate a distribution over integers.
return roll(A[:, ::-1], 1, axis=1)
gate_negate = Gate(1, negate)
# `stack` stacks a list of vectors
# into a matrix. It increases the number
# of dimensions in the tensor by one.
# The dimension it adds is configurable.
from theano.tensor import stack
# `batched_dot` performs matrix multiplication
# across a set of samples, where each sample is
# a row (uses the dimension).
from theano.tensor import batched_dot
def add(A, B, max_int):
Returns the distribution for a sum of integers.
rows = [roll(B[:, ::-1], shift + 1, axis=1)
for shift in range(max_int)]
B_prime = stack(rows, axis=1).transpose(0, 2, 1)
return batched_dot(A, B_prime)
gate_add = Gate(2, add)
# Distributions over the set {0, 1, 2, 3, 4}
max_int = 5
zero = gate_zero.module(max_int)
one = gate_one.module(max_int)
two = gate_two.module(max_int)
three = add(one, two, max_int)
print("0 = 0? ", eq_zero(zero, max_int).eval())
print("0 = 1? ", eq_zero(one, max_int).eval())
print("2 ", two.eval())
print("-2 = 3 ", negate(two, max_int).eval())
print("1 + 2 = 3", three.eval())
print("3 + 3 = 1", add(three, three, max_int).eval())
def avg(distributions, coefficients):
Return the weighted average of a set of vectors.
Shapes:
distributions: (S, N, M)
coefficients: (S, N)
return value: (S, M)
where
S: number of samples to perform this operation on
N: number of vectors in the set
M: number of elements in each vector
# Shuffle coefficients to shape (S, N, 1)
coeffs = coefficients.dimshuffle(0, 1, 'x')
# Transpose distributions to (S, M, N)
dists = distributions.transpose(0, 2, 1)
# Batched multiply to get shape (S, M, 1),
# then drop the last dimension.
return batched_dot(dists, coeffs).flatten(2)
vectors = [[
[1, 2, 3], # X
[10, 20, 30], # Y
[100, 200, 300], # Z
]]
coefficients = [[0.1, 0.3, 0.6]]
from theano.tensor import as_tensor
# 0.1 X + 0.3 Y + 0.6 Z
avg(as_tensor(vectors), as_tensor(coefficients)).eval()
def run_gate(gate_inputs, gate, controller_coefficients, max_int):
Return the output of a gate in the circuit.
gate_inputs:
The values of the registers and previous gate outputs.
gate:
The gate to compute output for. Arity must
match len(controller_coefficients).
controller_coeffficients:
A list of coefficient arrays from the controller,
one coefficient for every gate input (0 for constants).
args = [avg(gate_inputs, coefficients)
for coefficients in controller_coefficients]
output = gate.module(*args, max_int)
# Special-case constant gates.
# Since they have no outputs, they always output
# one sample. Repeat their outputs as many times
# as necessary, effectively doing manual broadcasting
# to generate an output of the right size.
if gate.arity == 0:
output = output.repeat(gate_inputs.shape[0], axis=0)
return output
from theano.tensor import concatenate
def run_circuit(registers, gates, controller_coefficients, max_int):
# Initially, only the registers may be used as inputs.
gate_inputs = registers
# Run through all the gates.
for i in range(len(gates)):
output = run_gate(gate_inputs, gates[i],
controller_coefficients[i], max_int)
# Append the output of the gate as an input for future gates.
gate_inputs = concatenate([gate_inputs,
output.dimshuffle(0, 'x', 1)],
axis=1)
# All leftover coefficients are for registers.
new_registers = []
for i in range(len(gates), len(controller_coefficients)):
new_registers.append(avg(gate_inputs, controller_coefficients[i]))
return tensor.stack(new_registers, axis=1)
# Registers: R1 = 0, R2 = 1, R3 = 3
# These use the previously defined max_int = 5.
v0 = zero.eval()
v1 = one.eval()
v3 = three.eval()
registers = tensor.stack([v0, v1, v3], axis=1)
# Gates: 1, NEG(x), ADD(x, y), EQZ(x)
gates = [gate_one, gate_negate, gate_add, gate_eqz]
# Circuit:
#
# R1 ----------------------- - R1'
# 1 -- NEG -- \ /
# \ \ /
# ADD -- EQZ --X
# / \
# R2 ----------- - R2'
# R3 ---------------------------- R3'
#
# Equivalently:
# R3' = R3
# R2' = R1
# R1' = [R2 - 1 == 0]
from theano.tensor import as_tensor
def c(*cs):
Convert a list into a coefficient tensor, shape 1 x N.
return as_tensor(cs).reshape((1, len(cs)))
controller_coefficients = [
# 1 coefficients
[],
# NEG coefficients
[c(0, 0, 0, 1)],
# ADD coefficients
[c(0, 0, 0, 0, 1),
c(0, 1, 0, 0, 0)],
# EQZ coefficients
[c(0, 0, 0, 0, 0, 1)],
# R1' (not a gate, thus not a list of coefficients)
c(0, 0, 0, 0, 0, 0, 1),
# R2'
c(1, 0, 0, 0, 0, 0, 0),
# R3'
c(0, 0, 1, 0, 0, 0, 0),
]
new_registers = run_circuit(registers, gates,
controller_coefficients, max_int)
print(new_registers.eval().argmax(axis=2))
# `shared` converts a numpy array to a shared variable.
from theano import shared
from numpy.random import uniform
def init_weight(*dims, low=-0.3, high=0.3):
Create a randomly-initialized shared variable weight matrix.
weights = uniform(low=low, high=high, size=dims)
var = shared(weights.astype(np.float32), name="W{0}x{1}".format(*dims))
return var
def mlp_weights(num_registers, layer_sizes, gates):
Generate weights and biases for all the connections
in the neural network controller.
layer_sizes: Number of units in each hidden layer.
# The first layer has one input per register.
prev_layer = num_registers
# Weights for making the hidden layers.
for layer_size in layer_sizes:
# Weights.
yield init_weight(prev_layer, layer_size)
# Biases.
yield init_weight(1, layer_size)
# Keep track of last layer size.
prev_layer = layer_size
# Weights for gate coefficients (output layers).
for prev_gates, gate in enumerate(gates):
num_outputs = num_registers + prev_gates
for _ in range(gate.arity):
# Weights.
yield init_weight(prev_layer, num_outputs)
# Biases.
yield init_weight(1, num_outputs)
# Weights for new register value coefficients (output layers).
num_outputs = num_registers + len(gates)
for _ in range(num_registers):
# Weights.
yield init_weight(prev_layer, num_outputs)
# Biases.
yield init_weight(1, num_outputs)
# Weights for willingness to complete computation output.
yield init_weight(prev_layer, 1)
# Biases.
yield init_weight(1, 1)
from theano.tensor.nnet import softmax, relu, sigmoid
# Define a helper function for deconstructing
# the parameter list.
def take(values, i):
Return the next pair of weights and biases after the
starting index and the new starting index.
return values[i], values[i + 1], i + 2
def mlp_forward_prop(num_registers, num_layers, gates,
registers, params):
Run forward propogation on the register machine (one step).
# Extract 0th component from all registers.
last_layer = registers[:, :, 0]
# Propogate forward to hidden layers.
idx = 0
for _ in range(num_layers):
W, b, idx = take(params, idx)
last_layer = relu(last_layer.dot(W) + b)
# Propogate forward to gate coefficient outputs.
# In the result list, each result is a list of
# coefficients, as gates may have 0, 1, or 2 inputs.
controller_coefficients = []
for gate in gates:
coeffs = []
for _ in range(gate.arity):
W, b, idx = take(params, idx)
coeffs.append(softmax(last_layer.dot(W) + b))
controller_coefficients.append(coeffs)
# Forward propogate to new register value coefficients.
for _ in range(num_registers):
W, b, idx = take(params, idx)
coeffs = softmax(last_layer.dot(W) + b)
controller_coefficients.append(coeffs)
# Forward propogate to generate willingness to complete.
W, b, idx = take(params, idx)
complete = sigmoid(last_layer.dot(W) + b)
return controller_coefficients, complete
def step_machine(gates, max_int, num_registers, num_layers,
registers, params):
Run a single timestep of the machine.
# Run single-step forward propagation.
controller_out = mlp_forward_prop(num_registers, num_layers, gates,
registers, params)
coefficients, complete = controller_out
# Using the generated coefficients, advance the registers.
new_registers = run_circuit(registers, gates,
coefficients, max_int)
return new_registers, complete
def log_prob_correct(registers, desired_output, num_registers, max_int):
Compute log-probability of correctness over all registers.
cost = 0
# Add epsilon to every log to avoid having inf in costs.
epsilon = 1e-100
for i in range(num_registers):
# Create a mask to extract just the values we want.
# This has the same shape as the registers, and a one
# on components we want to include in the sum, a zero
# on components we don't wish to include in the sum.
mask = to_one_hot(desired_output[:, i], max_int)
# Compute the loss for this register using the mask.
loss = (mask * tensor.log(registers[:, i, :] + epsilon)).sum(axis=1)
# Accumulate costs over all registers, keeping the dimensions S x 1.
cost += tensor.shape_padright(loss, 1)
return cost
def step_cost(gates, max_int, desired_output, max_timesteps,
num_registers, num_layers, timestep, registers,
cost, cum_prob_complete, prob_incomplete, params):
# Run the machine forward one step.
machine_result = step_machine(gates, max_int, num_registers,
num_layers, registers, params)
registers, complete = machine_result
# Complete the probability that the algorithm is done
# after this step. Force the algorithm to complete after
# T timesteps.
if timestep == max_timesteps:
prob_complete = 1 - cum_prob_complete
else:
prob_complete = complete * prob_incomplete
# Update the probability that the computation isn't complete
prob_incomplete *= 1 - complete
# Accumulate the probability that a result has been produced.
cum_prob_complete += prob_complete
# Cost for this timestep.
unscaled_cost = log_prob_correct(registers, desired_output,
num_registers, max_int)
scaled_cost = prob_complete * unscaled_cost
cost -= scaled_cost
return registers, cost, cum_prob_complete, prob_incomplete
from theano.tensor import dtensor3, imatrix
# This function is a boring utility function.
# Main code of interest is below in `run`.
def make_broadcastable(weights):
Shared variables (the weights of the controller) are
not broadcastable by default. We need to make them broadcastable
to use them. This function does so manually.
broadcastable = []
for var in weights:
# Only make biases broadcastable.
if var.get_value().shape[0] == 1:
# Keep the name the same.
name = var.name
var = tensor.addbroadcast(var, 0)
var.name = name
broadcastable.append(var)
return broadcastable
def run(gates, num_registers, max_int,
num_timesteps, num_layers, reg_lambda, params):
params = make_broadcastable(params)
# Create symbolic variables for the input to the machine
# and for the desired output of the machine.
initial_registers = dtensor3("R")
desired_registers = imatrix("Y")
# Run the model for all timesteps. The arguments are
# registers, cost, cumulative probability complete,
# and probability incomplete. The latter are initialized
# to zero and to one, respectively.
v0 = as_tensor(0)
v1 = as_tensor(1)
output = [initial_registers, v0, v0, v1]
registers = []
for timestep in range(num_timesteps):
output = step_cost(gates, max_int, desired_registers,
num_timesteps, num_registers,
num_layers, timestep + 1,
*output, params)
registers.append(output[0])
# Add in regularization, to avoid overfitting simple examples.
reg_cost = reg_lambda * sum((p * p).sum() for p in params)
# Get the final cost: regularization plus loss.
final_cost = reg_cost + output[1].sum()
# Return the symbolic variables, the final cost, and the
# intermediate register values for analysis and prediction.
return initial_registers, desired_registers, final_cost, registers
def example_circuit(r1, r2, r3):
Python implementation of our example circuit.
return int(r2 - 1 == 0), r3, r1
import itertools
# Generate all sample inputs.
max_int = 50
num_registers = 3
inputs = list(itertools.product(range(max_int),
repeat=num_registers))
# Generate all sample outputs.
outputs = [example_circuit(*values) for values in inputs]
def encode(samples):
Convert inputs to one-hot matrix form.
The result is shape (S, R, M), where, as usual:
S - num samples, R - num registers, M - max int
samples = np.asarray(samples)
# Encode each register separately.
# to_one_hot requires a 1-d vector.
encoded = []
for i in range(num_registers):
encoded.append(to_one_hot(samples[:, i], max_int).eval())
return np.asarray(encoded).swapaxes(0, 1)
inputs = encode(inputs)
outputs = np.asarray(outputs, dtype=np.int32)
# Find the underrepresented class indices: R2 = 1
idxs = inputs[:, 1, 1] == 1
# Oversample the class 48X to make it equal to R2 != 1 class.
inputs_plus = np.repeat(inputs[idxs, :, :], 48, axis=0)
outputs_plus = np.repeat(outputs[idxs, :], 48, axis=0)
# Create new data set with approximately equal class sizes.
inputs = np.concatenate((inputs, inputs_plus))
outputs = np.concatenate((outputs, outputs_plus))
# Randomly shuffle the indices 0..S - 1.
# Use these indices to shuffle inputs and outputs in unison.
num_samples = inputs.shape[0]
shuffle_order = np.random.permutation(num_samples)
# Use 30% of our data for testing, 70% for training.
test_ratio = 0.3
test_indices = shuffle_order[:int(test_ratio * num_samples)]
train_indices = shuffle_order[int(test_ratio * num_samples):]
test_inputs = inputs[test_indices, :, :]
test_outputs = outputs[test_indices, :]
train_inputs = inputs[train_indices, :, :]
train_outputs = outputs[train_indices, :]
def adam_optimize(params, train, train_inputs, train_outputs,
alpha=0.001, b1=0.9, b2=0.999,
epsilon=1e-8, batch_size=1000):
Implementation of Adam optimization method, with hyperparameters
taken as recommended by the original publication.
# Initialize first and second moment estimates to zero.
# This causes some bias, which is addressed later.
moment1 = [0 for _ in params]
moment2 = [0 for _ in params]
timestep = 0 # Current optimization step
batch = 0 # Where does this batch start
converged = False
while not converged:
timestep += 1
# Train on a small batch.
inputs = train_inputs[batch:batch+batch_size, :, :]
outputs = train_outputs[batch:batch+batch_size, :]
cost, *gradients = train(inputs, outputs)
# Advance to next batch.
batch = (batch + batch_size) % train_inputs.shape[0]
# Compute first and second moment estimates.
# These are decaying moving averages; first moment
# uses the gradient, second uses squared gradient.
moment1 = [b1 * m + (1 - b1) * gradient
for (m, gradient)
in zip(moment1, gradients)]
moment2 = [b2 * v + (1 - b2) * gradient ** 2
for (v, gradient)
in zip(moment2, gradients)]
# Correct for initialization bias and compute new values.
correction1 = 1. / (1 - b1 ** timestep)
correction2 = 1. / (1 - b2 ** timestep)
corrected1 = [correction1 * m for m in moment1]
corrected2 = [correction2 * v for v in moment2]
# Compute new parameter values.
params_new = [p.get_value() - alpha * m1 / (np.sqrt(m2) + epsilon)
for (p, m1, m2) in zip(params, corrected1, corrected2)]
# Check for convergence by looking at magnitude of delta.
delta = [abs(p.get_value() - p_new)
for (p, p_new) in zip(params, params_new)]
converged = all((d < 0.5 * alpha).all() for d in delta)
# Update parameters to new values.
for p, p_new in zip(params, params_new):
p.set_value(p_new.astype('float32'))
# Provide some output for tracking during runtime.
if timestep % 100 == 1 or converged:
print("Cost (t = %4d): \t%.2f" % (timestep - 1, cost))
# Initialize neural network parameters.
layer_sizes = [5, 5]
params = list(mlp_weights(num_registers, layer_sizes, gates))
# Initialize cost and gradients.
num_timesteps = 2
reg_lambda = 0.1
result = run(gates, num_registers, max_int,
num_timesteps, len(layer_sizes),
reg_lambda, params)
init_registers, desired_registers, cost, registers = result
gradients = theano.grad(cost, params)
# Compile training function to compute gradients.
train = theano.function([init_registers, desired_registers],
[cost] + gradients)
# Compile prediction function (registers after one timestep)
predict = theano.function([init_registers], registers[0])
adam_optimize(params, train, train_inputs, train_outputs)
def percent_correct(inputs, outputs):
Compute the percent of examples that were computed correctly.
# Convert the one-hot encoding to integers.
result = predict(inputs).argmax(axis=2)
# Check how many of the registers for each sample
# had the expected result.
num_eq = (result == outputs).sum(axis=1)
# A sample was correct if *all* of the registers
# were correct. Count correct samples.
all_eq = (num_eq == inputs.shape[1]).sum()
# Return ratio of correct samples times 100.
return 100 * float(all_eq) / inputs.shape[0]
print("Correct: %.1f%%" % percent_correct(test_inputs, test_outputs))
def read(ptr, max_int, mem):
Gate to read from the memory tape.
Return a tuple of the result and the new (unmodified) memory bank.
ptr: Distribution over locations to read from.
mem: Memory bank.
ptr – S x M'
mem – S x M' x M
where
S – number of samples being operated on
M' – number of memory locations
M – number of representable integers
M = M' always, but distinguishing between them helps keep track of semantics.
# Reading from memory ends up being a matrix multiplication (per sample),
# but getting it right just involves shuffling the dimensions to be right:
#
# m = mem.transpose(0, 2, 1): S x M x M'
# p = shape_padright(ptr, 1): S x M' x 1
# batched_dot(m, p): S x M x 1
# batched_dot(m, p).flatten(2): S x M
return batched_dot(mem.transpose(0, 2, 1),
shape_padright(ptr, 1)).flatten(2), mem
def write(ptr, val, max_int, mem):
Gate to write to the memory tape.
Return a tuple of the written value and the modified memory bank.
ptr: Distribution over locations to write to.
val: Distribution over values to write.
mem: Memory bank.
ptr – S x M'
ptr – S x M
mem – S x M' x M
where
S – number of samples being operated on
M' – number of memory locations
M – number of representable integers
M = M' always, but distinguishing between them helps keep track of semantics.
# As with reading, tracking the dimensions makes this operation simple.
# We want to compute an "old" contribution and a "new" contribution.
# p = shape_padright(ptr, 1): S x M' x 1
# v = val.dimshuffle(0, 'x', 1): S x 1 x M
# 1 - ptr: S x M'
# J = shape_padright(1 - ptr, 1): S x M' x 1
# old = J * mem: S x M' x M (broadcasting)
# new = p * v: S x M' x M
p = shape_padright(ptr, 1)
v = val.dimshuffle(0, 'x', 1)
j = shape_padright(1 - ptr, 1)
new_mem = j * mem + batched_dot(p, v)
return val, new_mem
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing Gates
Step4: Constant Gates
Step5: <hr/>
Step7: For more info on broadcasting, take a look at the Theano documentation on broadcasting.
Step9: In order to implement negate, we must recognize that arithmetic in our formulation operates modulo $M$. Negative values simply correspond to other positive integers. Specifically, the mapping from values to their negatives is
Step11: Binary Gates
Step12: We use the transpose method to transpose the last two dimension but leave the first one alone (since it contains the parallel samples). This gate can be tricky, so I recommend using it as a self-test in the exercise below!
Step14: Exercise
Step15: This operation is a straight-forward matrix multiplication, made much less straight-forward by the need to massage the dimensions into a form where applying the operation to many samples at once works. We can confirm that it works with one sample
Step17: Exercise
Step18: Finally, since we can now run one gate, we can combine all our gates into a full circuit. The full circuit runs through all the gates, generating outputs, and then generates new values for the registers, similarly using the controller coefficients.
Step20: Testing this requires a bit more setup than before, as it encompasses almost the entire complexity of our system. We will create a simple system, with three registers, four gates, and $M = 5$. We must define the initial register values, the order of the gates, and the connections between the registers, the gates, and the new register values. The connections are set by choosing appropriate controller coefficients.
Step21: With the inputs set up, we can run our circuit and observe the outputs of each of the gates and the new values of each register. We have four gates and three new register values
Step23: Implementing the Controller
Step25: The neural network will start with an input layer, drawing from the registers; it will then have one or more hidden layers; finally, from the hidden layer the neural network will generate all of its outputs (controller coefficients and willingness to complete the computation).
Step28: Using these weights, we can define the outputs of the network by implementing forward propagation. All intermediate layers use rectified linear units (ReLU), and the output layers must sum to one, so they use a softmax. The willingness to complete computation is a single number between zero and one, so it uses a sigmoid unit.
Step30: We can now run a single timestep of our register machine from end to end. However, without training it, it does not output anything interesting.
Step32: Implementing the Cost Function
Step33: Although we could run forward propogation, collect all the results, and use them to compute the cost, this would require storing in memory all the results. Instead, we can compute the cost during the forward propogation, thus reducing the amount of data we need to actively store. Observe that in the following code we accumulate the cost over each timestep.
Step35: Putting this all together, we run the machine for $T$ timesteps, passing the outputs at each step as the next inputs
Step37: Preparing Training Data
Step38: Given the limited number of registers and integer values, we can generate all possible inputs and all corresponding outputs as training data. itertools.product makes this very easy to do by generating the Cartesian product of our integer set
Step40: Right now the inputs and outputs are stored as lists of tuples, whereas Theano and Numpy work with arrays, and expect the input to be one-hot encoded. Let's address that by encoding the inputs and storing the data as Numpy arrays
Step41: These input and output arrays are suitable to train on, but have a subtle, lurking issue. Namely
Step42: Exercise
Step44: Adam Optimization Method
Step45: Training with Backpropagation
Step46: Adam takes about a thousand iterations to fully train the model, printing the cost every 100 iterations
Step48: Now that we have a trained model, we can evaluate its performance on our test set
Step50: Our model managed to learn to solve this problem with 100% correctness – unsurprising, given that it was actually just learning to produce a constant circuit, completely independent of the registers!
Step52: Exercise
|
15,170 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
import pyxis as px
rng = np.random.RandomState(1234)
nb_samples = 10
X = rng.rand(nb_samples, 254, 254, 3)
y = np.arange(nb_samples, dtype=np.uint8)
db = px.Writer(dirpath='data', map_size_limit=30, ram_gb_limit=1)
db.put_samples('X', X, 'y', y)
db.close()
db = px.Reader('data')
gen = px.SimpleBatch(db, keys=('X', 'y'), batch_size=5, shuffle=False)
for i in range(4):
xs, ys = next(gen)
print()
print('Iteration:', i, '\tTargets:', ys)
if gen.end_of_dataset:
print('We have reached the end of the dataset')
gen = px.SimpleBatch(db, keys=('X', 'y'), batch_size=3, shuffle=False)
for i in range(6):
xs, ys = next(gen)
print()
print('Iteration:', i, '\tTargets:', ys)
if gen.end_of_dataset:
print('We have reached the end of the dataset')
gen = px.SimpleBatch(db, keys=('y'), batch_size=5, shuffle=True, rng=rng)
for i in range(6):
ys = next(gen)
print()
print('Iteration:', i, '\tTargets:', ys)
if gen.end_of_dataset:
print('We have reached the end of the dataset')
gen = px.StochasticBatch(db, keys=('y'), batch_size=5, rng=rng)
for i in range(10):
ys = next(gen)
print('Iteration:', i, '\tTargets:', ys)
gen = px.SequentialBatch(db, keys=('y'), batch_size=3)
for i in range(10):
ys = next(gen)
print('Iteration:', i, '\tTargets:', ys)
class SquareTargets(px.SimpleBatchThreadSafe):
def __init__(self, db, keys, batch_size):
super(SquareTargets, self).__init__(db, keys, batch_size,
shuffle=False,
endless=False)
def __next__(self):
with self.lock:
X, y = next(self.gen)
y = y ** 2
return X, y
gen = SquareTargets(db, keys=('X', 'y'), batch_size=2)
print('Squared targets:')
for _, y in gen:
print(y)
db.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For consistency, we will be using a random number generator with a seed for some of the iterators.
Step2: Let's start by creating a small dataset of 10 samples. Each input is a randomly generated image with shape (254, 254, 3), while the targets are scalar values.
Step3: The data is written using the pyxis writer.
Step4: Using batch iterators
Step5: Example 1 - Number of samples is a multiple of the batch size
Step6: All the iterators that come with pyxis have the mandatory keys argument. The data returned by the iterator will be the values these keys point to. The order of the keys matters. For example, when using the keys ('a', 'b') the iterator will return (a_val, b_val), where a_val and b_val are the values associated with the keys 'a' and 'b', respectively.
Step7: Example 2 - Number of samples is not a multiple of the batch size
Step8: The artificial dataset has 10 samples, so by letting the batch size be 3 it will take four iterations to go through the whole dataset. The artificial targets for six batches are printed out to showcase this.
Step9: Example 3 - Shuffling of data
Step10: Example 4 - Stochastic batch iterator
Step11: Example 5 - Sequential batch iterator
Step12: Example 6 - Thread-safe iterators
Step13: SquareTargets can now be used to generate batches of data from the LMDB.
Step14: Close everything
|
15,171 | <ASSISTANT_TASK:>
Python Code:
!ls
# !curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD
import pandas
import pandas as pd
data = pd.read_csv('pronto.csv')
data.head()
data.tail()
data.shape
data.columns
data.index
data.dtypes
data.columns
data['tripduration']
data['tripduration'] / 60
data['tripminutes'] = data['tripduration'] / 60
data.head()
data.to_csv('pronto-new.csv')
!ls
import numpy as np
np.exp(data['tripminutes'])
data['starttime'].head()
pd.to_datetime(data['starttime'].head())
times = pd.DatetimeIndex(data['starttime'].head())
times.dayofweek
data['starttime']
times = pd.DatetimeIndex(pd.to_datetime(data['starttime'], format="%m/%d/%Y %I:%M:%S %p"))
times.dayofweek
times.month
times
data.head()
pd.value_counts(data['gender'])
pd.value_counts(data['birthyear'])
pd.value_counts(data['birthyear'], sort=False)
pd.value_counts(times.dayofweek, sort=False)
pd.value_counts(times.month, sort=False)
pd.value_counts(data['gender'], dropna=False)
from IPython.display import Image
Image('split_apply_combine.png')
data.groupby('gender').count()
data.groupby('gender').mean()
data.groupby('gender')['tripminutes'].mean()
data.groupby([times.hour, 'gender'])['tripminutes'].mean()
grouped = data.groupby([times.hour, 'gender'])['tripminutes'].mean().unstack()
grouped
%matplotlib inline
grouped.plot()
import matplotlib.pyplot as plt
plt.style.use('seaborn')
grouped.plot()
grouped.plot.bar()
ax = grouped.plot.bar()
ax.set_xlim(-1, 10)
data['month'] = times.month
ax = data.groupby('month')['trip_id'].count().plot.bar();
data.groupby(['month','gender'])['trip_id'].count().unstack().plot();
data.groupby(['month','usertype'])['trip_id'].count().unstack().plot();
data['hour'] = times.month
ax = data.groupby('hour')['trip_id'].count().plot.bar();
data.groupby(['hour','gender'])['trip_id'].count().unstack().plot();
data.groupby(['hour','usertype'])['trip_id'].count().unstack().plot();
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: uncomment this to download the data
Step2: Loading Data with Pandas
Step3: Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern
Step4: Now we can use the read_csv command to read the comma-separated-value data
Step5: Note
Step6: The shape attribute shows us the number of elements
Step7: The columns attribute gives us the column names
Step8: The index attribute gives us the index names
Step9: The dtypes attribute gives the data types of each column
Step10: 4. Manipulating data with pandas
Step11: Mathematical operations on columns happen element-wise
Step12: Columns can be created (or overwritten) with the assignment operator.
Step13: Note that this manipulation only modifies the data frame in memory; if you want to save the modified dataframe to CSV, you can use the to_csv() method
Step14: More complicated mathematical operations can be done with tools in the numpy package
Step15: Working with Times
Step16: (Note
Step17: Simple Grouping of Data
Step18: Value Counts
Step19: Or to break down rides by age
Step20: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off
Step21: We can explore other things as well
Step22: Group-by Operation
Step23: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
Step24: It's also possible to indes the grouped object like it is a dataframe
Step25: You can even group by multiple values
Step26: The unstack() operation can help make sense of this type of multiply-grouped data. What this technically does is split a multiple-valued index into an index plus columns
Step27: 5. Visualizing data with pandas
Step28: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data
Step29: Adjusting the Plot Style
Step30: Other plot types
Step31: If you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() method of the resulting object
Step32: Breakout
Step33: Split this plot by gender. Do you see any seasonal ridership patterns by gender?
Step34: Split this plot by user type. Do you see any seasonal ridership patterns by usertype?
Step35: Repeat the above three steps, counting the number of rides by time of day rather than by month.
|
15,172 | <ASSISTANT_TASK:>
Python Code:
L = k(X, .7)
D = diag(L)
M = inv(D).dot(L)
# Mi,j denotes the transition probability
# from the point xi to the point xj in one time step
print M
L/L.sum(axis=1).reshape(-1,1)
Ms = (diag(D,.5)).dot(M).dot(diag(D,-.5))
Ms
p = L.sum(axis=1)
for i in range(0,3):
a = []
for j in range(0,3):
a.append(L[i,j]/(p[i]*p[j])**.5)
print a
w, v0, v1 = eig(Ms, left=True)
w = w.real
print '%s\n%s' % (w, v0)
Ms.dot(v0)
w * v0
w = w[::-1]
phi = v0.T
phi = phi[::-1]
#print w, '\n', phi
print w, '\n', phi
psi = []
for i in range(3):
psi.append([])
for j in range(2):
psi[i].append(phi[j+1,i]/M[i])
psi
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for p in psi:
ax.scatter(p[0][0],p[0][1],p[0][2])
ax.scatter(p[1][0],p[1][1],p[1][2])
#print p[0][0], p[0][1], p[0][2]
l = w.real[::-1]
print l
psi = v0.T[::-1]
print psi
phi = []
for i in range(3):
phi.append(l[i] * psi[i])
phi
pairwise_distances(phi[1:])**2
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Equivalent to
Step2: $
Step3: Equivalent to
Step4:
Step5: $
Step6: Implementation
Step7: $$
|
15,173 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from owslib.sos import SensorObservationService
import pdb
from owslib.etree import etree
import pandas as pd
import datetime as dt
import numpy as np
url = 'http://sdf.ndbc.noaa.gov/sos/server.php?request=GetCapabilities&service=SOS&version=1.0.0'
ndbc = SensorObservationService(url)
# usgs woods hole
# buoy data (single current meter)
url='http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-AA.cdf'
usgs = SensorObservationService(url)
contents = usgs.contents
usgs.contents
off = usgs.offerings[1]
off.name
off.response_formats
off.observed_properties
off.procedures
# the get observation request below works. How can we recreate this using OWSLib?
# http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-A1H.cdf?service=SOS&version=1.0.0&request=GetObservation&responseFormat=text%2Fxml%3Bsubtype%3D%22om%2F1.0.0%22&offering=1211-A1H&observedProperty=u_1205&procedure=urn:ioos:station:gov.usgs:1211-A1H
#pdb.set_trace()
response = usgs.get_observation(offerings=['1211-AA'],
responseFormat='text/xml;subtype="om/1.0.0"',
observedProperties=['http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity'],
procedure='urn:ioos:station:gov.usgs:1211-AA')
print(response[0:4000])
# usgs woods hole ADCP data
# url='http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/9111aqd-a.nc'
# adcp = SensorObservationService(url)
root = etree.fromstring(response)
print(root)
# root.findall(".//{%(om)s}Observation" % root.nsmap )
values = root.find(".//{%(swe)s}values" % root.nsmap )
date_value = np.array( [ (dt.datetime.strptime(d,"%Y-%m-%dT%H:%M:%SZ"),float(v))
for d,v in [l.split(',') for l in values.text.split()]] )
ts = pd.Series(date_value[:,1],index=date_value[:,0])
ts.plot(figsize=(12,4), grid='on');
start = '1977-01-03T00:00:00Z'
stop = '1977-01-07T00:00:00Z'
response = usgs.get_observation(offerings=['1211-AA'],
responseFormat='text/xml;subtype="om/1.0.0"',
observedProperties=['http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity'],
procedure='urn:ioos:station:gov.usgs:1211-AA',
eventTime='{}/{}'.format(start,stop))
root = etree.fromstring(response)
values = root.find(".//{%(swe)s}values" % root.nsmap )
date_value = np.array( [ (dt.datetime.strptime(d,"%Y-%m-%dT%H:%M:%SZ"),float(v))
for d,v in [l.split(',') for l in values.text.split()]] )
ts = pd.Series(date_value[:,1],index=date_value[:,0])
ts.plot(figsize=(12,4), grid='on');
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now try setting time range via eventTime.
|
15,174 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim)
out = tf.tanh(logits)
return out, logits
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model, g_logits = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
!mkdir checkpoints
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples, _ = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples, _ = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
|
15,175 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Tuple of form: (Seq, Stringency, Kd)
validated_core_seqs = [ ('TTTGGTGGATAGTAA', 1, '< 512 nM'),
('AGAGGATTTGGTGGATAGT', 0, '> 512nM'),
('AGAGGATTTGGTGGATAGTAAAT', 3, '< 32 nM'),
('GAGGATTTGGTGGATAGTAAATC', 4, '< 8 nM'),
('GAGGATTTGGTGGATAGTAAATCTTTG', 4, '< 8 nM'),
('AAGAGGATTTGGTGGATAGTAAATCTT', 4, '< 8 nM'),
('CAAGAGGATTTGGTGGATAGTAAATCTTTGC', 4, '< 8 nM'),
('GATAGTAAATCTTTGCCTATCCA', 0, '> 512nM'),
('GTGGATAGTAAATCTTTGCCTATCCAG', 0, '> 512nM'),
('TTTGGTGGATAGTAAATCTTTGC', 0, '> 512nM'),
('CAAGAGGATTTGGTGGATAGTAAATCTTTGCCTAT', 3, '< 32 nM'),
('CAAGAGGATTTGGTGGATAGTAAATCTTTGCCTATCCAG', 3, '< 32 nM'),
('GTTTTTGGTGGATAG', 0, '> 512nM'),
('GTTTTTGGTGGATAGCAAA', 3, '< 32 nM'),
('ACGTTTTTGGTGGATAGCAAATG', 3, '< 32 nM'),
('ACGTTTTTGGTGGATAGCAAATGCCAG', 3, '< 32 nM'),
('ACGTTTTTGGTGGATAGCAAATGCCAGGGCC', 3, '< 32 nM'),
('ACGTTTTTGGTGGATAGCAAATGCCAGGGCCCTTT', 3, '< 32 nM'),
('ACGTTTTTGGTGGATAGCAAATGCCAGGGCCCTTTTTTG', 3, '< 32 nM'),
('GGACTGGTGGATAGT', 0, '> 512nM'),
('CGGACTGGTGGATAGTAGA', 1, '< 512 nM'),
('CGGACTGGTGGATAGTAGAGCTG', 0, '> 512nM'),
('CACGGACTGGTGGATAGTAGAGC', 1, '< 512 nM'),
('CACGGACTGGTGGATAGTAGAGCTGTG', 3, '< 32 nM'),
('GCACGGACTGGTGGATAGTAGAGCTGTGTGA', 2, '< 128 nM'),
('CACGGACTGGTGGATAGTAGAGCTGTGTGAGGTCG', 0, '> 512nM'),
('CGCACGGACTGGTGGATAGTAGAGCTGTGTGAGGT', 2, '< 128 nM'),
('GTCGCACGGACTGGTGGATAGTAGAGCTGTGTGAGGTCG', 3, '< 32 nM'),
('GATGGTGGCTGGATAGTCA', 3, '< 32 nM'),
('GATGGTGGCTGGATAGTCACCTAGTGTCTGG', 3, '< 32 nM')]
validated_core_seq_df = pd.DataFrame(validated_core_seqs, columns=['core_seq', 'stringency_level', 'Kd'])
# All ML score truncations for G12 and G13 in all screened sequence contexts.
# Upload truncation_option_seed_scores_manuscript.csv
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
with open('truncation_option_seed_scores_manuscript.csv') as f:
truncation_scores_df = pd.read_csv(f)
trunc_validated_df = truncation_scores_df.merge(validated_core_seq_df, how='inner', on='core_seq')
trunc_validated_df['core_seq_len'] = trunc_validated_df['core_seq'].apply(len)
trunc_validated_df_median = trunc_validated_df[
[u'core_seq', u'seq', u'seed_seq',
u'seed_label', u'model_score',
u'model_delta', 'core_seq_len',
'inferer_name', 'Kd',
'stringency_level']].groupby(
[u'core_seq', u'seed_seq', 'core_seq_len',
u'seed_label', 'Kd',
u'inferer_name']).median().reset_index()
trunc_validated_df_var = trunc_validated_df[[u'core_seq', u'seq', u'seed_seq',
u'seed_label', u'model_score',
u'model_delta', 'core_seq_len',
'inferer_name', 'Kd',
'stringency_level']].groupby(
[u'core_seq', u'seed_seq', 'core_seq_len',
u'seed_label', 'Kd',
u'inferer_name']).var().reset_index()
# Join Median and Var into one table to Summarize
trunc_validated_df_median_var = trunc_validated_df_median.merge(
trunc_validated_df_var,
on=['core_seq', 'inferer_name', 'seed_label', 'seed_seq', 'core_seq_len', 'Kd'],
suffixes=('_median', '_var'))
trunc_validated_df_median_var[trunc_validated_df_median_var.inferer_name == 'SuperBin'].sort_values(
by=['seed_label', 'core_seq_len'],
ascending=False)[['seed_label', 'core_seq', 'core_seq_len', 'Kd', 'inferer_name',
'model_score_median', 'model_score_var']]
def plot_swarm_and_box_plots (median_df, full_df, inferer_name, seed_label):
"""Plots swarm and boxplots for truncated sequences.
Args:
median_df: (pd.DataFrame) Median model scores for truncated sequence.
full_df: (pd.DataFrame) All model scores evaluated for each truncation.
inferer_name: (str) Name of model to plot data from (e.g. SuperBin)
seed_label: (str) Seed sequence for which to plot truncations.
"""
# Subset out seed and model for inference
seed_median_df = median_df[(median_df.inferer_name == inferer_name) &
(median_df.seed_label == seed_label)].copy()
seed_full_df = full_df[(full_df.inferer_name == inferer_name) &
(full_df.seed_label == seed_label)].copy()
# Use the median df to sort the data by relative model scores
seed_median_df = seed_median_df.sort_values('model_delta', ascending=False)
# Create an offset to enable spacing between values of the same core seq len
core_seq_offset_dict = {}
for core_seq_len in seed_median_df.core_seq_len.unique():
for i, core_seq in enumerate(seed_median_df[seed_median_df.core_seq_len == core_seq_len].core_seq):
core_seq_offset_dict[core_seq] = i
# Apply these offsets back to the full set of evaluated points as well as medians
seed_full_df['seq_len_offset'] = seed_full_df['core_seq'].apply(
lambda x: core_seq_offset_dict[x])
seed_full_df['seq_len_mod'] = seed_full_df['core_seq_len'] + seed_full_df['core_seq'].apply(
lambda x: float(5 * core_seq_offset_dict[x]) / 10.)
seed_full_df['seq_len_mod2'] = -1 * seed_full_df['seq_len_mod']
# Create a categorical to enable ordering of colors
seed_full_df['$K_D$'] = pd.Categorical(seed_full_df['Kd'],
categories=['> 512nM', '< 512 nM', '< 128 nM', '< 32 nM', '< 8 nM'],
ordered=True)
# Create an ordering to enable spacing of points.
boxplot_order = [-40. , -39.5, -39. ,
-35.5, -35. , -31.5,
-31. , 30.5 , -28, -27.5, -27.,
-25. , -24.5, -24. , -23.5, -23. ,
-19.5, -19. , 15.5,
-15., -14.5, ]
# Only show the sizes we actually evaluated core sequences.
boxplot_order_strs = [40 , '', 39 ,
'', 35 , '',
31 , '', '', '', 27,
'', '', '', '', 23 ,
'', 19 ,
'', 15, '']
# First render points via swarm
boxplot_order_strs = list(map(str, boxplot_order_strs))
plt.figure(figsize=(10, 5))
ax = sns.swarmplot(data=seed_full_df, x='seq_len_mod2', y='model_score',
edgecolor='k',
palette='Greens', hue='$K_D$', order=boxplot_order,
dodge=False, size=3, zorder=0, linewidth=.2)
for artist in zip(ax.artists):
artist.set_edgecolor('k')
# Render boxplot on top
ax =sns.boxplot(data=seed_full_df, x='seq_len_mod2', y='model_score',
color='white', hue='$K_D$', order=boxplot_order,
dodge=False, linewidth=3, boxprops={'facecolor':'None'},
showcaps=False, showfliers=False )
# Formatting of Figure
for l in ax.lines:
# set median line style
l.set_linestyle('-')
l.set_color('k')
l.set_linewidth(2)
l.set_solid_capstyle('butt')
l.set_alpha(0.5)
xloc, xlab = plt.xticks()
xloc_filt = []
boxplot_order_strs_filt = []
for i in range(len(boxplot_order_strs)):
if boxplot_order_strs[i] != '':
boxplot_order_strs_filt.append(boxplot_order_strs[i])
xloc_filt.append(xloc[i])
plt.xticks(xloc_filt, boxplot_order_strs_filt)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
# Figure 4A
plot_swarm_and_box_plots(trunc_validated_df_median, trunc_validated_df,
'SuperBin', 'G12')
# Figure 4A
plot_swarm_and_box_plots(trunc_validated_df_median, trunc_validated_df,
'SuperBin', 'G13')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Experimental PD affinity estimate
Step2: Load in Data
Step3: Merge experimental and model score data
Step4: Examine distribution of scores for sequences
Step6: Generate Figure Plots
|
15,176 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# Import TensorFlow and tf.keras
import tensorflow.compat.v1 as tf
from tensorflow import keras
# Import helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('Test accuracy:', test_acc)
predictions = model.predict(test_images)
predictions[0]
np.argmax(predictions[0])
test_labels[0]
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
# Plot the first X test images with their predicted labels and the true labels
# Color correct predictions in blue and incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
# Grab an image from the test set
img = test_images[0]
print(img.shape)
# Add the image to a batch even though it is a single image
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
plt.xticks(range(10), class_names, rotation=45)
plt.show()
prediction_result = np.argmax(predictions_single[0])
print(prediction_result)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train your first neural network
Step2: Import the Fashion MNIST dataset
Step3: Calling load_data() returns four NumPy arrays
Step4: Explore the data
Step5: Likewise, there are 60,000 labels in the training set
Step6: Each label is an integer between 0 and 9
Step7: There are 10,000 images in the test set, again represented as 28x28 pixels
Step8: The test set contains the labels for the 10,000 images
Step9: Preprocess the data
Step10: Scale these values to a range of 0 to 1 before feeding them to the neural network model by dividing by 255. It is important that the training set and the test set be preprocessed in the same way
Step11: Display the first 25 images from the training set with each class name below, to verify that the data is in the correct format before building and training the network.
Step12: Build the model
Step13: The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a 2D array (28 x 28 pixels) to a 1D array of 28 * 28 = 784 pixels. This layer unstacks rows of pixels in the image and lines them up; it has no weights to learn and only reformats the data.
Step14: Train the model
Step15: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (88%) on the training data.
Step16: The accuracy on the test set is a little lower than on the training set. This gap between training accuracy and test accuracy is due to overfitting, which is when a machine learning model performs worse on new data than on its training data.
Step17: Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction
Step18: A prediction is an array of 10 numbers representing the model's confidence that the image corresponds to each of the 10 articles of clothing. Let's find the label with the highest confidence value
Step19: The model is most confident that this image is an ankle boot (class_names[9]). Check the test label to see whether this is correct
Step20: Let's graph all 10 confidence values
Step21: Let's look at the 0th image, its predictions, and the prediction array.
Step22: Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red; the number gives the percent confidence (out of 100) for the predicted label. Note that the model can be wrong even when the confidence score is high.
Step23: Finally, use the trained model to make a prediction about a single image.
Step24: tf.keras models are optimized to make predictions on a batch, or collection, of samples at once, so even when using a single image you need to add it to a batch
Step25: Now predict the label for this image
Step26: model.predict returns a 2D NumPy array, so select the prediction for the first image
|
15,177 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import PIL.Image
import time
import functools
def tensor_to_image(tensor):
tensor = tensor*255
tensor = np.array(tensor, dtype=np.uint8)
if np.ndim(tensor)>3:
assert tensor.shape[0] == 1
tensor = tensor[0]
return PIL.Image.fromarray(tensor)
content_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
# https://commons.wikimedia.org/wiki/File:Vassily_Kandinsky,_1913_-_Composition_7.jpg
style_path = tf.keras.utils.get_file('kandinsky5.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg')
def load_img(path_to_img):
max_dim = 512
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
shape = tf.cast(tf.shape(img)[:-1], tf.float32)
long_dim = max(shape)
scale = max_dim / long_dim
new_shape = tf.cast(shape * scale, tf.int32)
img = tf.image.resize(img, new_shape)
img = img[tf.newaxis, :]
return img
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
content_image = load_img(content_path)
style_image = load_img(style_path)
plt.subplot(1, 2, 1)
imshow(content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(style_image, 'Style Image')
import tensorflow_hub as hub
hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/1')
stylized_image = hub_module(tf.constant(content_image), tf.constant(style_image))[0]
tensor_to_image(stylized_image)
x = tf.keras.applications.vgg19.preprocess_input(content_image*255)
x = tf.image.resize(x, (224, 224))
vgg = tf.keras.applications.VGG19(include_top=True, weights='imagenet')
prediction_probabilities = vgg(x)
prediction_probabilities.shape
predicted_top_5 = tf.keras.applications.vgg19.decode_predictions(prediction_probabilities.numpy())[0]
[(class_name, prob) for (number, class_name, prob) in predicted_top_5]
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
print()
for layer in vgg.layers:
print(layer.name)
content_layers = ['block5_conv2']
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1']
num_content_layers = len(content_layers)
num_style_layers = len(style_layers)
def vgg_layers(layer_names):
"""Creates a VGG model that returns a list of intermediate layer outputs."""
# Load a VGG model pre-trained on the ImageNet dataset
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False
outputs = [vgg.get_layer(name).output for name in layer_names]
model = tf.keras.Model([vgg.input], outputs)
return model
style_extractor = vgg_layers(style_layers)
style_outputs = style_extractor(style_image*255)
# Look at the statistics of each layer's output
for name, output in zip(style_layers, style_outputs):
print(name)
print(" 크기: ", output.numpy().shape)
print(" 최솟값: ", output.numpy().min())
print(" 최댓값: ", output.numpy().max())
print(" 평균: ", output.numpy().mean())
print()
def gram_matrix(input_tensor):
result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)
input_shape = tf.shape(input_tensor)
num_locations = tf.cast(input_shape[1]*input_shape[2], tf.float32)
return result/(num_locations)
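# Illustrative aside (not part of the original tutorial): sanity-check that the einsum
# above matches the familiar "flatten the spatial dims, then F^T F / (height*width)"
# formulation of the Gram matrix. The _check/_flat/_manual names are illustrative only.
_check = tf.random.uniform((1, 4, 4, 8))
_flat = tf.reshape(_check, (-1, 8))
_manual = tf.matmul(_flat, _flat, transpose_a=True) / tf.cast(4 * 4, tf.float32)
print(tf.reduce_max(tf.abs(gram_matrix(_check)[0] - _manual)).numpy())  # ~0.0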
class StyleContentModel(tf.keras.models.Model):
def __init__(self, style_layers, content_layers):
super(StyleContentModel, self).__init__()
self.vgg = vgg_layers(style_layers + content_layers)
self.style_layers = style_layers
self.content_layers = content_layers
self.num_style_layers = len(style_layers)
self.vgg.trainable = False
def call(self, inputs):
"[0,1] 사이의 실수 값을 입력으로 받습니다"
inputs = inputs*255.0
preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)
outputs = self.vgg(preprocessed_input)
style_outputs, content_outputs = (outputs[:self.num_style_layers],
outputs[self.num_style_layers:])
style_outputs = [gram_matrix(style_output)
for style_output in style_outputs]
content_dict = {content_name:value
for content_name, value
in zip(self.content_layers, content_outputs)}
style_dict = {style_name:value
for style_name, value
in zip(self.style_layers, style_outputs)}
return {'content':content_dict, 'style':style_dict}
extractor = StyleContentModel(style_layers, content_layers)
results = extractor(tf.constant(content_image))
print('Styles:')
for name, output in sorted(results['style'].items()):
print(" ", name)
print(" 크기: ", output.numpy().shape)
print(" 최솟값: ", output.numpy().min())
print(" 최댓값: ", output.numpy().max())
print(" 평균: ", output.numpy().mean())
print()
print("콘텐츠:")
for name, output in sorted(results['content'].items()):
print(" ", name)
print(" 크기: ", output.numpy().shape)
print(" 최솟값: ", output.numpy().min())
print(" 최댓값: ", output.numpy().max())
print(" 평균: ", output.numpy().mean())
style_targets = extractor(style_image)['style']
content_targets = extractor(content_image)['content']
image = tf.Variable(content_image)
def clip_0_1(image):
return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)
opt = tf.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
style_weight=1e-2
content_weight=1e4
def style_content_loss(outputs):
style_outputs = outputs['style']
content_outputs = outputs['content']
style_loss = tf.add_n([tf.reduce_mean((style_outputs[name]-style_targets[name])**2)
for name in style_outputs.keys()])
style_loss *= style_weight / num_style_layers
content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2)
for name in content_outputs.keys()])
content_loss *= content_weight / num_content_layers
loss = style_loss + content_loss
return loss
@tf.function()
def train_step(image):
with tf.GradientTape() as tape:
outputs = extractor(image)
loss = style_content_loss(outputs)
grad = tape.gradient(loss, image)
opt.apply_gradients([(grad, image)])
image.assign(clip_0_1(image))
train_step(image)
train_step(image)
train_step(image)
tensor_to_image(image)
import time
start = time.time()
epochs = 10
steps_per_epoch = 100
step = 0
for n in range(epochs):
for m in range(steps_per_epoch):
step += 1
train_step(image)
print(".", end='')
display.clear_output(wait=True)
display.display(tensor_to_image(image))
print("훈련 스텝: {}".format(step))
end = time.time()
print("전체 소요 시간: {:.1f}".format(end-start))
def high_pass_x_y(image):
x_var = image[:,:,1:,:] - image[:,:,:-1,:]
y_var = image[:,1:,:,:] - image[:,:-1,:,:]
return x_var, y_var
x_deltas, y_deltas = high_pass_x_y(content_image)
plt.figure(figsize=(14,10))
plt.subplot(2,2,1)
imshow(clip_0_1(2*y_deltas+0.5), "Horizontal Deltas: Original")
plt.subplot(2,2,2)
imshow(clip_0_1(2*x_deltas+0.5), "Vertical Deltas: Original")
x_deltas, y_deltas = high_pass_x_y(image)
plt.subplot(2,2,3)
imshow(clip_0_1(2*y_deltas+0.5), "Horizontal Deltas: Styled")
plt.subplot(2,2,4)
imshow(clip_0_1(2*x_deltas+0.5), "Vertical Deltas: Styled")
plt.figure(figsize=(14,10))
sobel = tf.image.sobel_edges(content_image)
plt.subplot(1,2,1)
imshow(clip_0_1(sobel[...,0]/4+0.5), "Horizontal Sobel-edges")
plt.subplot(1,2,2)
imshow(clip_0_1(sobel[...,1]/4+0.5), "Vertical Sobel-edges")
def total_variation_loss(image):
x_deltas, y_deltas = high_pass_x_y(image)
return tf.reduce_sum(tf.abs(x_deltas)) + tf.reduce_sum(tf.abs(y_deltas))
total_variation_loss(image).numpy()
tf.image.total_variation(image).numpy()
total_variation_weight=30
@tf.function()
def train_step(image):
with tf.GradientTape() as tape:
outputs = extractor(image)
loss = style_content_loss(outputs)
loss += total_variation_weight*tf.image.total_variation(image)
grad = tape.gradient(loss, image)
opt.apply_gradients([(grad, image)])
image.assign(clip_0_1(image))
image = tf.Variable(content_image)
import time
start = time.time()
epochs = 10
steps_per_epoch = 100
step = 0
for n in range(epochs):
for m in range(steps_per_epoch):
step += 1
train_step(image)
print(".", end='')
display.clear_output(wait=True)
display.display(tensor_to_image(image))
print("훈련 스텝: {}".format(step))
end = time.time()
print("전체 소요 시간: {:.1f}".format(end-start))
file_name = 'stylized-image.png'
tensor_to_image(image).save(file_name)
try:
from google.colab import files
except ImportError:
pass
else:
files.download(file_name)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Neural style transfer with tf.keras
Step2: Download the images and choose a style reference image and a content image
Step3: Visualize the input
Step4: Define a simple function to display an image
Step5: Fast style transfer using TF-Hub
Step6: Define content and style representations
Step7: Now load a VGG19 model without the classification head, and list its layer names.
Step8: Choose intermediate layers from the model to represent the style and content of the image
Step10: Intermediate layers for style and content
Step11: Let's build the model using the function above
Step12: Calculate style
Step13: Extract style and content
Step14: When called on an image, this model returns the Gram matrices (style) of the style_layers and the content of the content_layers
Step15: Run gradient descent
Step16: Define a tf.Variable to contain the image to optimize and initialize it with the content image (the tf.Variable must have the same shape as the content image).
Step17: Since the pixel values are floats, define a function to clip them between 0 and 1
Step18: Create an optimizer. The reference paper recommends LBFGS, but Adam works well enough
Step19: To run the optimization, define the total loss as a weighted combination of the content and style losses
Step20: Use tf.GradientTape to update the image.
Step21: Run a few steps to test the implemented algorithm
Step22: Since it is working, run a longer optimization
Step23: Total variation loss
Step24: The images above show that the high-frequency components have increased.
Step25: The regularization loss is the sum of the absolute values of these deltas
Step26: This shows the quantity is computed correctly, but fortunately TensorFlow already includes a standard built-in function, so you don't have to implement the loss yourself
Step27: Re-run the optimization
Step28: Now use this weight in the train_step function
Step29: Reinitialize the variable to optimize
Step30: Run the optimization
Step31: Finally, save the result
|
15,178 | <ASSISTANT_TASK:>
Python Code:
from xml.etree import ElementTree as ET
document_tree = ET.parse( './data/mondial_database_less.xml' )
# print names of all countries
for child in document_tree.getroot():
print (child.find('name').text)
# print names of all countries and their cities
for element in document_tree.iterfind('country'):
print ('* ' + element.find('name').text + ':', end=''),
capitals_string = ''
for subelement in element.getiterator('city'):
capitals_string += subelement.find('name').text + ', '
print (capitals_string[:-2])
document = ET.parse( './data/mondial_database.xml' )
# print child and attributes
#for child in document.getroot():
# print (child.tag, child.attrib)
import pandas as pd
# Create a list of country and their Infant Mortality Rate
country_imr=[]
for country in document.getroot().findall('country'):
name = country.find('name').text
infant_mortality_rate = country.find('infant_mortality')
if infant_mortality_rate is not None:
infant_mortality_rate=infant_mortality_rate.text
else :
infant_mortality_rate = -1
country_imr.append((name, (float)(infant_mortality_rate)))
df = pd.DataFrame(country_imr, columns=['Country', 'Infant_Mortality_Rate'])
df_unknown_removed = df[df.Infant_Mortality_Rate != -1]
df_unknown_removed.set_index('Infant_Mortality_Rate').sort().head(10)
city_population=[]
for country in document.iterfind('country'):
for state in country.iterfind('province'):
for city in state.iterfind('city'):
try:
city_population.append((city.find('name').text, float(city.find('population').text)))
except:
next
for city in country.iterfind('city'):
try:
city_population.append((city.find('name').text, float(city.find('population').text)))
except:
next
df = pd.DataFrame(city_population, columns=['City', 'Population'])
#df.info()
df.sort_index(by='Population', ascending=False).head(10)
ethnic_population={}
country_population={}
for country in document.iterfind('country'):
try:
country_population[country.find('name').text]= float(country.find('population').text)
except:
next
for state in country.iterfind('province'):  # note: 'province' or 'state' short-circuits to just 'province'
try:
country_population[country.find('name').text] += float(state.find('population').text)
except:
next
for city in state.iterfind('city'):
try:
country_population[country.find('name').text] += float(city.find('population').text)
except:
next
for country in document.iterfind('country'):
for ethnicgroup in country.iterfind('ethnicgroup'):
try:
if ethnicgroup.text in ethnic_population:
ethnic_population[ethnicgroup.text] += country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100
else:
ethnic_population[ethnicgroup.text] = country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100
except:
next
pd.DataFrame(sorted(ethnic_population.items(), key=lambda x:x[1], reverse=True)[:10], columns=['Ethnic_Groups', 'Population'])
rivers_list=[]
rivers_df = pd.DataFrame()
for rivers in document.iterfind('river'):
try:
rivers_list.append({'name':rivers.find('name').text, 'length':int(rivers.find('length').text), 'country':rivers.find('located').attrib['country']})
except:
next
rivers_df = pd.DataFrame(rivers_list)
rivers_df.sort('length', ascending=False).head(1)
lake_list=[]
lake_df = pd.DataFrame()
for lakes in document.iterfind('lake'):
try:
lake_list.append({'name':lakes.find('name').text, 'area':int(lakes.find('area').text), 'country':lakes.find('located').attrib['country']})
except:
next
lakes_df = pd.DataFrame(lake_list)
lakes_df.sort('area', ascending=False).head(1)
ap_list=[]
ap_df = pd.DataFrame()
for ap in document.iterfind('airport'):
try:
ap_list.append({'name':ap.find('name').text, 'elevation':int(ap.find('elevation').text), 'country':ap.attrib['country']})
except:
next
ap_df = pd.DataFrame(ap_list)
ap_df.sort('elevation', ascending=False).head(1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: XML example
Step2: XML exercise
Step3: 10 countries with the lowest infant mortality rates
Step4: 10 cities with the largest population
Step5: 10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
Step6: Longest River
Step7: Largest Lake
Step8: Airport At Highest Elevation
|
15,179 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
numbers = {0: "", 1:"one", 2:"two", 3:"three", 4:"four", 5:"five", 6:"six", 7:"seven", 8:"eight", 9:"nine", 10:"ten", 11:"eleven",
12:"twelve", 13:"thirteen", 14:"fourteen", 15:"fifteen", 16:"sixteen", 17:"seventeen", 18:"eighteen", 19:"nineteen",
20:"twenty", 30:"thirty", 40:"forty", 50:"fifty", 60:"sixty", 70:"seventy", 80:"eighty", 90:"ninety", 100:"hundred",
1000:"onethousand"}
#I did this part with two functions to make my method more straightforward
# The first function takes a number and breaks it into 1000s, 100s, 10s and 1s.
# It also adjusts for perculiar numbers such as 'eleven and 'thirteen'. The second function
# takes the number list from the first function and turns it into words.
def number_in_parts(n):
#this breaks up a number into parts
#an entry of 526 would return a list of [500, 20, 6]
broken_number = []
length = len(str(n))
for i in range(length):
f = int(str(n)[i]) * 10**(length-(i+1))
broken_number.append(f)
for i in range(len(broken_number)):
if broken_number[i] == 10:
broken_number[i]= broken_number[i] + broken_number[i+1]
broken_number[i+1] = 0
return(broken_number)
def number_to_words(entry):
"""Given a number n between 1-1000 inclusive return a list of words for the number."""
letters = []
for number in entry:
# if 100-900, adding "1-9" and "hundred"
if number >= 100 and number < 1000:
n = number/100
letters.append(numbers[n])
letters.append(numbers[100])
else:
letters.append(numbers[number])
# adding an "and" for anything above 99 with nonzero terms in the 10s or 1s place
if (letters[-1] != "" or letters[-2] != "") and len(entry) > 2:
letters.append("and")
final = "".join(letters)
return(final)
assert number_in_parts(123) == [100, 20, 3]
assert number_in_parts(112) == [100, 12, 0]
assert number_in_parts(1000) == [1000, 0, 0, 0]
assert number_to_words(number_in_parts(100)) == "onehundred"
assert number_to_words(number_in_parts(112)) == "onehundredtwelveand"
assert number_to_words(number_in_parts(589)) == "fivehundredeightynineand"
assert number_to_words(number_in_parts(102)) == "onehundredtwoand"
assert number_to_words(number_in_parts(140)) == "onehundredfortyand"
assert True # use this for grading the number_to_words tests.
def count_letters(n):
"""Count the number of letters used to write out the words for 1-n inclusive."""
x = 0
for i in range(n):
x += len(number_to_words(number_in_parts(i+1)))
return x
assert count_letters(1) == 3
assert count_letters(2) == 6
assert count_letters(3) == 11
assert count_letters(500) - count_letters(499) == len("fivehundred")
assert count_letters(459) - count_letters(458) == len(number_to_words(number_in_parts(459))) == len("fourhundredandfiftynine")
assert count_letters(1000) - count_letters(999) == len("onethousand")
assert count_letters(412) - count_letters(411) == len("fourhundredandtwelve")
assert count_letters(330) - count_letters(328) == len("threehundredandthirtythreehundredandtwentynine")
assert True # use this for grading the count_letters tests.
print(count_letters(1000))
assert True # use this for grading the answer to the original question.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally, use your count_letters function to solve the original question.
|
15,180 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def soliton(x, t, c, a):
"""Return phi(x, t) for a soliton wave with constants c and a."""
phiarg = (np.sqrt(c)/2.)*(x-c*t-a)
phi = .5 * np.cosh(phiarg)**2
return(phi)
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
phi = np.zeros([200,100], dtype = 'float')
for i in range(0,200):
for j in range(0,100):
phi[i,j] = soliton(x[i], t[j], c, a)
# is there a list comprehension that would make this better?
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
def plot_soliton_data(i=0):
"""Plot the soliton data at t[i] versus x."""
plt.plot(x, phi[:,i])
plt.xlim((0,10))
plt.ylim((0,3000))
plt.title("t =" + str(t[i]))
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
interact(plot_soliton_data, i=(0,99))
assert True # leave this for grading the interact with plot_soliton_data cell
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Using interact for animation with data
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step8: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step9: Use interact to animate the plot_soliton_data function versus time.
|
15,181 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
USE_SUBSET = True
def get_subset(x, y):
if not USE_SUBSET:
return x, y
subset_index = 10000
np.random.seed(1)
indexes = np.random.permutation(len(x))[:subset_index]
x, y = x[indexes], y[indexes]
return x, y
from tensorflow.keras.datasets import mnist
from skimage.color import gray2rgb
from skimage.transform import resize
from sklearn.model_selection import train_test_split
(x_source_train, y_source_train), (x_source_test, y_source_test) = mnist.load_data()
def process_mnist(x):
x = np.moveaxis(x, 0, -1)
x = resize(x, (32, 32), anti_aliasing=True, mode='constant')
x = np.moveaxis(x, -1, 0)
return gray2rgb(x).astype("float32")
x_source_train = process_mnist(x_source_train)
x_source_test = process_mnist(x_source_test)
x_source_train, y_source_train = get_subset(x_source_train, y_source_train)
#x_source_test, y_source_test = get_subset(x_source_test, y_source_test)
x_source_train, x_source_val, y_source_train, y_source_val = train_test_split(
x_source_train, y_source_train,
test_size=int(0.1 * len(x_source_train))
)
x_source_train.shape, x_source_val.shape, x_source_test.shape
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 15))
for i, digit in enumerate(np.unique(y_source_train), start=1):
index = np.where(y_source_train == digit)[0][0]
ax = plt.subplot(1, 10, i)
ax.imshow(x_source_train[index])
ax.set_title(digit)
import pickle as pkl
with open("mnistm_data.pkl", "rb") as f:
mnist_m = pkl.load(f)
x_target_train, y_target_train = get_subset(mnist_m["x_train"], mnist_m["y_train"])
x_target_test, y_target_test = mnist_m["x_test"], mnist_m["y_test"]
x_target_train = resize(x_target_train, (x_target_train.shape[0], 32, 32, 3), anti_aliasing=True, mode='edge').astype("float32")
x_target_test = resize(x_target_test, (x_target_test.shape[0], 32, 32, 3), anti_aliasing=True, mode='edge').astype("float32")
x_target_train.shape, x_target_test.shape
plt.figure(figsize=(20, 15))
for i, digit in enumerate(np.unique(y_target_train), start=1):
index = np.where(y_target_train == digit)[0][0]
ax = plt.subplot(1, 10, i)
ax.imshow(x_target_train[index])
ax.set_title(digit)
from tensorflow.keras.layers import MaxPool2D, Conv2D, Dense, Dropout, Flatten, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD
import tensorflow as tf
def get_network(input_shape=x_source_train.shape[1:]):
# TODO
return Model(inputs=inputs, outputs=digits_classifier)
model = get_network()
model.compile(
loss="sparse_categorical_crossentropy",
optimizer=SGD(lr=0.1, momentum=0.9, nesterov=True),
metrics=['accuracy']
)
model.summary()
# %load solutions/da_naive_model.py
model.fit(
x_source_train, y_source_train,
validation_data=(x_source_val, y_source_val),
epochs=10,
batch_size=128
)
print("Loss & Accuracy on MNIST test set:")
model.evaluate(x_source_test, y_source_test, verbose=0)
print("Loss & Accuracy on MNIST-M test set:")
model.evaluate(x_target_test, y_target_test, verbose=0)
@tf.custom_gradient
def grad_reverse(x):
y = tf.identity(x)
def custom_grad(dy):
return None # TODO
return y, custom_grad
class GradReverse(tf.keras.layers.Layer):
def __init__(self):
super().__init__(name="grl")
def call(self, x):
return grad_reverse(x)
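# Note (a sketch of the usual DANN behaviour, to be filled in above): the forward
# pass of the gradient reversal layer is the identity, while custom_grad typically
# returns the negated incoming gradient (e.g. -dy, optionally scaled by a lambda
# factor) so the feature extractor learns to fool the domain classifier.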
# %load solutions/grl.py
def get_adaptable_network(input_shape=x_source_train.shape[1:]):
# TODO
return Model(inputs=inputs, outputs=None)
model = get_adaptable_network()
model.summary()
# %load solutions/da_model.py
batch_size = 128
epochs = 10
d_source_train = np.ones_like(y_source_train)
d_source_val = np.ones_like(y_source_val)
source_train_generator = tf.data.Dataset.from_tensor_slices(
(x_source_train, y_source_train, d_source_train)).batch(batch_size)
d_target_train = np.zeros_like(y_target_train)
target_train_generator = tf.data.Dataset.from_tensor_slices(
(x_target_train, d_target_train)
).batch(batch_size)
from tensorflow.keras.losses import SparseCategoricalCrossentropy, BinaryCrossentropy
from tensorflow.keras.metrics import Mean, Accuracy
import collections  # needed for collections.defaultdict(list) used in train_epoch below
optimizer = SGD(lr=0.01, momentum=0.9, nesterov=True)
cce = SparseCategoricalCrossentropy()
bce = BinaryCrossentropy()
model.compile(
optimizer=optimizer,
loss=[cce, bce],
metrics=["accuracy", "accuracy"]
)
def train_epoch(source_train_generator, target_train_generator):
global lambda_factor, global_step
# Keras provide helpful classes to monitor various metrics:
epoch_source_digits = tf.keras.metrics.Mean()
epoch_source_domains = tf.keras.metrics.Mean()
epoch_target_domains = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Fetch all trainable variables but those used uniquely for the digits classification:
variables_but_classifier = list(filter(lambda x: "digits" not in x.name, model.trainable_variables))
loss_record = collections.defaultdict(list)
for i, data in enumerate(zip(source_train_generator, target_train_generator)):
source_data, target_data = data
# Training digits classifier & domain classifier on source:
x_source, y_source, d_source = source_data
# Remember that you can do forward likewise:
# outputs = model(inputs)
with tf.GradientTape() as tape:
# TODO
gradients = tape.gradient(# TODO, # TODO)
optimizer.apply_gradients(zip(# TODO, # TODO))
# Training domain classifier on target:
x_target, d_target = target_data
with tf.GradientTape() as tape:
# TODO
gradients = tape.gradient(# TODO, # TODO)
optimizer.apply_gradients(zip(# TODO, # TODO))
# Log the various losses and accuracy
epoch_source_digits(digits_loss)
epoch_source_domains(domains_loss)
epoch_accuracy(y_source, digits_prob)
epoch_target_domains(target_loss)
print("Source digits loss={}, Source Accuracy={}, Source domain loss={}, Target domain loss={}".format(
epoch_source_digits.result(), epoch_accuracy.result(),
epoch_source_domains.result(), epoch_target_domains.result()))
for epoch in range(epochs):
print("Epoch: {}".format(epoch), end=" ")
loss_record = train_epoch(source_train_generator, target_train_generator)
print(model.metrics_names)
print("Loss & Accuracy on MNIST test set:")
model.evaluate(x_source_test, [y_source_test, np.ones_like(y_source_test)], verbose=0)
print("Loss & Accuracy on MNIST-M test set:")
model.evaluate(x_target_test, [y_target_test, np.zeros_like(y_target_test)], verbose=0)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading source dataset MNIST
Step2: Loading target dataset MNIST-M
Step3: Naive model
Step4: After training on our source dataset MNIST, we evaluate our model performance on both the source (MNIST) and the target dataset MNIST-M
Step5: Note that the two datasets are too different. The model didn't generalize to the target set.
Step6: Then define the whole model
Step7: We define our generators. Note that we also add the domain labels. We choose arbitrarily to set the source domain to 1, and the target domain to 0.
Step8: We want to train alternately on the source and target datasets. Fill in the following block.
Step9: This new model has more metrics & losses than the previous one. To know what they are, we can display the model's metrics_names.
Step10: Evaluate the performance on both the source and target datasets
|
15,182 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import matplotlib as mpl
from scipy.interpolate import interp1d
from scipy.signal import fftconvolve
from scipy.integrate import cumtrapz
import seaborn as sns
sns.set_palette('colorblind')
co2_re, ch4_re = 1.7517E-15, 1.2767E-13 * 1.65
n2o_re, sf6_re = 3.8477E-13, 2.0097E-11
# AR5 2013 CO2 IRF values
a0, a1, a2, a3 = 0.2173, 0.224, 0.2824, 0.2763
tau1, tau2, tau3 = 394.4, 36.54, 4.304
#CO2 response function
def f0(t):
return a0
def f1(t):
return a1*np.exp(-t/tau1)
def f2(t):
return a2*np.exp(-t/tau2)
def f3(t):
return a3*np.exp(-t/tau3)
def CO2_AR5(t):
return f0(t) + f1(t) + f2(t) + f3(t)
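# Quick sanity check (illustrative): at t=0 the AR5 CO2 IRF equals a0+a1+a2+a3,
# i.e. the full emitted pulse is still airborne immediately after emission.
assert np.isclose(CO2_AR5(0.0), a0 + a1 + a2 + a3)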
#Methane response function
CH4tau = 12.4
def CH4_AR5(t):
return np.exp(-t/CH4tau)
#N2O response function
N2Otau = 121
def N2O_AR5(t):
return np.exp(-t/N2Otau)
def CO2_crf(emission, years, tstep=0.01, kind='linear'):
Transforms an array of CO2 emissions into radiative forcing with user-
defined time-step.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
if min(years) > 0:
years = years - min(years)
end = max(years)
f = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
inter_emissions = f(time)
atmos = np.resize(fftconvolve(CO2_AR5(time), inter_emissions), time.size) * tstep
rf = atmos * co2_re
crf = cumtrapz(rf, dx = tstep, initial = 0)
fil = np.zeros_like(time, dtype=bool)
for i in time:
if i == int(i):
fil[i/tstep] = True
return crf[fil]
def ch42co2(t, alpha=0.51):
As methane decays some fraction is converted to CO2. This function is
from Boucher et al (2). By default it converts 51%. The convolution of this
function with the methane emission profile gives the CO2 emission profile.
t: time
alpha: fraction of methane converted to CO2
ch4tau = 12.4
return 1/ch4tau * alpha * np.exp(-t/ch4tau)
def AR5_GTP(t):
c1, c2, d1, d2 = 0.631, 0.429, 8.4, 409.5
The default response function for radiative forcing from AR5. Source is
Boucher and Reddy (3). ECR is 3.9K, which is on the high side.
Convolve with radiative forcing to get temperature.
return c1/d1*np.exp(-t/d1) + c2/d2*np.exp(-t/d2)
def CH4_cc_tempforrf(emission, years, tstep=0.01, kind='linear', decay=True):
Transforms an array of methane emissions into temperature with user-defined
time-step. Default temperature IRF is from AR5, use 'Alt_low' or 'Alt_high'
for a sensitivity test.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
source: the source of parameters for the temperature IRF. default is AR5,
'Alt', 'Alt_low', and 'Alt_high' are also options.
decay: a boolean variable for if methane decay to CO2 should be included
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
if min(years) > 0:
years = years - min(years)
end = max(years)
f = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
ch4_inter_emissions = f(time)
ch4_atmos = np.resize(fftconvolve(CH4_AR5(time), ch4_inter_emissions),
time.size) * tstep
co2 = np.resize(fftconvolve(ch42co2(time), ch4_inter_emissions),
time.size) * tstep
co2_atmos = np.resize(fftconvolve(CO2_AR5(time), co2),
time.size) * tstep
if decay == True:
rf = ch4_atmos * ch4_re + co2_atmos * co2_re
else:
rf = ch4_atmos * ch4_re
temp = np.resize(fftconvolve(AR5_GTP(time), rf), time.size) * tstep
return temp
def CH4_crf_cc(emission, years, tstep=0.01, kind='linear'):
Transforms an array of methane emissions into radiative forcing with user-defined
time-step.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
# Gamma is 1Gt carbon per K temp increase from Collins et al.(4)
gamma = (44.0/12.0) * 10**12
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
if min(years) > 0:
years = years - min(years)
end = max(years)
fch4 = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
ch4_inter_emissions = fch4(time)
ch4_atmos = np.resize(fftconvolve(CH4_AR5(time), ch4_inter_emissions),
time.size) * tstep
co2 = 44.0/16.0 * np.resize(fftconvolve(ch42co2(time), ch4_inter_emissions),
time.size) * tstep
co2_atmos = np.resize(fftconvolve(CO2_AR5(time), co2),
time.size) * tstep
cc_co2 = CH4_cc_tempforrf(emission, years) * gamma
cc_co2_atmos = np.resize(fftconvolve(CO2_AR5(time), cc_co2),
time.size) * tstep
rf = ch4_atmos * ch4_re + (co2_atmos + cc_co2_atmos) * co2_re
crf = cumtrapz(rf, dx = tstep, initial = 0)
fil = np.zeros_like(time, dtype=bool)
for i in time:
if i == int(i):
fil[i/tstep] = True
return crf[fil]
def CH4_nonfossil_crf(emission, years, tstep=0.01, kind='linear'):
Transforms an array of methane emissions into radiative forcing with user-defined
time-step.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
# Gamma is 1Gt carbon per K temp increase from Collins et al (4)
gamma = (44.0/12.0) * 10**12
if min(years) > 0:
years = years - min(years)
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
end = max(years)
fch4 = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
ch4_inter_emissions = fch4(time)
ch4_atmos = np.resize(fftconvolve(CH4_AR5(time), ch4_inter_emissions),
time.size) * tstep
co2 = 44.0/16.0 * np.resize(fftconvolve(ch42co2(time), ch4_inter_emissions),
time.size) * tstep
co2_atmos = np.resize(fftconvolve(CO2_AR5(time), co2),
time.size) * tstep
cc_co2 = CH4_cc_tempforrf(emission, years) * gamma
cc_co2_atmos = np.resize(fftconvolve(CO2_AR5(time), cc_co2),
time.size) * tstep
rf = ch4_atmos * ch4_re #+ (co2_atmos) * co2_re
crf = cumtrapz(rf, dx = tstep, initial = 0)
fil = np.zeros_like(time, dtype=bool)
for i in time:
if i == int(i):
fil[i/tstep] = True
return crf[fil]
def CH4_crf(emission, years, tstep=0.01, kind='linear'):
Transforms an array of methane emissions into radiative forcing with user-defined
time-step.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
# Gamma is 1Gt carbon per K temp increase from Collins et al.(4)
gamma = (44.0/12.0) * 10**12
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
if min(years) > 0:
years = years - min(years)
end = max(years)
fch4 = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
ch4_inter_emissions = fch4(time)
ch4_atmos = np.resize(fftconvolve(CH4_AR5(time), ch4_inter_emissions),
time.size) * tstep
co2 = 44.0/16.0 * np.resize(fftconvolve(ch42co2(time), ch4_inter_emissions),
time.size) * tstep
co2_atmos = np.resize(fftconvolve(CO2_AR5(time), co2),
time.size) * tstep
cc_co2 = CH4_cc_tempforrf(emission, years) * gamma
cc_co2_atmos = np.resize(fftconvolve(CO2_AR5(time), cc_co2),
time.size) * tstep
rf = ch4_atmos * ch4_re + (co2_atmos) * co2_re
crf = cumtrapz(rf, dx = tstep, initial = 0)
fil = np.zeros_like(time, dtype=bool)
for i in time:
if i == int(i):
fil[i/tstep] = True
return crf[fil]
def N2O_cc_tempforrf(emission, years, tstep=0.01, kind='linear'):
Transforms an array of N2O emissions into temperature with user-defined
time-step. Default temperature IRF is from AR5, use 'Alt_low' or 'Alt_high'
for a sensitivity test.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
source: the source of parameters for the temperature IRF. default is AR5,
'Alt', 'Alt_low', and 'Alt_high' are also options.
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
if min(years) > 0:
years = years - min(years)
end = max(years)
f = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
n2o_inter_emissions = f(time)
n2o_atmos = np.resize(fftconvolve(N2O_AR5(time), n2o_inter_emissions),
time.size) * tstep
rf = n2o_atmos * n2o_re
temp = np.resize(fftconvolve(AR5_GTP(time), rf), time.size) * tstep
fil = np.zeros_like(time, dtype=bool)
for i in time:
if i == int(i):
fil[i/tstep] = True
return temp
def N2O_crf_cc(emission, years, tstep=0.01, kind='linear'):
Transforms an array of N2O emissions into cumulative radiative forcing
with user-defined time-step, accounting for climate-carbon feedbacks.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
# Gamma is 1Gt carbon per K temp increase from Collins et al.(4)
gamma = (44.0/12.0) * 10**12
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
if min(years) > 0:
years = years - min(years)
end = max(years)
fn2o = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
n2o_inter_emissions = fn2o(time)
n2o_atmos = np.resize(fftconvolve(N2O_AR5(time), n2o_inter_emissions),
time.size) * tstep
cc_co2 = N2O_cc_tempforrf(emission, years) * gamma
cc_co2_atmos = np.resize(fftconvolve(CO2_AR5(time), cc_co2),
time.size) * tstep
#From AR5 Ch.8 SM, reduction of 36 molecules CH4 for every 100 N2O emitted.
#This is more accurate when done on a mass basis
delta_ch4 = n2o_inter_emissions * -0.36
rf = n2o_atmos * n2o_re
crf = cumtrapz(rf, dx = tstep, initial = 0)
fil = np.zeros_like(time, dtype=bool)
for i in time:
if i == int(i):
fil[i/tstep] = True
return crf[fil] + CH4_crf_cc(delta_ch4, time)
def N2O_crf(emission, years, tstep=0.01, kind='linear'):
Transforms an array of N2O emissions into cumulative radiative forcing
with user-defined time-step, accounting for climate-carbon feedbacks.
emission: an array of emissions, should be same size as years
years: an array of years at which the emissions take place
tstep: time step to be used in the calculations
kind: the type of interpolation to use; can be linear or cubic
# Gamma is 1Gt carbon per K temp increase from Collins et al.(4)
gamma = (44.0/12.0) * 10**12
# Emission is a series of emission numbers, years should match up with it
# The interpolation is for cases where irregular intervals exist between emission values
if min(years) > 0:
years = years - min(years)
end = max(years)
fn2o = interp1d(years, emission, kind=kind)
time = np.linspace(years[0], end, end/tstep + 1)
n2o_inter_emissions = fn2o(time)
n2o_atmos = np.resize(fftconvolve(N2O_AR5(time), n2o_inter_emissions),
time.size) * tstep
cc_co2 = N2O_cc_tempforrf(emission, years) * gamma
cc_co2_atmos = np.resize(fftconvolve(CO2_AR5(time), cc_co2),
time.size) * tstep
#From AR5 Ch.8 SM, reduction of 36 molecules CH4 for every 100 N2O emitted
delta_ch4 = n2o_inter_emissions * -0.36
rf = n2o_atmos * n2o_re #+ (cc_co2_atmos) * co2_re
crf = cumtrapz(rf, dx = tstep, initial = 0)
fil = np.zeros_like(time, dtype=bool)
for i in time:
if i == int(i):
fil[i/tstep] = True
return crf[fil] + CH4_crf(delta_ch4, time)
end = 200
tstep = 0.01
time = np.linspace(0, end, num=end/tstep+1)
emission_length = {'pulse' : 0.01,
'1' : 1,
'3' : 3,
'19' : 19,
'30' : 30,
'60' : 60,
'80' : 80}
emission, co2_crf, ch4_crf, ch4_cc_crf, n2o_crf, n2o_cc_crf = {}, {}, {}, {}, {}, {}
for life in list(emission_length.keys()):
emission[life] = np.ones_like(time) / tstep
emission[life][emission_length[life]/tstep:] = 0
ch4_cc_crf[life] = CH4_crf_cc(emission[life], time)
ch4_crf[life] = CH4_crf(emission[life], time)
n2o_cc_crf[life] = N2O_crf_cc(emission[life], time)
n2o_crf[life] = N2O_crf(emission[life], time)
co2_crf[life] = CO2_crf(emission[life], time)
ch4_crf_cc_df = pd.DataFrame(ch4_cc_crf)
ch4_crf_cc_df.plot(xlim=(0,100))
n2o_crf_df = pd.DataFrame(n2o_cc_crf)
n2o_crf_df.plot(xlim=(0,100))
writer = pd.ExcelWriter('EPA Methane forcing.xlsx')
ch4_crf_cc_df.to_excel(writer)
writer.save()
n2o = np.ones_like(time)
n2o[35/tstep+1 :] = 0
n2o_crf = N2O_crf_cc(n2o, time)
plt.plot(n2o_crf)
writer = pd.ExcelWriter('EPA N2O forcing 35 years.xlsx')
pd.DataFrame(n2o_crf).to_excel(writer)
writer.save()
n2o = np.ones_like(time)
n2o[30/tstep+1 :] = 0
n2o_crf = N2O_crf_cc(n2o, time)
plt.plot(n2o_crf)
writer = pd.ExcelWriter('EPA N2O forcing 30 years.xlsx')
pd.DataFrame(n2o_crf).to_excel(writer)
writer.save()
r=60. #rotation periods
g = 1/np.sqrt(2*np.pi*(r/4)**2)*np.exp(-((time-r/2)**2)/(2*(r/4)**2))
g[int(r/tstep):] = 0
g = g/sp.integrate.simps(g,dx=tstep) #normalizing uptake to 1
plt.plot(time,g)
plt.xlim(0, r+30)
conv = np.resize(fftconvolve(CO2_AR5(time),g), time.size) * tstep
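# Note: fftconvolve approximates the continuous convolution integral, so the result
# is multiplied by tstep (the sample width) to recover the Riemann-sum scaling used
# throughout this notebook.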
plt.plot(time,-conv)
plt.xlim(0, r+30)
mass_after = CO2_AR5(time) - conv
plt.plot(time, mass_after)
plt.xlabel('Years')
plt.ylabel('Mass')
plt.title('Mass in atmosphere\n60 year growth cycle after combustion', size=15)
F_after = cumtrapz(mass_after, dx=tstep, initial=0)
plt.plot(time, F_after)
plt.xlim(0, r+30)
plt.xlabel('Years')
plt.ylabel('$kg \ yr$')
plt.title('Cumulative mass in atmosphere\n60 year growth cycle after combustion', size=15)
activity = np.ones_like(time)
activity[30/tstep + 1:] = 0
burn_grow = np.resize(fftconvolve(activity, F_after), time.size) * tstep
plt.plot(time, burn_grow)
plt.ylabel('$kg \ yr$', size=12)
plt.xlabel('Year')
plt.title('Cumulative mass in atmosphere\n60 year growth cycle after combustion\n 30 years of activity'
, size=15)
bioCO2 = -conv
combust_time = int(r/tstep)
f = np.zeros_like(time)
f[combust_time:] = np.resize(CO2_AR5(time), time[combust_time:].size)
mass_before = f + bioCO2
plt.plot(time - 60, mass_before)
plt.xlim(-60, 30)
plt.xlabel('Years')
plt.ylabel('Mass')
plt.title('Mass in atmosphere\n60 year growth cycle before combustion', size=15)
F_before = cumtrapz(mass_before, dx=tstep, initial=0)
grow_burn = np.resize(fftconvolve(activity, F_before), time.size) * tstep
plt.plot(time, grow_burn)
plt.xlabel('Years')
plt.ylabel('$kg \ yr$', size=12)
plt.title('Cumulative mass in atmosphere\n60 year growth cycle before combustion\n 30 years of activity'
, size=15)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Radiative efficiencies of each gas
Step2: Impulse response functions (IRF) for CO<sub>2</sub>, CH<sub>4</sub>, and N<sub>2</sub>O
Step4: CO<sub>2</sub> CRF calculation
Step11: CH<sub>4</sub> CRF
Step15: N<sub>2</sub>O CRF
Step16: Calculation of CH<sub>4</sub> cumulative forcing - multiple emission time frames
Step17: Calculation of N<sub>2</sub>O cumulative forcing - 35 years of emissions
Step18: Calculation of N<sub>2</sub>O cumulative forcing - 30 years of emissions
Step19: Calculation CO<sub>2</sub> in atmosphere following biomass combustion and regrowth (service life)
Step20: From Cherubini (2011,2012) the amount of CO<sub>2</sub> in the air is modeled by convoluting the emission function (biomass combustion followed by the regrowth of that biomass) and the IRF. Cheubini 2011 represents the equation as $\int_0^t [C_0\delta(t')-g(t')]y(t-t')dt'$. This represents the atmospheric concentration at time $t-t'$ for emissions (or uptake) at time $t'$.
Step21: The plot below begins with 1kg of CO<sub>2</sub> in the air at t=0. Two different things happen as we move forward in time. The first is that the pulse of CO<sub>2</sub> is removed by the typical IRF decay. This happens on a slow time scale. The second is the uptake of CO<sub>2</sub> by the biomass, which has been modified by convolution with the IRF (see above). The convolution is done to account for outgassing from the ocean and other effects caused by the removal of CO<sub>2</sub> from the atmosphere through more than one route.
Step22: F is the cumulative integration of CO<sub>2</sub> in the atmosphere from combustion and regrowth
Step23: A convolution of activity (one per year for 30 years) with the cumulative CO<sub>2</sub> gives the total cumulative atmospheric mass over time.
Step24: Calculation CO<sub>2</sub> in atmosphere following biomass growth and then combustion (service life)
|
15,183 | <ASSISTANT_TASK:>
Python Code:
import sys
import numpy as np
import tensorflow as tf
from tqdm import tqdm_notebook as tqn
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-poster')
plt.style.use('ggplot')
sys.path.append('../../..')
from batchflow import B, V
from batchflow.opensets import MNIST
from batchflow.models.tf import ResNet
sys.path.append('../../utils')
import utils
dset = MNIST()
ResNet_config = {
'inputs':{'images': {'shape': (28, 28, 1)},
'labels': {'classes': 10,
'transform': 'ohe',
'dtype': 'int32',
'name': 'targets'}},
'input_block/inputs': 'images',
'body/num_blocks': [1, 1, 1, 1],
'body/filters': [64, 128, 256, 512],
'body/block/bottleneck': True,
'body/block/post_activation': tf.nn.relu,
'body/block/layout': 'cna cna cn',
'loss': 'ce',
'optimizer': 'Adam',
}
SE_config = {
**ResNet_config,
'body/block/se_block': True,
}
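# For reference: 'body/block/se_block': True appends a squeeze-and-excitation step
# to each residual block. Roughly (a sketch, not the exact BatchFlow implementation):
#   s = tf.reduce_mean(x, axis=[1, 2])             # "squeeze": global average pool -> (N, C)
#   s = dense(s, units=C // ratio, activation=relu)
#   s = dense(s, units=C, activation=sigmoid)      # "excitation": per-channel weights
#   x = x * tf.reshape(s, [-1, 1, 1, C])           # rescale the feature maps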
res_train_ppl = (dset.train.p
.init_model('dynamic',
ResNet,
'resnet',
config=ResNet_config)
.train_model('resnet',
feed_dict={'images': B('images'),
'labels': B('labels')}))
res_test_ppl = (dset.test.p
.init_variable('resloss', init_on_each_run=list)
.import_model('resnet', res_train_ppl)
.predict_model('resnet',
fetches='loss',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('resloss'),
mode='a'))
se_train_ppl = (dset.train.p
.init_model('dynamic',
ResNet,
'se_block',
config=SE_config)
.train_model('se_block',
feed_dict={'images': B('images'),
'labels': B('labels')}))
se_test_ppl = (dset.test.p
.init_variable('seloss', init_on_each_run=list)
.import_model('se_block', se_train_ppl)
.predict_model('se_block',
fetches='loss',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('seloss'),
mode='a'))
for i in tqn(range(500)):
res_train_ppl.next_batch(300, n_epochs=None, shuffle=2)
res_test_ppl.next_batch(300, n_epochs=None, shuffle=2)
se_train_ppl.next_batch(300, n_epochs=None, shuffle=2)
se_test_ppl.next_batch(300, n_epochs=None, shuffle=2)
ResNet_loss = res_test_ppl.get_variable('resloss')
SE_loss = se_test_ppl.get_variable('seloss')
utils.draw(ResNet_loss, 'ResNet', SE_loss, 'Squeeze and excitation')
utils.draw(ResNet_loss, 'ResNet', SE_loss, 'Squeeze and excitation', bound=[300, 500, 0, 0.3])
utils.draw(ResNet_loss, 'ResNet', SE_loss, 'Squeeze and excitation', window=50, bound=[300, 500, 0, 0.3])
def get_maps(graph, ppl, sess):
operations = graph.get_operations()
head_operations = [oper for oper in operations if 'head' in oper.name]
oper_name = head_operations[1].name + ':0'
next_batch = ppl.next_batch()
maps = sess.run(oper_name,
feed_dict={
'ResNet/inputs/images:0': next_batch.images,
'ResNet/inputs/labels:0': next_batch.labels,
'ResNet/globals/is_training:0': False
})
return maps, next_batch.labels
res_sess = res_test_ppl.get_model_by_name("resnet").session
res_graph = res_sess.graph
se_sess = se_test_ppl.get_model_by_name('se_block').session
se_graph = se_sess.graph
res_maps, res_answers = get_maps(res_graph, res_test_ppl, res_sess)
se_maps, se_answers = get_maps(se_graph, se_test_ppl, se_sess)
def draw_avgpooling(maps, answers, model=True):
import seaborn as sns
from pandas import ewma
col = sns.color_palette("Set2", 8) + sns.color_palette(["#9b59b6", "#3498db"])
indices = np.array([np.where(answers == i)[0] for i in range(10)])
filters = np.array([np.mean(maps[indices[i]], axis=0).reshape(-1) for i in range(10)])
for i in range(10):
plt.plot(ewma(filters[i], span=350, adjust=False), color=col[i], label=str(i))
plt.title("Distribution of average pooling in "+("SE ResNet" if model else 'simple ResNet'))
plt.legend(fontsize=16, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('Activation value', fontsize=18)
    plt.xlabel('Feature map index', fontsize=18)
plt.axis([0, 2060, 0., 1.])
plt.show()
draw_avgpooling(se_maps, se_answers)
draw_avgpooling(res_maps, res_answers, False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's create a dataset with MNIST data
Step2: We will use the standard ResNet from the BatchFlow models.
Step3: Now create pipelines with the given configurations for a simple ResNet model
Step4: And now the model with SE blocks
Step5: After that, train our models
Step6: It’s time to show the entire learning process
Step7: On this plot, it is very difficult to see the difference between them. Let’s look at the chart closer to see the last 200 iterations.
Step8: Because of the large variance, it is again impossible to tell which model is better. We can try to smooth out and see how the error will behave.
Step9: It's clearer now that the squeeze-and-excitation block on average gives better quality than the plain ResNet, while SE ResNet has approximately the same number of parameters.
Step10: Loading our maps and answers
Step11: Draw a plot of the distribution of feature-map activations after GAP for individual classes. Each line is the distribution of one class.
|
15,184 | <ASSISTANT_TASK:>
Python Code:
import csv
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Create empty graph
G = nx.Graph()
# Add nodes
G.add_node(1)
G.add_nodes_from([2, 3])
G.add_node(4)
G.nodes()
# add edges
G.add_edge(1, 2)
# get graph info
print(nx.info(G))
nx.draw(G, with_labels=True)
# add at creation
# nodes
G.add_node(5, favorite_color='blue')
G.add_nodes_from([(6, {'favorite_color' : 'red'}),
(7, {'favorite_color' :'purple'})])
# edges
G.add_edge(5, 6, {'relationship' : 'best friends'})
# accessing node attributes
print("Node 5 attributes:", G.node[5])
# accessing edge attributes
print("Edge 5-6 attributes:", G.edge[5][6])
favorite_foods = {
1 : 'pizza',
2 : 'mac and cheese',
3 : 'balogna sandwich',
4 : 'pizza',
5 : 'chocolate',
6 : 'pizza',
7 : 'bananas'
}
nx.set_node_attributes(G, 'favorite_food', favorite_foods)
print("Node 4's favorite food is %s" % G.node[4]['favorite_food'])
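# To read an attribute back for every node at once, networkx provides
# nx.get_node_attributes(G, 'favorite_food'), which returns a dict keyed by node.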
# what does it look like?
!head ../data/ga_edgelist.csv
edges = []
with open('../data/ga_edgelist.csv', 'r') as f:
filereader = csv.reader(f, delimiter=",", quotechar='"')
next(filereader) # skips header row
for row in filereader:
edges.append(row)
edges[0:5]
GA = nx.from_edgelist(edges)
print(nx.info(GA))
ga_edges = pd.read_csv('../data/ga_edgelist.csv')
ga_edges.head()
GA = nx.from_pandas_dataframe(ga_edges, source="from", target="to")
# validate info
print(nx.info(GA))
nx.draw(GA, with_labels=True)
ga_adj = pd.read_csv('../data/ga_adj.csv', index_col=0)
ga_adj.ix[0:5, 0:5]
GAAdj = nx.from_numpy_matrix(ga_adj.values)
# Numpy matrices don't have labels :(
print(GAAdj.nodes())
label_mapping = dict(zip(GAAdj.nodes(), ga_adj.columns))
GAAdj = nx.relabel_nodes(GAAdj, label_mapping)
nx.draw_spring(GAAdj, with_labels=True)
# Easiest, least robust way:
print("Edge List Graph\n", nx.info(GA))
print("\nAdj. Matrix Graph\n", nx.info(GAAdj))
# Fancy math way that checks additional conditions
print("Isomorphic?", nx.is_isomorphic(GA, GAAdj))
print("'denny' From Edge List Graph:", GA['denny'])
print("'denny' From Adjacency Matrix Graph:", GAAdj['denny'])
original_edgelist = sorted(nx.to_edgelist(GA))
adjacency_edgelist = sorted(nx.to_edgelist(GAAdj))
for i, edge in enumerate(original_edgelist):
adjacency_edge = adjacency_edgelist[i]
if edge[0] != adjacency_edge[0]:
print("Sorted Edge Mismatch at edge %s:" % i, edge, adjacency_edge)
break
nx.write_gexf(GA, '../data/ga_graph.gexf')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ⚠️
Step2: Adding and Inspecting Attributes
Step3: Adding Attributes for each existing node
Step4: Reading in Different Representations of Graphs
Step5: Mediating Data Processing through pandas
Step6: Adjacency Matrices
Step7: Are the two graphs the same?
Step8: Gotchas
Step9: ⚠️ Observation
Step10: ⚠️ Observation
|
15,185 | <ASSISTANT_TASK:>
Python Code:
#!pip freeze | grep tensorflow-hub==0.7.0 || pip install tensorflow-hub==0.7.0
import os
import tensorflow as tf
import tensorflow_hub as hub
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT = {PROJECT}
%env BUCKET = {BUCKET}
%env REGION = {REGION}
%%bash
URL=https://www.gutenberg.org/cache/epub/24564/pg24564.txt
OUTDIR=gs://$BUCKET/custom_embedding
CORPUS=surgery_manual.txt
curl $URL > $CORPUS
gsutil cp $CORPUS $OUTDIR/$CORPUS
!echo gs://$BUCKET/custom_embedding/surgery_manual.txt
!echo gs://$BUCKET/custom_embedding
!echo tensorboard --port 8080 --logdir gs://$BUCKET/custom_embedding/embeddings
MODULE = f"gs://{BUCKET}/custom_embedding/hub-module"
MODULE
med_embed = # TODO: Your code goes here.
outputs = # TODO: Your code goes here.
outputs
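# A possible sketch for the TODOs in this cell (assuming the exported module at
# MODULE is a standard TF-Hub text-embedding module; the sample sentences below
# are purely illustrative):
#   med_embed = hub.KerasLayer(MODULE)
#   outputs = med_embed(tf.constant(["ilium", "the hip bone", "closing the wound"]))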
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace by your GCP project and bucket
Step2: Loading the dataset in GCS
Step3: It has very specialized language such as
Step4: and for gcs-directory-path-for-pipeline-output which we will set to
Step5: Remark
Step6: The projector view will present you with a representation of the word vectors in a 3 dimensional space (the dim is reduced through PCA) that you can interact with. Enter in the search tool a few words like "ilium" and points in the 3D space will light up.
Step7: Now we are ready to create a KerasLayer out of our custom text embedding.
Step8: That layer when called with a list of sentences will create a sentence vector for each sentence by averaging the word vectors of the sentence.
|
15,186 | <ASSISTANT_TASK:>
Python Code:
%%file ./src/helloCython.pyx
import cython
import sys
def message():
print(" Hello World ....\n")
print(" Hello Central Ohio Python User Group ...\n")
print(" The 614 > 650::True")
print(" Another line ")
print(" The Python version is %s" % sys.version)
print(" The Cython version is %s" % cython.__version__)
print(" I hope that you learn something useful . . . .")
def main():
message()
%%file ./src/cyMath.pyx
import cython
def cy_fib(int n):
Print the Fibonacci series up to n.
cdef int a = 0
cdef int b = 1
cdef int c = 0
cdef int index = 0
while b < n:
print ("%d, %d, \n" % (index, b) )
a, b = b, a + b
index += 1
%%file ./src/printString.pyx
import cython
def display(char *bytestring):
Print out a bytestring byte by byte.
cdef char byte
for byte in bytestring:
print(byte)
%%file ./src/bits.pyx
import cython
def cy_reflect(int reg, int bits):
Reverse all the bits in a register.
reg = input register
r = output register
cdef int x
cdef int y
cdef int r
x = 1 << (bits-1)
y = 1
r = 0
while x:
if reg & x:
r |= y
x = x >> 1
y = y << 1
return r
def reflect(self,s, bits=8):
Take a binary number (byte) and reflect the bits.
x = 1<<(bits-1)
y = 1
r = 0
while x:
if s & x:
r |= y
x = x >> 1
y = y << 1
return r
%%file ./src/setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
#=========================================
# Setup the extensions
#=========================================
sources = [ "./src/cyMath.pyx", "./src/helloCython.pyx",
"./src/cy_math.pyx", "./src/bits.pyx",
"./src/printString.pyx"]
#for fileName in sources:
# setup(ext_modules=cythonize(str(fileName)))
# list() forces evaluation here; in Python 3, map() is lazy and would build nothing
list(map(lambda fileName : setup(ext_modules=cythonize(str(fileName))), sources))
!python ./src/setup.py build_ext --inplace
from src import helloCython
helloCython.message()
from src import cyMath
cyMath.cy_fib(100)
from src import bits
from bits import cy_reflect
hexlist = [int(0x01),int(0x02),int(0x04),int(0x08)]
[hex(cy_reflect(item,8)) for item in hexlist]
from src import printString
printString.display('123')
# A little list comprehension here ...
# A comparative method to the Cython printString function
numberList = [1,2,3]
[ord(str(value)) for value in numberList]
%%file ./src/cyFib.pyx
def cyfib(int n):
cdef int a = 0
cdef int b = 1
cdef int index = 0
while b < n:
a, b = b, a+b
index += 1
return b
!makecython ./src/cyFib.pyx
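# makecython comes from the runcython package mentioned below; it compiles the .pyx
# file into an importable extension module in place (roughly equivalent to the
# cythonize/setup.py step used earlier).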
def pyfib(n):
a = 0
b = 1
index = 0
while b < n:
a, b = b, a+b
index += 1
return b
%timeit pyfib(1000)
import cyFib
%timeit cyFib.cyfib(1000)
import dis
dis.dis(pyfib)
import cProfile
cProfile.run('pyfib(1000)')
%%file ./src/cyPoly.pyx
def cypoly(int n, int k):
return map(lambda x:(1.0*x**2 + 0.5*x + 0.25*x), range(k))
!makecython ./src/cyPoly.pyx
def pypoly(n,k):
return map(lambda x:.1*x**2 + .5*x + 0.25*x, range(k))
from src import cyPoly
cyPoly.cypoly(4,50)
pypoly(4,50)
%%file ./src/sineWave.pyx
import cython
from libc.math cimport sin
def sinewave(double x):
Calculate a sinewave for specified number of cycles, Ncycles, at a given frequency.
return sin(x)
!makecython ./src/sineWave.pyx
from src import sineWave
import math
angle90 = math.pi/2
sineWave.sinewave(angle90)
%matplotlib inline
import numpy as np
x = np.linspace(0,2*np.pi,2000)
%timeit plot(x,np.sin(x),'r')
## %timeit plot(x,sineWave.sinewave(x),'r') <== Why is this a problem ??
xlim(0,6.28)
title('Sinewave for Array Data')
grid(True)
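# Why the commented-out line above would fail: sineWave.sinewave() was declared with
# a C `double x` argument, so it only accepts a scalar, not a NumPy array. The next
# cell works around this by typing the argument as an ndarray and calling np.sin
# inside the Cython module.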
%%file ./src/myFunc.pyx
import cython
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
@cython.wraparound(False)
def myfunc(np.ndarray[double, ndim=1] A):
return np.sin(A)
!makecython ./src/myFunc.pyx
%matplotlib inline
from src import myFunc
import cython
import numpy as np
x = np.linspace(0,2*np.pi,2000)
y = myFunc.myfunc(x)
%timeit plot(x,y,'r')
xlim(0,6.28)
title('Sinewave for Array Data with Cython')
grid(True)
!python-config --cflags
!python-config --ldflags
!ls -a ./src
%%file ./src/quad.pyx
module:: This is a Cython file that uses decorators for arguments.
import cython
cython.declare(a = double, x = double, y = double)
def exp(a, x):
    function that uses cython.locals
cdef int y
y = a**x
return y
!makecython ./src/quad.pyx
%%file ./src/setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
#=========================================
# Setup the extensions
#=========================================
sources = [ "./src/cyMath.pyx", "./src/helloCython.pyx",
"./src/cy_math.pyx", "./src/bits.pyx",
"./src/printString.pyx", "./src/quad.pyx"]
#for fileName in sources:
# setup(ext_modules=cythonize(str(fileName)))
# list() forces evaluation here; in Python 3, map() is lazy and would build nothing
list(map(lambda fileName : setup(ext_modules=cythonize(str(fileName))), sources))
!python ./src/setup.py build_ext --inplace
from src import quad
quad.exp(2,3)
def quadPy(a,x):
return a*(x**2)
%timeit quadPy(2.0, 5.0)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Cython -- A Transcompiler Language
Step6: Now let's see the time difference between a cyfib and pyfib ...
Step7: Introducing runcython !!
Step8: NOW THAT IS A CONSIDERABLE SPEEDUP ...
Step9: Another Example, with a polynomial this time ...
Step10: Now to compare the two ....
Step12: Now let's do something graphical: plot a trig function. Let's also use a float/double type.
Step13: Now let's look at data that involves arrays, and look at both the Python and NumPy versions as well.
Step16: Summary & Conclusions
|
15,187 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Training Data by Facies')
facies_counts
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
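# Illustrative check: after StandardScaler each feature should have approximately
# zero mean and unit variance, e.g.
# print(scaled_features.mean(axis=0).round(3), scaled_features.std(axis=0).round(3))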
feature_vectors
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
from sklearn import svm
clf = svm.SVC()
clf.fit(X_train,y_train)
predicted_labels = clf.predict(X_test)
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
#model selection takes a few minutes, change this variable
#to true to run the parameter loop
do_model_selection = True
if do_model_selection:
C_range = np.array([.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000])
gamma_range = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10])
fig, axes = plt.subplots(3, 2,
sharex='col', sharey='row',figsize=(10,10))
plot_number = 0
for outer_ind, gamma_value in enumerate(gamma_range):
row = int(plot_number / 2)
column = int(plot_number % 2)
cv_errors = np.zeros(C_range.shape)
train_errors = np.zeros(C_range.shape)
for index, c_value in enumerate(C_range):
clf = svm.SVC(C=c_value, gamma=gamma_value)
clf.fit(X_train,y_train)
train_conf = confusion_matrix(y_train, clf.predict(X_train))
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
cv_errors[index] = accuracy(cv_conf)
train_errors[index] = accuracy(train_conf)
ax = axes[row, column]
ax.set_title('Gamma = %g'%gamma_value)
ax.semilogx(C_range, cv_errors, label='CV error')
ax.semilogx(C_range, train_errors, label='Train error')
plot_number += 1
ax.set_ylim([0.2,1])
ax.legend(bbox_to_anchor=(1.05, 0), loc='lower left', borderaxespad=0.)
fig.text(0.5, 0.03, 'C value', ha='center',
fontsize=14)
fig.text(0.04, 0.5, 'Classification Accuracy', va='center',
rotation='vertical', fontsize=14)
clf = svm.SVC(C=10, gamma=1)
clf.fit(X_train, y_train)
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
blind
y_blind = blind['Facies'].values
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
X_blind = scaler.transform(well_features)
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
X_unknown = scaler.transform(well_features)
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
well_data.to_csv('well_data_with_facies.csv')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
Step2: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Step3: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Step4: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on those described in Alessandro Amato del Monte's excellent tutorial.
Step5: Placing the log plotting code in a function will make it easy to plot the logs from multiple wells, and it can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters.
Step6: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
Step7: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Step8: Conditioning the data set
Step9: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (i.e., Gaussian with zero mean and unit variance).
Step10: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set.
Step11: Training the SVM classifier
Step12: Now we can train the classifier using the training set we created above.
Step13: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
Step14: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
Step15: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 23 were correctly identified as SS, 21 were classified as CSiS and 2 were classified as FSiS.
Step16: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
Step17: Model parameter selection
Step18: The best accuracy on the cross validation error curve was achieved for gamma = 1, and C = 10. We can now create and train an optimized classifier based on these parameters
Step19: Precision and recall are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class.
Step20: To interpret these results, consider facies SS. In our test set, if a sample was labeled SS the probability the sample was correct is 0.8 (precision). If we know a sample has facies SS, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The F1 score combines both to give a single measure of relevancy of the classifier results.
Step21: Considering adjacent facies, the F1 scores for all facies types are above 0.9, except when classifying SiSh or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone.
Step22: The label vector is just the Facies column
Step23: We can form the feature matrix by dropping some of the columns and making a new dataframe
Step24: Now we can transform this with the scaler we made before
Step25: Now it's a simple matter of making a prediction and storing it back in the dataframe
Step26: Let's see how we did with the confusion matrix
Step27: We managed 0.71 using the test data, but it was from the same wells as the training data. This more reasonable test does not perform as well...
Step28: ...but does remarkably well on the adjacent facies predictions.
Step29: Applying the classification model to new data
Step30: The data needs to be scaled using the same constants we used for the training data.
Step31: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the test_data dataframe.
Step32: We can use the well log plot to view the classification results along with the well logs.
Step33: Finally we can write out a csv file with the well data along with the facies classification results.
|
15,188 | <ASSISTANT_TASK:>
Python Code::
import tensorflow as tf
from sklearn.model_selection import train_test_split
TRAINING_FILENAMES, VALIDATION_FILENAMES = train_test_split(
tf.io.gfile.glob(r'data\tfrecords\ld_train*.tfrec'),
test_size=0.3, random_state=101
)
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
15,189 | <ASSISTANT_TASK:>
Python Code:
import math
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import BEAMphsf
import text_loader
import H1Dn
import H1Du
import ListTable
%matplotlib inline
C = 25
phsfname = "PHSF" + "." + str(C)
phsfname = "../" + phsfname
print ("We're reading the {1}mm phase space file = {0}".format(phsfname, C))
events, nof_photons, nof_electrons, nof_positrons = text_loader.load_events(phsfname, -1)
print("Number of loaded events: {0}".format(len(events)))
print("Number of loaded photons: {0}".format(nof_photons))
print("Number of loaded electrons: {0}".format(nof_electrons))
print("Number of loaded positrons: {0}".format(nof_positrons))
print("Yield: {0}".format(nof_photons/40000000000.0))
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 5
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_5 = Y[-2]
E117_5 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_5))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_5))
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 10
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_10 = Y[-2]
E117_10 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_10))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_10))
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 20
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_20 = Y[-2]
E117_20 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_20))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_20))
table = ListTable.ListTable()
table.append(["Nbins", "E=1.17", "E=1.33"])
table.append(["", "MeV", "MeV"])
table.append([5, 1.0, 1.0])
table.append([10, E117_10/E117_5, E133_10/E133_5])
table.append([20, E117_20/E117_5, E133_20/E133_5])
table
Znow = 197.5 # we are at the collimator exit, about 200mm
Zshot = 380.0 # shot isocenter is at 380mm
# radial, X and Y, all units in mm
hr = H1Du.H1Du(120, 0.0, 40.0)
hx = H1Du.H1Du(128, -32.0, 32.0)
hy = H1Du.H1Du(128, -32.0, 32.0)
for e in events:
WT = e[0]
xx, yy, zz = BEAMphsf.move_event(e, Znow, Zshot)
#xx = e[2]
#yy = e[3]
#zz = e[4]
r = math.sqrt(xx*xx + yy*yy)
hr.fill(r, WT)
hx.fill(xx, WT)
hy.fill(yy, WT)
print("Number of events in R histogram: {0}".format(hr.nof_events()))
print("Integral in R histogram: {0}".format(hr.integral()))
print("Underflow bin: {0}".format(hr.underflow()))
print("Overflow bin: {0}\n".format(hr.overflow()))
print("Number of events in X histogram: {0}".format(hx.nof_events()))
print("Integral in X histogram: {0}".format(hx.integral()))
print("Underflow bin: {0}".format(hx.underflow()))
print("Overflow bin: {0}\n".format(hx.overflow()))
print("Number of events in Y histogram: {0}".format(hy.nof_events()))
print("Integral in Y histogram: {0}".format(hy.integral()))
print("Underflow bin: {0}".format(hy.underflow()))
print("Overflow bin: {0}".format(hy.overflow()))
X = []
Y = []
W = []
norm = 1.0/hr.integral()
sum = 0.0
st = hr.step()
for k in range (0, hr.size()+1):
r_lo = hr.lo() + float(k) * st
r_hi = r_lo + st
r = 0.5*(r_lo + r_hi)
ba = math.pi * (r_hi*r_hi - r_lo*r_lo) # bin area
d = hr[k] # data from bin with index k
y = d[0] / ba # first part of bin is collected weights
y = y * norm
X.append(r)
Y.append(y)
W.append(st)
sum += y * ba
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, 0.0, color='b')
plt.xlabel('Radius(mm)')
plt.ylabel('PDF of the photons')
plt.title('Radial distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/hx.integral()
sum = 0.0
st = hx.step()
for k in range (0, hx.size()):
x_lo = hx.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = hx[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='b')
plt.xlabel('X(mm)')
plt.ylabel('PDF of the photons')
plt.title('X distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/hy.integral()
sum = 0.0
st = hy.step()
for k in range (0, hy.size()):
x_lo = hy.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = hy[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='b')
plt.xlabel('Y(mm)')
plt.ylabel('PDF of the photons')
plt.title('Y distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# angular, WZ, WX and WY, all units in radians
h_wz = H1Du.H1Du(100, 1.0 - 0.05, 1.0)
h_wx = H1Du.H1Du(110, -0.055, 0.055)
h_wy = H1Du.H1Du(110, -0.055, 0.055)
for e in events:
WT = e[0]
wx = e[5]
wy = e[6]
wz = e[7]
h_wz.fill(wz, WT)
h_wx.fill(wx, WT)
h_wy.fill(wy, WT)
print("Number of events in WZ histogram: {0}".format(h_wz.nof_events()))
print("Integral in WZ histogram: {0}".format(h_wz.integral()))
print("Underflow bin: {0}".format(h_wz.underflow()))
print("Overflow bin: {0}\n".format(h_wz.overflow()))
print("Number of events in WX histogram: {0}".format(h_wx.nof_events()))
print("Integral in WX histogram: {0}".format(h_wx.integral()))
print("Underflow bin: {0}".format(h_wx.underflow()))
print("Overflow bin: {0}\n".format(h_wx.overflow()))
print("Number of events in WY histogram: {0}".format(h_wy.nof_events()))
print("Integral in WY histogram: {0}".format(h_wy.integral()))
print("Underflow bin: {0}".format(h_wy.underflow()))
print("Overflow bin: {0}".format(h_wy.overflow()))
X = []
Y = []
W = []
norm = 1.0/h_wz.integral()
sum = 0.0
st = h_wz.step()
for k in range (0, h_wz.size()+1):
x_lo = h_wz.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wz[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WZ')
plt.ylabel('PDF of the photons')
plt.title('Angular Z distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/h_wx.integral()
sum = 0.0
st = h_wx.step()
for k in range (0, h_wx.size()):
x_lo = h_wx.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wx[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WX')
plt.ylabel('PDF of the photons')
plt.title('Angular X distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/h_wy.integral()
sum = 0.0
st = h_wy.step()
for k in range (0, h_wy.size()):
x_lo = h_wy.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wy[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WY')
plt.ylabel('PDF of the photons')
plt.title('Angular Y distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, set filename to what we want to examine and read PhSF header
Step2: Checking PhSF header parameters
Step3: Energy Spectrum tests
Step4: Underflow bin is empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Step5: Filling the energy histogram with double the number of bins
Step6: Underflow bin is empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Step7: Filling the energy histogram with quadruple the number of bins
Step8: Underflow bin is empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Step9: Comparing peak values
Step10: The result is as expected. Only a few percent of the values in the 1.33 and 1.17 MeV bins are due to scattered radiation. Most values come from the primary source and are δ-peaks in energy.
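One rough cross-check of that statement, reusing the events list loaded above (e[0] is the weight, e[1] the energy; the 1e-3 MeV window around the two primary lines is an arbitrary choice):
tol = 1e-3
w_all = sum(e[0] for e in events)
w_peaks = sum(e[0] for e in events if abs(e[1] - 1.17) < tol or abs(e[1] - 1.33) < tol)
print("weight fraction outside the delta peaks: {0:.3f}".format(1.0 - w_peaks / w_all))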
Step11: NB
Step12: We find spatial distribution to be consistent with the collimation setup
|
15,190 | <ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
batch_size = 100
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
"""Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
15,191 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from IPython.display import Image
import base64
Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occckh2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxkpes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7o
yXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0YyyYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEppm+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1IwIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9l
VSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==".encode('utf-8')), embed=True)
import os
from six.moves.urllib.request import urlretrieve
SOURCE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/'
WORK_DIRECTORY = "/tmp/mnist-data"
def maybe_download(filename):
"""A helper to download the data files if not present."""
if not os.path.exists(WORK_DIRECTORY):
os.mkdir(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not os.path.exists(filepath):
filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
else:
print('Already downloaded', filename)
return filepath
train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')
import gzip, binascii, struct, numpy
import matplotlib.pyplot as plt
with gzip.open(test_data_filename) as f:
# Print the header fields.
for field in ['magic number', 'image count', 'rows', 'columns']:
# struct.unpack reads the binary data provided by f.read.
# The format string '>i' decodes a big-endian integer, which
# is the encoding of the data.
print(field, struct.unpack('>i', f.read(4))[0])
# Read the first 28x28 set of pixel values.
# Each pixel is one byte, [0, 255], a uint8.
buf = f.read(28 * 28)
image = numpy.frombuffer(buf, dtype=numpy.uint8)
# Print the first few values of image.
print('First 10 pixels:', image[:10])
%matplotlib inline
# We'll show the image and its pixel value histogram side-by-side.
_, (ax1, ax2) = plt.subplots(1, 2)
# To interpret the values as a 28x28 image, we need to reshape
# the numpy array, which is one dimensional.
ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(image, bins=20, range=[0,255]);
# Let's convert the uint8 image to 32 bit floats and rescale
# the values to be centered around 0, between [-0.5, 0.5].
#
# We again plot the image and histogram to check that we
# haven't mangled the data.
scaled = image.astype(numpy.float32)
scaled = (scaled - (255 / 2.0)) / 255
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(scaled, bins=20, range=[-0.5, 0.5]);
with gzip.open(test_labels_filename) as f:
# Print the header fields.
for field in ['magic number', 'label count']:
print(field, struct.unpack('>i', f.read(4))[0])
print('First label:', struct.unpack('B', f.read(1))[0])
IMAGE_SIZE = 28
PIXEL_DEPTH = 255
def extract_data(filename, num_images):
"""Extract the images into a 4D tensor [image index, y, x, channels].
For MNIST data, the number of channels is always 1.
Values are rescaled from [0, 255] down to [-0.5, 0.5].
"""
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and dimensions; we know these values.
bytestream.read(16)
buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)
return data
train_data = extract_data(train_data_filename, 60000)
test_data = extract_data(test_data_filename, 10000)
print('Training data shape', train_data.shape)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);
ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys);
NUM_LABELS = 10
def extract_labels(filename, num_images):
"""Extract the labels into a 1-hot matrix [image index, label index]."""
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and count; we know these values.
bytestream.read(8)
buf = bytestream.read(1 * num_images)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
# Convert to dense 1-hot representation.
return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)
train_labels = extract_labels(train_labels_filename, 60000)
test_labels = extract_labels(test_labels_filename, 10000)
print('Training labels shape', train_labels.shape)
print('First label vector', train_labels[0])
print('Second label vector', train_labels[1])
VALIDATION_SIZE = 5000
validation_data = train_data[:VALIDATION_SIZE, :, :, :]
validation_labels = train_labels[:VALIDATION_SIZE]
train_data = train_data[VALIDATION_SIZE:, :, :, :]
train_labels = train_labels[VALIDATION_SIZE:]
train_size = train_labels.shape[0]
print('Validation shape', validation_data.shape)
print('Train size', train_size)
import tensorflow as tf
# We'll bundle groups of examples during training for efficiency.
# This defines the size of the batch.
BATCH_SIZE = 60
# We have only one channel in our grayscale images.
NUM_CHANNELS = 1
# The random seed that defines initialization.
SEED = 42
# This is where training samples and labels are fed to the graph.
# These placeholder nodes will be fed a batch of training data at each
# training step, which we'll write once we define the graph structure.
train_data_node = tf.placeholder(
tf.float32,
shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
train_labels_node = tf.placeholder(tf.float32,
shape=(BATCH_SIZE, NUM_LABELS))
# For the validation and test data, we'll just hold the entire dataset in
# one constant node.
validation_data_node = tf.constant(validation_data)
test_data_node = tf.constant(test_data)
# The variables below hold all the trainable weights. For each, the
# parameter defines how the variables will be initialized.
conv1_weights = tf.Variable(
tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.
stddev=0.1,
seed=SEED))
conv1_biases = tf.Variable(tf.zeros([32]))
conv2_weights = tf.Variable(
tf.truncated_normal([5, 5, 32, 64],
stddev=0.1,
seed=SEED))
conv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))
fc1_weights = tf.Variable( # fully connected, depth 512.
tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
stddev=0.1,
seed=SEED))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))
fc2_weights = tf.Variable(
tf.truncated_normal([512, NUM_LABELS],
stddev=0.1,
seed=SEED))
fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))
print('Done')
def model(data, train=False):
"""The Model definition."""
# 2D convolution, with 'SAME' padding (i.e. the output feature map has
# the same size as the input). Note that {strides} is a 4D array whose
# shape matches the data layout: [image index, y, x, depth].
conv = tf.nn.conv2d(data,
conv1_weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Bias and rectified linear non-linearity.
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
# Max pooling. The kernel size spec ksize also follows the layout of
# the data. Here we have a pooling window of 2, and a stride of 2.
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
conv = tf.nn.conv2d(pool,
conv2_weights,
strides=[1, 1, 1, 1],
padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Reshape the feature map cuboid into a 2D matrix to feed it to the
# fully connected layers.
pool_shape = pool.get_shape().as_list()
reshape = tf.reshape(
pool,
[pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
# Fully connected layer. Note that the '+' operation automatically
# broadcasts the biases.
hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
# Add a 50% dropout during training only. Dropout also scales
# activations such that no rescaling is needed at evaluation time.
if train:
hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
return tf.matmul(hidden, fc2_weights) + fc2_biases
print('Done')
# Training computation: logits + cross-entropy loss.
logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=train_labels_node, logits=logits))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
# Add the regularization term to the loss.
loss += 5e-4 * regularizers
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
# Decay once per epoch, using an exponential schedule starting at 0.01.
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
# Predictions for the minibatch, validation set and test set.
train_prediction = tf.nn.softmax(logits)
# We'll compute them only once in a while by calling their {eval()} method.
validation_prediction = tf.nn.softmax(model(validation_data_node))
test_prediction = tf.nn.softmax(model(test_data_node))
print('Done')
# Create a new interactive session that we'll use in
# subsequent code cells.
s = tf.InteractiveSession()
# Use our newly created session as the default for
# subsequent operations.
s.as_default()
# Initialize all the variables we defined above.
tf.global_variables_initializer().run()
BATCH_SIZE = 60
# Grab the first BATCH_SIZE examples and labels.
batch_data = train_data[:BATCH_SIZE, :, :, :]
batch_labels = train_labels[:BATCH_SIZE]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
print('Done')
print(predictions[0])
# The highest probability in the first entry.
print('First prediction', numpy.argmax(predictions[0]))
# But, predictions is actually a list of BATCH_SIZE probability vectors.
print(predictions.shape)
# So, we'll take the highest probability for each vector.
print('All predictions', numpy.argmax(predictions, 1))
print('Batch labels', numpy.argmax(batch_labels, 1))
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))
total = predictions.shape[0]
print(float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
def error_rate(predictions, labels):
"""Return the error rate and confusions."""
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))
total = predictions.shape[0]
error = 100.0 - (100 * float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
return error, confusions
print('Done')
# Train over the first 1/4th of our training set.
steps = train_size // BATCH_SIZE
for step in range(steps):
# Compute the offset of the current minibatch in the data.
# Note that we could use better randomization across epochs.
offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
# Print out the loss periodically.
if step % 100 == 0:
error, _ = error_rate(predictions, batch_labels)
print('Step %d of %d' % (step, steps))
print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))
print('Validation error: %.1f%%' % error_rate(
validation_prediction.eval(), validation_labels)[0])
test_error, confusions = error_rate(test_prediction.eval(), test_labels)
print('Test error: %.1f%%' % test_error)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
for i, cas in enumerate(confusions):
for j, count in enumerate(cas):
if count > 0:
xoff = .07 * len(str(count))
plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white')
plt.xticks(numpy.arange(NUM_LABELS))
plt.hist(numpy.argmax(test_labels, 1));
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We're going to be building a model that recognizes these digits as 5, 0, and 4.
Step3: Working with the images
Step4: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
Step5: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between.
Step6: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Step8: Indeed, the first label of the test set is 7.
Step9: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Step11: Looks good. Now we know how to index our full set of training and test images.
Step12: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.
Step13: The 1-hot encoding looks reasonable.
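A tiny standalone illustration of the broadcasting trick used above (the label values here are made up for the example):
import numpy
example_labels = numpy.array([7, 2, 1])
one_hot = (numpy.arange(10) == example_labels[:, None]).astype(numpy.float32)
print(one_hot)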
Step14: Defining the model
Step16: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
Step17: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Step18: Training and visualizing results
Step19: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.
Step20: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.
Step21: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.
Step22: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.
Step23: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.
Step25: Now let's wrap this up into our scoring function.
Step26: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Step27: The error seems to have gone down. Let's evaluate the results using the test set.
Step28: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.
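A small sketch that lists the largest off-diagonal entries of the confusions matrix computed above, which makes statements like this easy to check:
conf_off = confusions.copy()
numpy.fill_diagonal(conf_off, 0)
for flat_idx in numpy.argsort(conf_off, axis=None)[::-1][:5]:
    pred, actual = numpy.unravel_index(flat_idx, conf_off.shape)
    print('predicted %d for actual %d: %d times' % (pred, actual, conf_off[pred, actual]))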
|
15,192 | <ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
class NoeudTri (object):
def __init__(self,s):
self.mot = s
NoeudTri("a")
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
return self.mot + "\n" # \n : line break
print(NoeudTri("a"))
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
return self.mot + "\n" # \n : line break
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
NoeudTri("a")
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
return self.mot + "\n"
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
self.avant = NoeudTri (s) # add a successor
elif s > self.mot:
self.apres = NoeudTri (s) # add a successor
else:
# equal words: do nothing
pass
n = NoeudTri("a")
n.insere("b")
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
s = ""
if hasattr(self, "avant"):
s += self.avant.__str__ ()
s += self.mot + "\n"
if hasattr(self, "apres"):
s += self.apres.__str__()
return s
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
self.avant = NoeudTri (s) # add a successor
elif s > self.mot:
self.apres = NoeudTri (s) # add a successor
else:
# equal words: do nothing
pass
n = NoeudTri("a")
n.insere("b")
print(n)
class SecondeInsertion (AttributeError):
"raised when a word is inserted a second time"
class NoeudTri :
def __init__(self,s):
self.mot = s
# the creation of a new node has been moved into a dedicated method
def nouveau_noeud(self, s) :
return self.__class__(s)
def __str__(self):
s = ""
if hasattr(self, "avant"):
s += self.avant.__str__ ()
s += self.mot + "\n"
if hasattr(self, "apres"):
s += self.apres.__str__()
return s
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
if hasattr(self, "avant"):
self.avant.insere (s) # delegation
else:
self.avant = self.nouveau_noeud(s) # creation
elif s > self.mot:
if hasattr(self, "apres"):
self.apres.insere (s) # delegation
else:
self.apres = self.nouveau_noeud(s) # creation
else:
raise SecondeInsertion(s)
li = ["un", "deux", "unite", "dizaine", "exception", "dire", \
"programme", "abc", "xyz", "opera", "quel"]
racine = None
for mot in li:
if racine is None:
# first case: no word yet --> create the first node
racine = NoeudTri(mot)
else :
# second case: a word already exists, add the next word to the tree
racine.insere(mot)
print(racine)
class NoeudTri :
def __init__(self,s):
self.mot = s
# the creation of a new node has been moved into a dedicated method
def nouveau_noeud(self, s) :
return self.__class__(s)
def __str__(self):
s = ""
if hasattr(self, "avant"):
s += self.avant.__str__ ()
s += self.mot + "\n"
if hasattr(self, "apres"):
s += self.apres.__str__()
return s
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
if hasattr(self, "avant"):
self.avant.insere (s) # delegation
else:
self.avant = self.nouveau_noeud(s) # creation
elif s > self.mot:
if hasattr(self, "apres"):
self.apres.insere (s) # delegation
else:
self.apres = self.nouveau_noeud(s) # creation
else:
raise SecondeInsertion(s)
def dessin(self):
s = ""
if hasattr(self, "avant"):
s += 'n{0} -> n{1} [label="-"];\n'.format(id(self), id(self.avant))
s += self.avant.dessin()
s += 'n{0} [label="{1}"];\n'.format(id(self), self.mot)
if hasattr(self, "apres"):
s += 'n{0} -> n{1} [label="+"];\n'.format(id(self), id(self.apres))
s += self.apres.dessin()
return s
li = ["un", "deux", "unite", "dizaine", "exception", "dire", \
"programme", "abc", "xyz", "opera", "quel"]
racine = None
for mot in li:
if racine is None:
# first case: no word yet --> create the first node
racine = NoeudTri(mot)
else :
# second case: a word already exists, add the next word to the tree
racine.insere(mot)
print(racine.dessin())
from pyensae.graphhelper import draw_diagram
img = draw_diagram(
blockdiag {{
{0}
}}
.format(racine.dessin()))
img
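# The edge and node lines produced by dessin() are also valid DOT syntax, so the
# same string can be rendered with graphviz instead of blockdiag (a sketch; it
# assumes the optional graphviz package is installed)
from graphviz import Source
Source("digraph arbre {\n" + racine.dessin() + "}")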
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q1
Step2: Q2
Step3: Q3
Step4: The insere method is designed to do nothing when the word s passed as an argument is equal to the mot attribute
Step5: Inserting the words given in the exercise statement produces the following output
Step7: Q7
|
15,193 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import arcgis
df = pd.read_csv('./store_locations.csv', index_col='OBJECTID')
df.head()
df['SHAPE'] = df.apply(lambda row: arcgis.geometry.Point({'x': row.X, 'y': row.Y, 'spatialReference': {'wkid': 4326}}), axis=1)
df = df.drop(['X', 'Y'], axis=1)
df.head()
sdf = arcgis.features.SpatialDataFrame(df)
sdf.set_geometry(col='SHAPE') # assign the properly formatted shape field to be recognized by the SpatialDataFrame
sdf.reset_index(inplace=True, drop=True)
sdf.head()
# get a subset to test with, just the first five records
sdf = sdf[:5]
fs_store_locations = sdf.to_featureset()
fs_store_locations
from getpass import getpass
gis_coldbrew = arcgis.gis.GIS(
url='http://portal.coldbrew.esri.com/portal',
username='headless',
password=getpass('Please enter the headless password: ')
)
resp_get_vars = gis_coldbrew._con.get(
path='https://ba.coldbrew.esri.com/arcgis/rest/services/DefaultMap/MapServer/exts/BAServer/GetVariables/execute'
)
resp_get_vars
fs_service_area = arcgis.features.FeatureSet(result_service_area['saPolygons']['features'])
df = pd.DataFrame([field for field in resp_get_vars['results'][0]['value'] if field['name'].endswith('_CY')])
df.head()
enrichment_variables = df.name.unique().tolist()
enrichment_variables
enrichment_categories = df.category.unique().tolist()
enrichment_categories
trade_area_drive_time = 8 # in minutes
study_area_options = '{"areaType":"DriveTimeBuffer","bufferUnits":"esriDriveTimeUnitsMinutes",' + \
'"bufferRadii":' + '[{drive_time}]'.format(drive_time=trade_area_drive_time) + '}"'
study_area_options = '{"areaType":"DriveTimeBuffer","bufferUnits":"esriDriveTimeUnitsMinutes","bufferRadii":[5]}'
url_geoenrich = gis_coldbrew.properties.helperServices.geoenrichment.url + "/Geoenrichment/Enrich"
payload = {
'studyAreas': fs_store_locations.features,
# 'analysisVariables': enrichment_variables,
'dataCollections': '["KeyUSFacts"]',
'studyAreasOptions': study_area_options,
'f': 'json'
}
headers = {
'content-type': "application/x-www-form-urlencoded",
'cache-control': "no-cache"
}
resp_enrich = gis_coldbrew._con.post(url_geoenrich, postdata=payload)
resp_enrich
response_feature_set = resp_enrich['results'][0]['value']['FeatureSet'][0]
response_feature_set
fs_enrich = arcgis.features.FeatureSet(
features=resp_enrich['results'][0]['value']['FeatureSet'][0]['features'],
fields=resp_enrich['results'][0]['value']['FeatureSet'][0]['fields']
)
fs_enrich
[field['name'] for field in response_feature_set['fields']]
df_enrich = fs_enrich.df
df_enrich
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data into a Pandas DataFrame from a CSV file.
Step2: While the coordinates for each store are contained in an X (longitude) and Y (latitude) field, the data is not yet able to be recognized spatially. We need to create a point geometry for each location in a new field so the data will be recognized as spatial. Once this is done, we also can get rid of the explicity X and Y fields, since the location is now stored in the SHAPE field.
Step3: Now, with the location data properly formatted to be recoginzed as point geometry, we can create a SpatialDataFrame with the store locations so the data will now be recognized as spatial data.
Step4: Convert the SpatailDataFrame to a FeatureSet to use as input for subsequent analysis steps.
Step5: Acquire Demographic Analysis Factors Based on Drive Time Trade Areas Around Stores
Step6: Get Variables for Enrichment from Server
Step7: Convert the JSON response to a Pandas DataFrame filtered to just use the fields with names ending in _CY for current year variabes, which are the basic descriptive demographics. Using the DataFrame makes getting summarized values a little easier.
Step8: Using the unique method to get a list of unique variable names, along with the enrichment categories.
Step9: Perform Geoenrichment
|
15,194 | <ASSISTANT_TASK:>
Python Code:
import csv
import numpy as np
import matplotlib.pyplot as plt
import os
import itertools
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
nba_games = []
fname = 'nba-games.csv'
with open(fname,"r") as f:
# game_id,game_date,team_id,team_name,net_rating
reader = csv.reader(f)
next(reader)
nba_games = list(reader)
def team_net_ratings(the_team_name):
"""team name is one of
["76ers", "Bucks", "Bulls", "Cavaliers", "Celtics",
"Clippers", "Grizzlies", "Hawks", "Heat",
"Hornets", "Jazz", "Kings", "Knicks", "Lakers",
"Magic", "Mavericks", "Nets", "Nuggets", "Pacers",
"Pelicans", "Pistons", "Raptors", "Rockets", "Spurs",
"Suns", "Thunder", "Timberwolves", "Trail Blazers", "Warriors", "Wizards"]
return [
float(net_rating)
for game_id,game_date,team_id,team_name,net_rating in nba_games
if team_name == the_team_name]
import matplotlib.mlab as mlab
plt.hist(team_net_ratings('Pistons'), bins=15)
plt.title("Pistons Net Rating for 2015/2016 Games")
plt.xlabel("Net Rating")
plt.ylabel("Num Games")
plt.show()
plt.hist(team_net_ratings("Warriors"), bins=15, color='b', label='Warriors')
plt.hist(team_net_ratings("76ers"), bins=15, color='r', alpha=0.5, label='76ers')
plt.title("Net Rating for 2015/2016 Games")
plt.xlabel("Net Rating")
plt.ylabel("Num Games")
plt.legend()
plt.show()
def box_plot_teams(team_names):
reversed_names = list(reversed(team_names))
data = [team_net_ratings(team_name) for team_name in reversed_names]
plt.figure()
plt.xlabel('Game Net Ratings')
plt.boxplot(data, labels=reversed_names, vert=False)
plt.show()
box_plot_teams(['Cavaliers', 'Raptors', 'Celtics', 'Heat', 'Hawks', 'Hornets', 'Pacers', 'Bulls', 'Pistons'])
def mean_std(team_name):
nrs = team_net_ratings(team_name)
return (team_name, np.mean(nrs), np.std(nrs))
[mean_std(team_name) for team_name in
['Cavaliers', 'Raptors', 'Celtics', 'Heat', 'Hawks', 'Hornets', 'Pacers', 'Bulls', 'Pistons']]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Next, a function that extracts the net rating for a team.
Step3: The Pistons
Step4: As any fan experienced this year, we are a bit bi-modal, sometimes playing great, even beating some of the league's best teams, other times getting blown out (that -40 net rating was most recently against The Wizards).
Step5: Yep, the warriors usually win, and the 76ers usually lose. Still striking to see how many games the warriors win by a safe margin.
Step6: We can see that The Pistons have the largest spread, but have a median slightly better than The Bulls (they are in fact neck and neck) with potentially more upside. The 3rd quartile net rating of close to 10 is what makes us Pistons fans feel like we could have a shot against most teams in The Eastern Conference.
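A quick numeric check of that reading of the box plot (a sketch reusing team_net_ratings and np from the code above):
for name in ['Pistons', 'Bulls']:
    nrs = team_net_ratings(name)
    q1, med, q3 = np.percentile(nrs, [25, 50, 75])
    print("{0}: median {1:.1f}, IQR {2:.1f}".format(name, med, q3 - q1))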
|
15,195 | <ASSISTANT_TASK:>
Python Code:
# imports required by this notebook (the TensorFlow 1.x API is assumed)
import numpy as np
import numpy.random as rnd
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
tf.reset_default_graph() # important for tensorboard when using jupyter notebook
students = tf.Variable(13, name="students")
coffee = tf.Variable(-10, name="coffee")
lees_checking = students*coffee
#one way to do it but pretty verbose
sess = tf.Session()
sess.run(students.initializer)
sess.run(coffee.initializer)
result = sess.run(lees_checking)
print("Lee's checking balance:" + str(result))
sess.close()
#but this is better
with tf.Session() as sess:
students.initializer.run()
coffee.initializer.run()
result = lees_checking.eval()
print(result)
tf.reset_default_graph()
students = tf.Variable(13, name="students")
coffee = tf.Variable(-10, name="coffee")
lees_checking = students*coffee
init = tf.global_variables_initializer() # prepare an init node
with tf.Session() as sess:
init.run() # actually initialize all the variables
result = lees_checking.eval()
print(result)
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]
print(scaled_housing_data_plus_bias.shape)
print(scaled_housing_data_plus_bias.mean())
print(scaled_housing_data_plus_bias.mean(axis=0))
print(scaled_housing_data_plus_bias.mean(axis=1))
tf.reset_default_graph()
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)
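# Closed-form normal equation: theta = (X^T X)^(-1) X^T y, evaluated in a single run
# of the graph instead of by iterative optimization.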
with tf.Session() as sess:
theta_value = theta.eval()
print(theta_value)
n_epochs = 1000
learning_rate = 0.01
# all the data, batch gradient descent
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
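# Hand-derived gradient of the MSE: dMSE/dtheta = (2/m) * X^T (X.theta - y)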
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch%100==0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
# all the data, batch gradient descent
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = tf.gradients(mse, [theta])[0]
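# tf.gradients uses reverse-mode autodiff, so this node yields the same gradient
# as the manually derived formula in the previous cell.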
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
%%time
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
tf.reset_default_graph()
A = tf.placeholder(tf.float32, shape=(None, 3))
B = A + 5
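# A placeholder holds no value of its own; data must be supplied via feed_dict
# each time a dependent node is evaluated.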
with tf.Session() as sess:
B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})
print(B_val_1)
print(B_val_2)
%%time
#Batch parameters
n_epochs = 1000
learning_rate = 0.01
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
batch_index =0
rnd = np.random  # alias for numpy's random module, used for batch sampling here and in fetch_batch below
rnd.seed(42)
indices = rnd.randint(m, size=m)
mb_indices = np.array_split(indices, n_batches)
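# Shuffle all m row indices once, then split them into n_batches index arrays,
# one per mini-batch of the epoch.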
#Construction phase (Our graph again)
tf.reset_default_graph()
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
#Execution phase
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_inds in mb_indices:
X_batch = scaled_housing_data_plus_bias[batch_inds]
y_batch = housing.target.reshape(-1, 1)[batch_inds]
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
%%time
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
def fetch_batch(epoch, batch_index, batch_size):
rnd.seed(epoch * n_batches + batch_index)
indices = rnd.randint(m, size=batch_size)
X_batch = scaled_housing_data_plus_bias[indices]
y_batch = housing.target.reshape(-1, 1)[indices]
return X_batch, y_batch
n_epochs = 1000
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
save_path = saver.save(sess, "/tmp/my_model.ckpt")
sess.run(training_op)
best_theta = theta.eval()
save_path = saver.save(sess, "my_model_final.ckpt")
print("Best theta:")
print(best_theta)
tf.reset_default_graph()
%%time
'''Time stamp log directory'''
#if you don't use a different log directory every time you run the program
# TensorBoard will merge stats from different runs
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
'''Hyper parameters'''
learning_rate = 0.01
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
'''Construction phase'''
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
#FOR TENSORBOARD
# The first line creates a node in the graph that evaluates the MSE value and writes it
# to a TensorBoard log string called a 'summary'. The second line creates a FileWriter
# that we'll use to write summaries to logfiles.
# logdir : path of log directory
# tf.get_default_graph() : graph you want to visualize. optional
mse_summary = tf.summary.scalar('MSE', mse)
# merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
train_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
train_writer.flush()
train_writer.close()
print("Best theta:")
print(best_theta)
9 % 10 == 0
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can do a lil bit better
Step2: Linear regression example
Step3: Using the Normal Equation
Step4: Batch gradient descent with manually computed gradients
Step5: Batch gradient descent with using autodiff
Step6: Gradient Descent with Optimizer
Step7: Feeding data
Step8: Mini-batch Gradient Descent
Step9: another mini batch method
Step10: saving and restoring
|
15,196 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
# Example of a picture
index = 88
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig[1].shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid( np.dot(w.T, X) + b ) # compute activation
cost = -(1/m)*np.sum( Y * np.log(A) + (1-Y) * np.log(1-A) ) # compute cost
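# Cross-entropy cost: J = -(1/m) * sum( y*log(a) + (1-y)*log(1-a) )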
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1/m) * np.dot(X, (A - Y).T)
db = (1/m) * (np.sum(A - Y))
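# Gradient formulas: dJ/dw = (1/m) * X (A - Y)^T and dJ/db = (1/m) * sum(A - Y)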
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw # Use broadcasting
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid( np.dot(w.T, X) + b )
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0,i] = 1 if A[0,i] > 0.5 else 0
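# A[0,i] is the estimated probability of a cat; probabilities above 0.5 map to label 1, otherwise 0.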
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "dog.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 - Overview of the Problem set
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7: <font color='blue'>
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation
|
15,197 | <ASSISTANT_TASK:>
Python Code:
# Imports assumed for this example: numpy, matplotlib, librosa (with librosa.display), and IPython for audio playback.
import numpy
import matplotlib.pyplot as plt
import librosa
import librosa.display
import IPython.display as ipd
T = 3.0 # duration in seconds
sr = 22050 # sampling rate in Hertz
amplitude = numpy.logspace(-3, 0, int(T*sr), endpoint=False, base=10.0) # time-varying amplitude
print(amplitude.min(), amplitude.max()) # amplitude ramps from 0.001 up to 1.0
t = numpy.linspace(0, T, int(T*sr), endpoint=False)
x = amplitude*numpy.sin(2*numpy.pi*440*t)
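# A 440 Hz sine tone whose amplitude grows exponentially from 0.001 to 1.0 over the 3 seconds.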
ipd.Audio(x, rate=sr)
librosa.display.waveplot(x, sr=sr)
frame_length = 1024
hop_length = 512
def rmse(x):
return numpy.sqrt(numpy.mean(x**2))
plt.semilogy([rmse(x[i:i+frame_length])
for i in range(0, len(x), hop_length)])
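# One RMSE value per hop of 512 samples; on a log axis the exponential amplitude ramp shows up as a straight line.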
frames = librosa.util.frame(x, frame_length=frame_length, hop_length=hop_length)
plt.semilogy([rmse(frame) for frame in frames.T])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the signal
Step2: Listen to the signal
Step3: Plot the signal
Step4: Segmentation Using Python List Comprehensions
Step5: Define a helper function
Step6: Using a list comprehension, plot the RMSE for each frame on a log-y axis
Step7: librosa.util.frame
|
15,198 | <ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
iris = datasets.load_iris()
X, y = iris.data[:, 1:3], iris.target
from sklearn import cross_validation
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
import numpy as np
np.random.seed(123)
clf1 = LogisticRegression()
clf2 = RandomForestClassifier()
clf3 = GaussianNB()
print('5-fold cross validation:\n')
for clf, label in zip([clf1, clf2, clf3], ['Logistic Regression', 'Random Forest', 'naive Bayes']):
scores = cross_validation.cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
import numpy as np
import operator
class EnsembleClassifier(BaseEstimator, ClassifierMixin):
"""
Ensemble classifier for scikit-learn estimators.
Parameters
----------
clf : `iterable`
A list of scikit-learn classifier objects.
weights : `list` (default: `None`)
If `None`, the majority rule voting will be applied to the predicted class labels.
If a list of weights (`float` or `int`) is provided, the averaged raw probabilities (via `predict_proba`)
will be used to determine the most confident class label.
"""
def __init__(self, clfs, weights=None):
self.clfs = clfs
self.weights = weights
def fit(self, X, y):
"""
Fit the scikit-learn estimators.
Parameters
----------
X : numpy array, shape = [n_samples, n_features]
Training data
y : list or numpy array, shape = [n_samples]
Class labels
"""
for clf in self.clfs:
clf.fit(X, y)
def predict(self, X):
"""
Parameters
----------
X : numpy array, shape = [n_samples, n_features]
Returns
----------
maj : list or numpy array, shape = [n_samples]
Predicted class labels by majority rule
"""
self.classes_ = np.asarray([clf.predict(X) for clf in self.clfs])
if self.weights:
avg = self.predict_proba(X)
maj = np.apply_along_axis(lambda x: max(enumerate(x), key=operator.itemgetter(1))[0], axis=1, arr=avg)
else:
maj = np.asarray([np.argmax(np.bincount(self.classes_[:,c])) for c in range(self.classes_.shape[1])])
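# Hard voting: np.bincount tallies each classifier's predicted label per sample and argmax picks the majority class.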
return maj
def predict_proba(self, X):
"""
Parameters
----------
X : numpy array, shape = [n_samples, n_features]
Returns
----------
avg : list or numpy array, shape = [n_samples, n_probabilities]
Weighted average probability for each class per sample.
"""
self.probas_ = [clf.predict_proba(X) for clf in self.clfs]
avg = np.average(self.probas_, axis=0, weights=self.weights)
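# Soft voting: per-class probabilities are averaged across classifiers, weighted by self.weights.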
return avg
np.random.seed(123)
eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], weights=[1,1,1])
for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']):
scores = cross_validation.cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
import pandas as pd
np.random.seed(123)
df = pd.DataFrame(columns=('w1', 'w2', 'w3', 'mean', 'std'))
i = 0
for w1 in range(1,4):
for w2 in range(1,4):
for w3 in range(1,4):
if len(set((w1,w2,w3))) == 1: # skip if all weights are equal
continue
eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], weights=[w1,w2,w3])
scores = cross_validation.cross_val_score(
estimator=eclf,
X=X,
y=y,
cv=5,
scoring='accuracy',
n_jobs=1)
df.loc[i] = [w1, w2, w3, scores.mean(), scores.std()]
i += 1
df.sort(columns=['mean', 'std'], ascending=False)
class ColumnSelector(object):
"""
A feature selector for scikit-learn's Pipeline class that returns
specified columns from a numpy array.
"""
def __init__(self, cols):
self.cols = cols
def transform(self, X, y=None):
return X[:, self.cols]
def fit(self, X, y=None):
return self
from sklearn.pipeline import Pipeline
from sklearn.lda import LDA
pipe1 = Pipeline([
('sel', ColumnSelector([1])), # use only the 1st feature
('clf', GaussianNB())])
pipe2 = Pipeline([
('sel', ColumnSelector([0, 1])), # use the 1st and 2nd feature
('dim', LDA(n_components=1)), # Dimensionality reduction via LDA
('clf', LogisticRegression())])
eclf = EnsembleClassifier([pipe1, pipe2])
scores = cross_validation.cross_val_score(eclf, X, y, cv=5, scoring='accuracy')
print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
pipe1 = Pipeline([
('sel', ColumnSelector([1])), # use only the 1st feature
('clf', RandomForestClassifier())])
pipe2 = Pipeline([
('sel', ColumnSelector([0, 1])), # use the 1st and 2nd feature
('dim', LDA(n_components=1)), # Dimensionality reduction via LDA
('clf', LogisticRegression())])
pipe3 = Pipeline([
('eclf', EnsembleClassifier([pipe1, pipe2])),
])
from sklearn.grid_search import GridSearchCV  # legacy import path, consistent with the sklearn.cross_validation API used above
parameters = {
'eclf__clfs__dim__n_components':(1,1),
}
grid_search = GridSearchCV(pipe3, parameters, n_jobs=-1, cv=5, verbose=5, refit=True, scoring=None)
grid_search.fit(X, y)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: As we can see from the cross-validation results above, the performance of the three models is almost equal.
Step6: <br>
Step8: <br>
|
15,199 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Load the TensorBoard notebook extension
%load_ext tensorboard
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
import numpy as np
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above."
data_size = 1000
# Use 80% of the data for training
train_pct = 0.8
train_size = int(data_size * train_pct)
# Create random inputs in the range (-1, 1)
x = np.linspace(-1, 1, data_size)
np.random.shuffle(x)
# Generate the output data
# y = 0.5x + 2 + noise
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))
# Split the data into training and test sets
x_train, y_train = x[:train_size], y[:train_size]
x_test, y_test = x[train_size:], y[train_size:]
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(lr=0.2),
)
print("Training ... With default parameters, this takes less than 10 seconds.")
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback],
)
print("Average test loss: ", np.average(training_history.history['loss']))
%tensorboard --logdir logs/scalars
print(model.predict([60, 25, 2]))
# The ideal output would be:
# [[32.0]
# [14.5]
# [ 3.0]]
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
"""Returns a custom learning rate that decreases as epochs progress."""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
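# Logging the scheduled learning rate as a scalar makes it visible in TensorBoard alongside the loss curves.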
return learning_rate
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
)
%tensorboard --logdir logs/scalars
print(model.predict([60, 25, 2]))
# The ideal output would be:
# [[32.0]
# [14.5]
# [ 3.0]]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorBoard Scalars
Step2: Set up the data used to train the regression
Step3: Train the model and log the loss
Step4: Inspect the loss with TensorBoard
Step5: <img class="tfo-display-only-on-site" src="https
Step7: Not bad!
Step8: View TensorBoard
Step9: <img class="tfo-display-only-on-site" src="https
|