Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, lengths 67 to 124k) | cleaned_prompt (string, lengths 168 to 30.3k)
---|---|---|
14,600 | <ASSISTANT_TASK:>
Python Code:
print("Hello World")
a = 1 + 1
a
2 + 3
a = 2 + 3
a + 1
42 - 15.3
100 * 11
7 / 5
-7/5
7.0 / 5
7//5
7.0//5
-7//5
-7.0//5
7%5
-7%5
-7.0%5
2 ** 3
9 ** 0.5
# int a_number = 2
a_number = 2
a_word = 'dog'
print(a_number)
print(a_word)
type(a_number)
type(a_word)
a_number + 7
(a_number * 6.0) / 5
first_result = 8 / 3.5
first_result
type(first_result)
"Bull " + a_word
a_word * 2
a_number + a_word
print(a_number)
a_number = 5
print(a_number)
new_float = 4.0
print(new_float)
int(4.8)
float(2)
basic_int = 2
print(float(basic_int))
print(type(basic_int))
float_basic_int = float(basic_int)
print(type(float_basic_int))
int = 4
print("What have we done to int?", int)
int(5.0)
del int
int(5.0)
eqn1 = 2 * 3 - 2
print(eqn1)
eqn2 = -2 + 2 * 3
print( eqn2 )
eqn3 = -2 + (2 % 3)
print( eqn3 )
eqn4 = (.3 + 5) // 2
print(eqn4)
eqn5 = 2 ** 4 // 2
print(eqn5)
puppy = True
print(puppy)
type(puppy)
puppies = False
puppy, puppies = True, False
print("Do I have a puppy?", puppy)
print("Do I have puppies?", puppies)
True and True
True and False
puppy and puppies
not puppies
not puppy
puppy and not puppies
puppy or puppies
False or False
4 == 4
4 == 5
4 != 2
4 != 4
4 > 2
4 > 4
4 >= 4
False or False
def average(a, b):
    Given two numbers a and b,
    return the average of the two numbers.
return (a + b) * 0.5
average(10, 20)
average(10, 4)
help(average)
def even_test(n):
if n%2 == 0:
return True
else:
return False
even_test(17)
even_test(4)
def even_test1(n):
if not n%2:
return True
else:
return False
even_test1(17)
even_test1(4)
def distance(a, b):
return abs(a-b)
distance(3, 4)
distance(3, 1)
import math
def circle_area(r):
return math.pi * r**2
circle_area(3)
circle_area(math.pi)
import math
def geometic_mean(a, b):
c = math.sqrt(a * b)
return c
geometic_mean(2, 2)
geometic_mean(2, 8)
geometic_mean(2, 1)
def pyramid_volume(A, h):
    The volume of a pyramid is base area * height * 1/3.
    Make sure the return value is always of type float.
V = A * h / 3.0
return V
pyramid_volume(1, 2)
# A day is made up of the number of seconds below.
# One day = 24 hours * 60 minutes * 60 seconds.
daysec = 60 * 60 * 24
# Now seconds can be converted to days.
def seconds2days(sec):
    Function that converts sec to days.
    Pay attention to type conversion of the value.
return (sec/daysec)
seconds2days(43200)
def box_surface(a, b, c):
    Function that returns the surface area of a box whose sides have lengths a, b, and c.
    Hint: add up the areas of the six faces.
s1, s2, s3 = a * b, b * c, c * a
return 2 * (s1 + s2 + s3)
box_surface(1, 1, 1)
box_surface(2, 2, 3)
def triangle_area(a, b, c):
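    # Heron's formula: with s the semi-perimeter, the area is sqrt(s*(s-a)*(s-b)*(s-c))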
s = (a + b + c) / 2.0
A = (s * (s - a) * (s - b) * (s - c))
return math.sqrt(A)
triangle_area(2, 2, 3)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A variable can be declared and its value checked right away.
Step2: Python can also be used like a calculator.
Step3: Note
Step4: The operator that computes the remainder is %.
Step5: Exponentiation
Step6: Declaring and using variables
Step7: For example, in the C language it would have to be declared as shown below.
Step8: To check the data type of the value assigned to a variable, call the type() function.
Step9: Operations can also be performed using declared variables.
Step10: The result of an operation can be assigned to a variable. That variable stores only the result of the operation.
Step11: The data type of a computed result can also be checked with the type() function.
Step12: For strings, the addition and multiplication operators can be used.
Step13: However, whether an operation is possible depends on the data types of the values assigned to the variables.
Step14: Note
Step15: Basic data types
Step16: Integers and floats can be explicitly converted to one another. To convert a float to an integer, use the int() function; the digits after the decimal point are discarded.
Step17: To convert an integer to a float, use the float() function.
Step18: Note
Step19: A caution about keywords
Step20: That is, the original definition of the int() function is gone. In that case it can be restored to the original function as shown below.
Step21: Operator precedence
Step22: Boolean values (bool)
Step23: Two variables can be declared at the same time as shown below. Note that the number of variables on the left of the equals sign must match the number of values on the right.
Step24: Note
Step25: Operations can also be performed using variables of boolean type.
Step26: Boolean operator precedence
Step27: Comparing numbers
Step29: Exercises
Step30: Note
Step32: To get information about a function, the help() function can be used.
Step33: Exercise
Step34: The approach below also works. (You should be able to explain why on your own.)
Step35: Exercise
Step37: The abs function returns the absolute value of the number passed as its argument.
Step38: Exercise
Step39: Exercise
Step40: To learn more about sqrt, use the help function.
Step42: Exercise
Step43: Note
Step45: Exercise
Step47: In Python 2, it may also be defined as shown below.
Step48: Exercise
|
14,601 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio
import scipy.stats as ss
from functools import partial
import elfi
from elfi.examples import gauss
m = gauss.get_model()
seed = 20170616
n_obs = 50
batch_size = 100
mu, sigma = (5, 1)
y_obs = gauss.gauss(mu, sigma, n_obs=n_obs, batch_size=1,
random_state=np.random.RandomState(seed))
# Hyperparameters
mu0, sigma0 = (7, 100)
# Posterior parameters
n = y_obs.shape[1]
mu1 = (mu0/sigma0**2 + y_obs.sum()/sigma**2)/(1/sigma0**2 + n/sigma**2)
sigma1 = (1/sigma0**2 + n/sigma**2)**(-0.5)
# Model
m = elfi.new_model()
sim_fn = partial(gauss.gauss, sigma=sigma, n_obs=n_obs)
elfi.Prior('norm', mu0, sigma0, model=m, name='mu')
elfi.Simulator(sim_fn, m['mu'], observed=y_obs, name='Gauss')
elfi.Summary(lambda x: x.mean(axis=1), m['Gauss'], name='ss_mean')
elfi.Distance('euclidean', m['ss_mean'], name='d')
res = elfi.Rejection(m['d'], output_names=['ss_mean'], batch_size=batch_size, seed=seed).sample(1000, threshold=1)
original = res.outputs['mu']
original = original[np.isfinite(original)] # omit non-finite values
# Kernel density estimate for visualization
kde = ss.gaussian_kde(original)
plt.figure(figsize=(16,10))
t = np.linspace(3, 7, 100)
plt.plot(t, ss.norm(loc=mu1, scale=sigma1).pdf(t), '--')
plt.plot(t, kde.pdf(t))
plt.legend(['Reference', 'Rejection sampling, $\epsilon$=1'])
plt.xlabel('mu')
plt.ylabel('posterior density')
plt.axvline(x=mu);
adj = elfi.adjust_posterior(sample=res, model=m, summary_names=['ss_mean'])
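# For intuition only -- a rough sketch of what a linear regression adjustment does, not ELFI's
# exact internals: regress the accepted parameter values on the difference between simulated and
# observed summaries, then shift each sample to the value predicted at zero discrepancy.
from sklearn.linear_model import LinearRegression
finite = np.isfinite(res.outputs['mu'])
summary_diff = (res.outputs['ss_mean'][finite] - y_obs.mean()).reshape(-1, 1)
reg = LinearRegression().fit(summary_diff, res.outputs['mu'][finite])
mu_manually_adjusted = res.outputs['mu'][finite] - reg.predict(summary_diff) + reg.intercept_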
kde_adj = ss.gaussian_kde(adj.outputs['mu'])
plt.figure(figsize=(16,10))
t = np.linspace(2, 8, 100)
plt.plot(t, ss.norm(loc=mu1, scale=sigma1).pdf(t), '--')
plt.plot(t, kde.pdf(t))
plt.plot(t, kde_adj(t))
plt.legend(['Reference', 'Rejection sampling, $\epsilon$=1', 'Adjusted posterior'])
plt.xlabel('mu')
plt.ylabel('posterior density')
plt.axvline(x=mu);
from elfi.examples import ma2
seed = 20170511
threshold = 0.2
batch_size = 1000
n_samples = 500
m2 = ma2.get_model(n_obs=100, true_params=[0.6, 0.2], seed_obs=seed)
rej2 = elfi.Rejection(m2, m2['d'], output_names=['S1', 'S2'], batch_size=batch_size, seed=seed)
res2 = rej2.sample(n_samples, threshold=threshold)
adj2 = elfi.adjust_posterior(model=m2, sample=res2, parameter_names=['t1', 't2'],
summary_names=['S1', 'S2'])
plt.figure(figsize=(16,10))
t = np.linspace(0, 1.25, 100)
plt.plot(t, ss.gaussian_kde(res2.outputs['t1'])(t))
plt.plot(t, ss.gaussian_kde(adj2.outputs['t1'])(t))
plt.legend(['Rejection sampling, $\epsilon$=0.2', 'Adjusted posterior'])
plt.xlabel('t1')
plt.ylabel('posterior density')
plt.axvline(x=0.6);
plt.figure(figsize=(16,10))
t = np.linspace(-0.5, 1, 100)
plt.plot(t, ss.gaussian_kde(res2.outputs['t2'])(t))
plt.plot(t, ss.gaussian_kde(adj2.outputs['t2'])(t))
plt.legend(['Rejection sampling, $\epsilon$=0.2', 'Adjusted posterior'])
plt.xlabel('t2')
plt.ylabel('posterior density')
plt.axvline(x=0.2);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use a simple model of a univariate Gaussian with an unknown mean to illustrate posterior adjustment. The observed data is 50 data points sampled from a Gaussian distribution with a mean of 5 and a standard deviation of 1. Since we use a Gaussian prior for the mean, the posterior is also Gaussian with known parameters.
Step2: We use a normal distribution with a large standard deviation $\sigma = 100$ and a non-centered mean $\mu = 7$ as an uninformative prior for the unknown mean. A large acceptance radius of $\epsilon = 1$ does not produce very accurate results. The large standard deviation leads to inefficient sampling and is used for illustrative purposes here. In practice, it is common to use more informative priors.
Step3: By regressing on the differences between the sampled and observed summary statistics, we attempt to correct the posterior sample with a linear model. The posterior adjustment is done with the adjust_posterior function. By default this performs a linear adjustment on all the parameters in the Sample object, but the adjusted parameters can also be chosen using the parameter_names argument. Other regression models, instead of a linear one, can be specified with the adjustment keyword argument.
Step4: Similarly to other ABC algorithms in ELFI, post-processing produces a Result object. As can be seen, the linear correction improves the posterior approximation of this model compared to rejection sampling alone.
Step5: Multiple parameters
|
14,602 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('data/human_body_temperature.csv')
df.head()
import numpy as np
import math
import pylab
import scipy.stats as stats
import matplotlib.pyplot as plt
plt.hist(df.temperature)
plt.show()
stats.probplot(df.temperature, dist="norm", plot=pylab)
pylab.show()
sample_size = df.temperature.count()
print('sample size is ' + str(sample_size))
mean = np.mean(df.temperature)
se = (np.std(df.temperature))/math.sqrt(sample_size)
z = (98.6 - mean)/se
p_z = (1-stats.norm.cdf(z))*2
print('p value for z test is ' + str(p_z))
dgf = sample_size - 1
p_t = 2*(1-stats.t.cdf(z, dgf))
print('p value for t test is ' + str(p_t))
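# Cross-check (an addition, not in the original notebook): scipy's built-in one-sample t-test
# should give essentially the same answer as the manual calculation above.
t_stat, p_val = stats.ttest_1samp(df.temperature, 98.6)
print('scipy one-sample t test: t = ' + str(t_stat) + ', p = ' + str(p_val))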
ub = mean + 1.96*se
lb = mean - 1.96*se
print('Mean: ' + str(mean))
print('95 % Confidence Interval: [' + str(lb) + ', ' + str(ub) + ']')
male_temp = df[df.gender=='M'].temperature
female_temp = df[df.gender=='F'].temperature
mean_diff = abs(np.mean(male_temp) - np.mean(female_temp))
se = math.sqrt(np.var(male_temp)/male_temp.count() + np.var(female_temp)/female_temp.count() )
z = mean_diff/se
p_z = (1-stats.norm.cdf(z))*2
print('mean for male is ' + str(np.mean(male_temp)))
print('mean for female is ' + str(np.mean(female_temp)))
print('p value for z test is ' + str(p_z))
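# Cross-check (an addition): Welch's two-sample t-test from scipy, which does not assume equal variances.
t_stat2, p_val2 = stats.ttest_ind(male_temp, female_temp, equal_var=False)
print('scipy two-sample t test: t = ' + str(t_stat2) + ', p = ' + str(p_val2))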
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (1) The histogram and normal probability plot shows that the distribution of body temperatures approximately follows a normal distribution
Step2: (2) The sample size is 130, which is large enough (>30) for the assumption of CLT. In addition, 130 people is <10% of the human population, so we can assume that the observations are independent.
Step3: (3) We can use one-sample z test (the sample size is much larger than 30)
Step4: (4) We would consider someone's temperature to be "abnormal" if it doesn't fall within the 95% confidence interval [98.12, 98.37]
Step5: (5) We can use two-sample z test
|
14,603 | <ASSISTANT_TASK:>
Python Code:
Testing pbnt.
Run this before anything else
to get pbnt to work!
import sys
# from importlib import reload
if('pbnt/combined' not in sys.path):
sys.path.append('pbnt/combined')
from exampleinference import inferenceExample
# Should output:
# ('The marginal probability of sprinkler=false:', 0.80102921)
#('The marginal probability of wetgrass=false | cloudy=False, rain=True:', 0.055)
inferenceExample()
from Node import BayesNode
from Graph import BayesNet
def make_power_plant_net():
Create a Bayes Net representation of
the above power plant problem.
T_node = BayesNode(0,2,name='temperature')
G_node = BayesNode(1,2,name='gauge')
A_node = BayesNode(2,2,name='alarm')
F_G_node = BayesNode(3,2,name='faulty gauge')
F_A_node = BayesNode(4,2,name='faulty alarm')
T_node.add_child(G_node)
T_node.add_child(F_G_node)
G_node.add_parent(T_node)
F_G_node.add_parent(T_node)
F_G_node.add_child(G_node)
G_node.add_parent(F_G_node)
G_node.add_child(A_node)
A_node.add_parent(G_node)
F_A_node.add_child(A_node)
A_node.add_parent(F_A_node)
nodes = [T_node, G_node, F_G_node, A_node, F_A_node]
return BayesNet(nodes)
from probability_tests import network_setup_test
power_plant = make_power_plant_net()
network_setup_test(power_plant)
def is_polytree():
Multiple choice question about polytrees.
choice = 'c'
answers = {
'a' : 'Yes, because it can be decomposed into multiple sub-trees.',
'b' : 'Yes, because its underlying undirected graph is a tree.',
'c' : 'No, because its underlying undirected graph is not a tree.',
'd' : 'No, because it cannot be decomposed into multiple sub-trees.'
}
return answers[choice]
from numpy import zeros, float32
import Distribution
from Distribution import DiscreteDistribution, ConditionalDiscreteDistribution
def set_probability(bayes_net):
Set probability distribution for each
node in the power plant system.
A_node = bayes_net.get_node_by_name("alarm")
F_A_node = bayes_net.get_node_by_name("faulty alarm")
G_node = bayes_net.get_node_by_name("gauge")
F_G_node = bayes_net.get_node_by_name("faulty gauge")
T_node = bayes_net.get_node_by_name("temperature")
nodes = [A_node, F_A_node, G_node, F_G_node, T_node]
#for completely independent nodes
T_dist = DiscreteDistribution(T_node)
index = T_dist.generate_index([],[])
T_dist[index] = [0.8,0.2]
T_node.set_dist(T_dist)
F_A_dist = DiscreteDistribution(F_A_node)
index = F_A_dist.generate_index([],[])
F_A_dist[index] = [0.85,0.15]
F_A_node.set_dist(F_A_dist)
#for single parent node
dist = zeros([T_node.size(), F_G_node.size()], dtype=float32)
dist[0,:] = [0.95,0.05]
dist[1,:] = [0.2,0.8]
F_G_dist = ConditionalDiscreteDistribution(nodes = [T_node, F_G_node], table=dist)
F_G_node.set_dist(F_G_dist)
#for double parent node
dist = zeros([F_G_node.size(), T_node.size(), G_node.size()], dtype=float32)
dist[0,0,:] = [0.95,0.05]
dist[0,1,:] = [0.05,0.95]
dist[1,0,:] = [0.2,0.8]
dist[1,1,:] = [0.8,0.2]
G_dist = ConditionalDiscreteDistribution(nodes=[F_G_node,T_node,G_node], table=dist)
G_node.set_dist(G_dist)
dist = zeros([F_A_node.size(),G_node.size(),A_node.size()], dtype=float32)
dist[0,0,:] = [0.9,0.1]
dist[0,1,:] = [0.1,0.9]
dist[1,0,:] = [0.55,0.45]
dist[1,1,:] = [0.45,0.55]
A_dist = ConditionalDiscreteDistribution(nodes=[F_A_node,G_node,A_node], table=dist)
A_node.set_dist(A_dist)
return bayes_net
set_probability(power_plant)
from probability_tests import probability_setup_test
probability_setup_test(power_plant)
def get_alarm_prob(bayes_net, alarm_rings):
Calculate the marginal
probability of the alarm
ringing (T/F) in the
power plant system.
A_node = bayes_net.get_node_by_name('alarm')
engine = JunctionTreeEngine(bayes_net)
Q = engine.marginal(A_node)[0]
index = Q.generate_index([alarm_rings],range(Q.nDims))
alarm_prob = Q[index]
return alarm_prob
def get_gauge_prob(bayes_net, gauge_hot):
Calculate the marginal
probability of the gauge
showing hot (T/F) in the
power plant system.
G_node = bayes_net.get_node_by_name('gauge')
engine = JunctionTreeEngine(bayes_net)
Q = engine.marginal(G_node)[0]
index = Q.generate_index([gauge_hot],range(Q.nDims))
gauge_prob = Q[index]
return gauge_prob
from Inference import JunctionTreeEngine
def get_temperature_prob(bayes_net,temp_hot):
Calculate theprobability of the
temperature being hot (T/F) in the
power plant system, given that the
alarm sounds and neither the gauge
nor alarm is faulty.
T_node = bayes_net.get_node_by_name('temperature')
A_node = bayes_net.get_node_by_name('alarm')
F_A_node = bayes_net.get_node_by_name('faulty alarm')
F_G_node = bayes_net.get_node_by_name('faulty gauge')
engine = JunctionTreeEngine(bayes_net)
engine.evidence[A_node] = True
engine.evidence[F_A_node] = False
engine.evidence[F_G_node] = False
Q = engine.marginal(T_node)[0]
index = Q.generate_index([temp_hot],range(Q.nDims))
temp_prob = Q[index]
return temp_prob
print get_alarm_prob(power_plant,True)
print get_gauge_prob(power_plant,True)
print get_temperature_prob(power_plant,True)
def get_game_network():
Create a Bayes Net representation
of the game problem.
#create the network
A = BayesNode(0,4,name='Ateam')
B = BayesNode(1,4,name='Bteam')
C = BayesNode(2,4,name='Cteam')
AvB = BayesNode(3,3,name='AvB')
BvC = BayesNode(4,3,name='BvC')
CvA = BayesNode(5,3,name='CvA')
A.add_child(AvB)
A.add_child(CvA)
B.add_child(AvB)
B.add_child(BvC)
C.add_child(BvC)
C.add_child(CvA)
AvB.add_parent(A)
AvB.add_parent(B)
BvC.add_parent(B)
BvC.add_parent(C)
CvA.add_parent(C)
CvA.add_parent(A)
nodes = [A,B,C,AvB,BvC,CvA]
game_net = BayesNet(nodes)
#setting priors for team skills
skillDist = DiscreteDistribution(A)
index = skillDist.generate_index([],[])
skillDist[index] = [0.15,0.45,0.3,0.1]
A.set_dist(skillDist)
skillDist = DiscreteDistribution(B)
index = skillDist.generate_index([],[])
skillDist[index] = [0.15,0.45,0.3,0.1]
B.set_dist(skillDist)
skillDist = DiscreteDistribution(C)
index = skillDist.generate_index([],[])
skillDist[index] = [0.15,0.45,0.3,0.1]
C.set_dist(skillDist)
#setting probability priors for winning
dist = zeros([A.size(),B.size(),AvB.size()], dtype=float32)
dist[0,0,:] = [0.1,0.1,0.8]
dist[1,1,:] = [0.1,0.1,0.8]
dist[2,2,:] = [0.1,0.1,0.8]
dist[3,3,:] = [0.1,0.1,0.8]
dist[0,1,:] = [0.2,0.4,0.2]
dist[1,2,:] = [0.2,0.4,0.2]
dist[2,3,:] = [0.2,0.4,0.2]
dist[0,2,:] = [0.15,0.75,0.1]
dist[1,3,:] = [0.15,0.75,0.1]
dist[0,3,:] = [0.05,0.9,0.05]
dist[3,0,:] = [0.9,0.05,0.05]
dist[1,0,:] = [0.4,0.2,0.2]
dist[2,1,:] = [0.4,0.2,0.2]
dist[3,2,:] = [0.4,0.2,0.2]
dist[2,0,:] = [0.75,0.15,0.1]
dist[3,1,:] = [0.75,0.15,0.1]
AvB_dist = ConditionalDiscreteDistribution(nodes=[A,B,AvB], table=dist)
AvB.set_dist(AvB_dist)
dist = zeros([B.size(),C.size(),BvC.size()], dtype=float32)
dist[0,0,:] = [0.1,0.1,0.8]
dist[1,1,:] = [0.1,0.1,0.8]
dist[2,2,:] = [0.1,0.1,0.8]
dist[3,3,:] = [0.1,0.1,0.8]
dist[0,1,:] = [0.2,0.4,0.2]
dist[1,2,:] = [0.2,0.4,0.2]
dist[2,3,:] = [0.2,0.4,0.2]
dist[0,2,:] = [0.15,0.75,0.1]
dist[1,3,:] = [0.15,0.75,0.1]
dist[0,3,:] = [0.05,0.9,0.05]
dist[3,0,:] = [0.9,0.05,0.05]
dist[1,0,:] = [0.4,0.2,0.2]
dist[2,1,:] = [0.4,0.2,0.2]
dist[3,2,:] = [0.4,0.2,0.2]
dist[2,0,:] = [0.75,0.15,0.1]
dist[3,1,:] = [0.75,0.15,0.1]
BvC_dist = ConditionalDiscreteDistribution(nodes=[B,C,BvC], table=dist)
BvC.set_dist(BvC_dist)
dist = zeros([C.size(),A.size(),CvA.size()], dtype=float32)
dist[0,0,:] = [0.1,0.1,0.8]
dist[1,1,:] = [0.1,0.1,0.8]
dist[2,2,:] = [0.1,0.1,0.8]
dist[3,3,:] = [0.1,0.1,0.8]
dist[0,1,:] = [0.2,0.4,0.2]
dist[1,2,:] = [0.2,0.4,0.2]
dist[2,3,:] = [0.2,0.4,0.2]
dist[0,2,:] = [0.15,0.75,0.1]
dist[1,3,:] = [0.15,0.75,0.1]
dist[0,3,:] = [0.05,0.9,0.05]
dist[3,0,:] = [0.9,0.05,0.05]
dist[1,0,:] = [0.4,0.2,0.2]
dist[2,1,:] = [0.4,0.2,0.2]
dist[3,2,:] = [0.4,0.2,0.2]
dist[2,0,:] = [0.75,0.15,0.1]
dist[3,1,:] = [0.75,0.15,0.1]
CvA_dist = ConditionalDiscreteDistribution(nodes=[C,A,CvA], table=dist)
CvA.set_dist(CvA_dist)
return game_net
game_net = get_game_network()
import random
def MH_sampling(bayes_net, initial_value):
Complete a single iteration of the
Metropolis-Hastings algorithm given a
Bayesian network and an initial state
value. Returns the state sampled from
the probability distribution.
var_id = random.randint(0,2)
new_val = random.randint(0,2)
sample = [i for i in initial_value]
sample[var_id] = new_val
for node in bayes_net.nodes:
if node.id == 0:
A = node
if node.id == 1:
B = node
if node.id == 2:
C = node
if node.id == 3:
AvB = node
if node.id == 4:
BvC = node
if node.id == 5:
CvA = node
Adist = A.dist.table
Bdist = B.dist.table
Cdist = C.dist.table
AvBdist = AvB.dist.table
BvCdist = BvC.dist.table
CvAdist = CvA.dist.table
p_x = Adist[initial_value[0]]*Bdist[initial_value[1]]*Cdist[initial_value[2]]*AvBdist[initial_value[0],initial_value[1],initial_value[3]]*BvCdist[initial_value[1],initial_value[2],initial_value[4]]*CvAdist[initial_value[2],initial_value[0],initial_value[5]]
p_x_dash = Adist[sample[0]]*Bdist[sample[1]]*Cdist[sample[2]]*AvBdist[sample[0],sample[1],sample[3]]*BvCdist[sample[1],sample[2],sample[4]]*CvAdist[sample[2],sample[0],sample[5]]
if p_x!=0:
A = p_x_dash/p_x
else:
A = 1
if A>=1:
return sample
else:
weighted = [(1,int(A*100)), (0, 100-int(A*100))]
population = [val for val, cnt in weighted for i in range(cnt)]
acceptance = random.choice(population)
if acceptance == 0:
sample = initial_value
else:
sample = initial_value
sample[var_id] = new_val
return sample
# arbitrary initial state for the game system
initial_value = [0,0,0,0,2,1]
sample = MH_sampling(game_net, initial_value)
print sample
import random
def Gibbs_sampling(bayes_net, initial_value):
Complete a single iteration of the
Gibbs sampling algorithm given a
Bayesian network and an initial state
value. Returns the state sampled from
the probability distribution.
sample = initial_value
if initial_value:
        # randomly select one variable and sample from its posterior distribution for a new value
        pass  # TODO: not implemented in this notebook; 'pass' added so the function is at least syntactically valid
else:
#default
for i in range(0,5):
upper_bound = 3 if i<3 else 2
sample[i] = random.randint(0,upper_bound)
return sample
def calculate_posterior(games_net):
Calculate the posterior distribution
of the BvC match given that A won against
B and tied C. Return a list of probabilities
corresponding to win, loss and tie likelihood.
posterior = [0,0,0]
engine = JunctionTreeEngine(games_net)
AvB = games_net.get_node_by_name('AvB')
BvC = games_net.get_node_by_name('BvC')
CvA = games_net.get_node_by_name('CvA')
engine.evidence[AvB] = 0
engine.evidence[CvA] = 2
Q = engine.marginal(BvC)[0]
index = Q.generate_index([0],range(Q.nDims))
posterior[0] = Q[index]
index = Q.generate_index([1],range(Q.nDims))
posterior[1] = Q[index]
index = Q.generate_index([2],range(Q.nDims))
posterior[2] = Q[index]
return posterior
iter_counts = [1e1,1e3,1e5,1e6]
def compare_sampling(bayes_net, posterior):
Compare Gibbs and Metropolis-Hastings
sampling by calculating how long it takes
for each method to converge to the
provided posterior.
    # TODO: finish this function
    Gibbs_convergence, MH_convergence = 0, 0  # placeholders so the call below runs before the TODO is done
    return Gibbs_convergence, MH_convergence
def sampling_question():
Question about sampling performance.
# TODO: assign value to choice and factor
choice = 1
options = ['Gibbs','Metropolis-Hastings']
factor = 0
return options[choice], factor
# test your sampling methods here
posterior = calculate_posterior(game_net)
compare_sampling(game_net, posterior)
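# A rough sketch (not the assignment's required solution) of one way compare_sampling could
# measure convergence: run a sampler repeatedly, track the running frequencies of the three BvC
# outcomes, and stop once they are all within delta of the reference posterior. It assumes the
# sampler resamples the non-evidence variables (including BvC) while keeping the evidence
# AvB = 0 and CvA = 2 fixed.
def count_iterations_until_close(sampler, bayes_net, posterior, delta=0.001, check_every=100, max_iter=100000):
    counts = [0.0, 0.0, 0.0]
    state = [0, 0, 0, 0, 0, 2]  # AvB = 0 and CvA = 2 match the evidence used in calculate_posterior
    for n in range(1, max_iter + 1):
        state = sampler(bayes_net, state)
        counts[state[4]] += 1
        if n % check_every == 0:
            freqs = [c / n for c in counts]
            if max(abs(f - p) for f, p in zip(freqs, posterior)) < delta:
                return n
    return max_iter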
def complexity_question():
# TODO: write an expression for complexity
complexity = 'O(2^n)'
return complexity
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assignment 3
Step3: Part 1
Step5: 1b
Step7: 1c
Step11: 1d
Step13: Part 2
Step15: 2b
Step17: 2c
Step21: 2d
Step22: 2e
|
14,604 | <ASSISTANT_TASK:>
Python Code:
a = [2, 3, 5, 7]
# Length of a list
len(a)
# Append a value to the end
a.append(11)
a
# Addition concatenates lists
a + [13, 17, 19]
# sort() method sorts in-place
a = [2, 5, 1, 6, 3, 4]
a.sort()
a
a = [1, 'two', 3.14, [0, 3, 5]]
a
a = [2, 3, 5, 7, 11]
a[0]
a[1]
a[-1]
a[-2]
a[0:3]
a[:3]
a[-3:]
a[::2] # equivalent to a[0:len(a):2]
a[::-1]
a[0] = 100
print(a)
a[1:3] = [55, 56]
print(a)
t = (1, 2, 3)
t = 1, 2, 3
print(t)
len(t)
t[0]
numbers = {'one':1, 'two':2, 'three':3}
# or
numbers = dict(one=1, two=2, three=2)
# Access a value via the key
numbers['two']
# Set a new key:value pair
numbers['ninety'] = 90
print(numbers)
primes = {2, 3, 5, 7}
odds = {1, 3, 5, 7, 9}
a = {1, 1, 2}
a
# union: items appearing in either
primes | odds # with an operator
primes.union(odds) # equivalently with a method
# intersection: items appearing in both
primes & odds # with an operator
primes.intersection(odds) # equivalently with a method
# difference: items in primes but not in odds
primes - odds # with an operator
primes.difference(odds) # equivalently with a method
# symmetric difference: items appearing in only one set
primes ^ odds # with an operator
primes.symmetric_difference(odds) # equivalently with a method
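# Two more set relations: subset and superset tests
primes.issubset(odds)   # False: 2 is not odd
{3, 5, 7} <= primes     # True: subset test with an operator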
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lists have a number of useful properties and methods available to them.
Step2: One of the powerful features of Python's compound objects is that they can contain a mix of objects of any type.
Step3: This flexibility is a consequence of Python's dynamic type system.
Step4: Python uses zero-based indexing, so we can access the first and second element in using the following syntax
Step5: Elements at the end of the list can be accessed with negative numbers, starting from -1
Step6: You can visualize this indexing scheme this way
Step7: we can equivalently write
Step8: Similarly, if we leave out the last index, it defaults to the length of the list.
Step9: Finally, it is possible to specify a third integer that represents the step size; for example, to select every second element of the list, we can write
Step10: A particularly useful version of this is to specify a negative step, which will reverse the array
Step11: Both indexing and slicing can be used to set elements as well as access them.
Step12: A very similar slicing syntax is also used in other data containers, such as NumPy arrays, as we will see in Day 2 sessions.
Step13: They can also be defined without any brackets at all
Step14: Like the lists discussed before, tuples have a length, and individual elements can be extracted using square-bracket indexing
Step15: The main distinguishing feature of tuples is that they are immutable
Step16: Items are accessed and set via the indexing syntax used for lists and tuples, except here the index is not a zero-based order but valid key in the dictionary
Step17: New items can be added to the dictionary using indexing as well
Step18: Keep in mind that dictionaries do not maintain any order for the input parameters.
Step19: If you're familiar with the mathematics of sets, you'll be familiar with operations like the union, intersection, difference, symmetric difference, and others.
|
14,605 | <ASSISTANT_TASK:>
Python Code:
import lib.ngagent as ngagent
ag_cfg = {
'agent_id':'test',
'voc_cfg':{
'voc_type':'sparse_matrix',
'M':5,
'W':10
},
'strat_cfg':{
'strat_type':'naive',
'voc_update':'Minimal'
}
}
testagent=ngagent.Agent(**ag_cfg)
testagent
print(testagent)
import random
M=ag_cfg['voc_cfg']['M']
W=ag_cfg['voc_cfg']['W']
for i in range(0,15):
k=random.randint(0,M-1)
l=random.randint(0,W-1)
testagent._vocabulary.add(k,l,1)
print(testagent)
testagent.visual()
testagent.visual("hom")
testagent.visual()
testagent.visual("syn")
testagent.visual()
testagent.visual("pick_mw",iterr=500)
testagent.visual()
testagent.visual("guess_m",iterr=500)
testagent.visual()
testagent.visual("pick_w",iterr=500)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create an agent. Vocabulary and strategy are created at the same time.
Step2: We can get visuals of agent objects from strategy and vocabulary visuals, with the same syntax.
|
14,606 | <ASSISTANT_TASK:>
Python Code:
import altair
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# Set the plotting style as for a "paper" (smaller labels)
# and using a white background with a grid ("whitegrid")
sns.set(context='paper', style='whitegrid')
%matplotlib inline
import macosko2015
macosko2015.BASE_URL
urlname = macosko2015.BASE_URL + 'differential_clusters_expression.csv'
urlname
expression = pd.read_csv(urlname, index_col=0)
print(expression.shape)
expression.head()
expression_log10 = np.log10(expression + 1)
print(expression_log10.shape)
expression_log10.head()
urlname = macosko2015.BASE_URL + 'differential_clusters_cell_metadata.csv'
cell_metadata = pd.read_csv(urlname, index_col=0)
cell_metadata.head()
def upperizer(genes):
return [x.upper() for x in genes]
figure5a_genes = ['Nrxn2', 'Atp1b1', 'Pax6', 'Slc32a1', 'Slc6a1', 'Elavl3']
figure5a_genes_upper = upperizer(figure5a_genes)
figure5a_genes_upper
# YOUR CODE HERE
figure5a_expression = expression[figure5a_genes_upper]
print(figure5a_expression.shape)
figure5a_expression.head()
figure5a_expression_mean = figure5a_expression.groupby(cell_metadata['cluster_n'], axis=0).mean()
print(figure5a_expression_mean.shape)
figure5a_expression_mean.head()
figure5a_expression_mean_unstack = figure5a_expression_mean.unstack()
print(figure5a_expression_mean_unstack.shape)
figure5a_expression_mean_unstack.head()
figure5a_expression_tidy = figure5a_expression_mean_unstack.reset_index()
print(figure5a_expression_tidy.shape)
figure5a_expression_tidy.head()
renamer = {'level_0': 'gene_symbol', 0: 'expression'}
renamer
figure5a_expression_tidy = figure5a_expression_tidy.rename(columns=renamer)
print(figure5a_expression_tidy.shape)
figure5a_expression_tidy.head()
figure5a_expression_tidy['expression_log'] = np.log10(figure5a_expression_tidy['expression'] + 1)
print(figure5a_expression_tidy.shape)
figure5a_expression_tidy.head()
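# Aside (not part of the original workflow): pandas can build the same tidy table in one call
# with melt(), which "unpivots" the cluster-by-gene matrix of mean expression into long form.
figure5a_melted = pd.melt(figure5a_expression_mean.reset_index(), id_vars='cluster_n',
                          var_name='gene_symbol', value_name='expression')
figure5a_melted.head()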
altair.Chart(figure5a_expression_tidy).mark_circle().encode(
size='expression', x=altair.X('gene_symbol'), y=altair.Y('cluster_n'))
# YOUR CODE HERE
altair.Chart(figure5a_expression_tidy).mark_circle().encode(
size='expression_log', x=altair.X('gene_symbol'), y=altair.Y('cluster_n'))
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
figure5b_genes = ['Chat', "Gad1", 'Gad2', 'Slc17a8', 'Slc6a9', 'Gjd2', 'Gjd2', 'Ebf3']
figure5b_genes_upper = upperizer(figure5b_genes)
figure5b_genes_upper
range(5)
list(range(5))
rows = cell_metadata.cluster_n.isin(range(5))
rows
print('cell_metadata.shape', cell_metadata.shape)
cell_metadata_subset = cell_metadata.loc[rows]
print('cell_metadata_subset.shape', cell_metadata_subset.shape)
cell_metadata_subset.head()
cell_metadata_subset.cluster_n.unique()
sorted(cell_metadata_subset.cluster_n.unique())
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
rows = cell_metadata.cluster_n.isin(range(3, 24))
figure5b_cell_metadata = cell_metadata.loc[rows]
print(figure5b_cell_metadata.shape)
figure5b_cell_metadata.head()
sorted(figure5b_cell_metadata.cluster_n.unique())
figure5b_cell_metadata.index
# YOUR CODE HERE
figure5b_expression = expression.loc[figure5b_cell_metadata.index, figure5b_genes_upper]
print(figure5b_expression.shape)
figure5b_expression.head()
figure5b_cell_metadata.index
figure5b_expression.index
figure5b_tidy = figure5b_expression.unstack().reset_index()
figure5b_tidy = figure5b_tidy.rename(columns={'level_1': 'barcode', 'level_0': 'gene_symbol', 0: 'expression'})
figure5b_tidy['expression_log'] = np.log10(figure5b_tidy['expression'] + 1)
print(figure5b_tidy.shape)
figure5b_tidy.head()
def tidify_and_log(data):
tidy = data.unstack().reset_index()
tidy = tidy.rename(columns={'level_1': 'barcode', 'level_0': 'gene_symbol', 0: 'expression'})
tidy['expression_log'] = np.log10(tidy['expression'] + 1)
return tidy
figure5b_tidy_clusters = figure5b_tidy.join(figure5b_cell_metadata, on='barcode')
print(figure5b_tidy_clusters.shape)
figure5b_tidy_clusters.head()
sns.violinplot?
sns.violinplot('expression', 'cluster_id', data=figure5b_tidy_clusters)
sns.FacetGrid?
# YOUR CODE HERE
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol')
facetgrid
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol')
facetgrid.set_titles('{col_name}')
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol')
facetgrid.map(sns.violinplot, 'expression', 'cluster_id')
facetgrid.set_titles('{col_name}')
# YOUR CODE HERE
# YOUR CODE HERE
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id')
facetgrid.set_titles('{col_name}')
sns.violinplot?
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width', linewidth=1)
facetgrid.set_titles('{col_name}')
# YOUR CODE HERE
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol',
gridspec_kws=dict(hspace=0, wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width', linewidth=1, inner=None, cut=True)
facetgrid.set_titles('{col_name}')
sns.palplot(sns.color_palette('husl', n_colors=50))
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, inner=None, cut=True, palette='husl')
facetgrid.set_titles('{col_name}')
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol',
size=4, aspect=0.25, gridspec_kws=dict(wspace=0),
sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True)
facetgrid.set_titles('{col_name}')
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
cluster_order = figure5b_tidy_clusters.cluster_id.sort_values().unique()
cluster_order
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', size=4, aspect=0.25,
gridspec_kws=dict(wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True, order=cluster_order)
facetgrid.set_titles('{col_name}')
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', size=4, aspect=0.25,
gridspec_kws=dict(wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True, order=cluster_order)
facetgrid.set(xlabel='', xticks=[])
facetgrid.set_titles('{col_name}')
# YOUR CODE HERE
# YOUR CODE HERE
figure5b_tidy_clusters.head()
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', size=4, aspect=0.25,
gridspec_kws=dict(wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression_log', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True, order=cluster_order)
facetgrid.set(xlabel='', xticks=[])
facetgrid.set_titles('{col_name}')
%%file plotting_code.py
import seaborn as sns
def violinplot_grid(tidy, col='gene_symbol', size=4, aspect=0.25, gridspec_kws=dict(wspace=0),
sharex=False, scale='width', linewidth=1, palette='husl', inner=None,
cut=True, order=None):
facetgrid = sns.FacetGrid(tidy, col=col, size=size, aspect=aspect,
gridspec_kws=gridspec_kws, sharex=sharex)
facetgrid.map(sns.violinplot, 'expression_log', 'cluster_id', scale=scale,
linewidth=linewidth, palette=palette, inner=inner, cut=cut, order=order)
facetgrid.set(xlabel='', xticks=[])
facetgrid.set_titles('{col_name}')
cat plotting_code.py
import plotting_code
plotting_code.violinplot_grid(figure5b_tidy_clusters, order=cluster_order)
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
figure5c_genes = ['Gng7', 'Gbx2', 'Tpbg', 'Slitrk6', 'Maf', 'Tac2', 'Loxl2', 'Vip', 'Glra1',
'Igfbp5', 'Pdgfra', 'Slc35d3', 'Car3', 'Fgf1', 'Igf1', 'Col12a1', 'Ptgds',
'Ppp1r17', 'Cck', 'Shisa9', 'Pou3f3']
figure5c_genes_upper = upperizer(figure5c_genes)
figure5c_expression = expression.loc[figure5b_cell_metadata.index, figure5c_genes_upper]
print(figure5c_expression.shape)
figure5c_expression.head()
figure5c_genes_upper
figure5c_tidy = tidify_and_log(figure5c_expression)
print(figure5c_tidy.shape)
figure5c_tidy.head()
figure5c_tidy_cell_metadata = figure5c_tidy.join(cell_metadata, on='barcode')
print(figure5c_tidy_cell_metadata.shape)
figure5c_tidy_cell_metadata.head()
plotting_code.violinplot_grid(figure5c_tidy_cell_metadata, order=cluster_order, aspect=0.2)
# Import a file I wrote with a cleaned-up clustermap
import fig_code
amacrine_cluster_n = sorted(figure5b_cell_metadata.cluster_n.unique())
amacrine_cluster_to_color = dict(zip(amacrine_cluster_n, sns.color_palette('husl', n_colors=len(amacrine_cluster_n))))
amacrine_cell_colors = [amacrine_cluster_to_color[i] for i in figure5b_cell_metadata['cluster_n']]
amacrine_expression = expression_log10.loc[figure5b_cell_metadata.index]
print(amacrine_expression.shape)
fig_code.clustermap(amacrine_expression, row_colors=amacrine_cell_colors)
csv = macosko2015.BASE_URL + 'differential_clusters_lowrank_tidy_metadata_amacrine.csv'
lowrank_tidy = pd.read_csv(csv)
print(lowrank_tidy.shape)
# Reshape the data to be a large 2d matrix
lowrank_tidy_2d = lowrank_tidy.pivot(index='barcode', columns='gene_symbol', values='expression_log')
# set minimum value shown to 0 because there's a bunch of small (e.g. -1.1) negative numbers in the lowrank data
fig_code.clustermap(lowrank_tidy_2d, row_colors=amacrine_cell_colors, vmin=0)
# Subset the genes on only figure 5b
rows = lowrank_tidy.gene_symbol.isin(figure5b_genes_upper)
lowrank_tidy_figure5b = lowrank_tidy.loc[rows]
print(lowrank_tidy_figure5b.shape)
lowrank_tidy_figure5b.head()
plotting_code.violinplot_grid(lowrank_tidy_figure5b, order=cluster_order, aspect=0.25)
rows = lowrank_tidy.gene_symbol.isin(figure5c_genes_upper)
lowrank_tidy_figure5c = lowrank_tidy.loc[rows]
print(lowrank_tidy_figure5c.shape)
lowrank_tidy_figure5c.head()
plotting_code.violinplot_grid(lowrank_tidy_figure5c, order=cluster_order, aspect=0.2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll import the macosko2015 package, which contains a URL pointing to where we've created clean data
Step2: We've created a subset of the data that contains all the cells from batch 1, and only the differentially expressed genes. This is still pretty big!!
Step3: For later, let's also make a logged version of the expression matrix
Step4: Now let's read the cell metadata
Step5: Figure 5a
Step6: Exercise 1
Step7:
Step8: We will use a function called groupby to grab the cells from each cluster, and use .mean()
Step9: Tidy data
Step10: Now let's use the function reset_index, which will take everything that was an index and now make it a column
Step11: But our column names aren't so nice anymore ... let's create a dict to map the old column name to a new, nicer one.
Step12: We can use the dataframe function .rename() to rename our columns
Step13: Let's also add a log expression column, just in case
Step14: Exercise 2
Step15:
Step16: Bonus exercise (if you're feeling ahead)
Step17:
Step18: Now that we have the genes we want, let's get the cells we want!
Step19: Now this returns a "range object" which means Python is being lazy and not telling us what's inside. To force Python into action, we can use list
Step20: So this is getting us all numbers from 0 to 4 (not including 5). We can use this group of numbers to subset our cell metadata! Let's make a variable called rows that contains True/False values telling us whether the cells are in that cluster number
Step21: Now let's use our rows variable to subset cell_metadata
Step22: Let's make sure we only have the clusters we need
Step23: This is kinda out of order so let's sort it with the sorted function
Step24: Exercise 4
Step25:
Step26: Now we want to get only the cells from these clusters. To do that, we would use .index
Step27: Exercise 5
Step28:
Step29: Again, we'll have to make a tidy version of the data to be able to make the violinplots
Step30: If you want, you could also create a function to simplify the tidying and logging
Step31: Now that you have your tidy data, we need to add the cell metadata. We will use .join, and specify to use the "barcode" column of figure5b_tidy
Step32: We can make violinplots using seaborn's sns.violinplot, but that will show us the expression across all genes
Step33: The below command specifies "expression" as the x-axis value (first argument), and "cluster_id" as the y-axis value (second argument). Then we say that we want the program to look at the data in our dataframe called figure5b_tidy_clusters.
Step34: Using sns.FacetGrid to make multiple violinplots
Step35: Exercise 7
Step36:
Step37: I have no idea which gene is where .. so let's add some titles with the convenient function g.set_titles
Step38: Now let's add our violinplots, using map on the facetgrid. Again, we'll use "expression" as the x-value (first argument) and "cluster_id" as the second argument.
Step39: Hmm, all of these genes are on totally different scales .. how can we make it so that each gene is scaled to its own minimum and maximum?
Step40:
Step41: Okay these violinplots are still pretty weird looking. In the paper, they scale the violinplots to all be the same width, and the lines are much thinner.
Step42: Looks like we can set the scale variable to be "width" and let's try setting the linewidth to 1.
Step43: Much better! There's a few more things we need to tweak in sns.violinplot. Let's get rid of the dotted thing on the inside, and only show the data exactly where it's valued - the ends of the violins should be square, not pointy.
Step44:
Step45: Okay one more thing on the violinplots ... they had a different color for every cluster, so let's do the same thing too. Right now they're all blue but let's make them all a different color. Since we have so many categories (21), and ColorBrewer doesn't have setups for when there are more than 10 colors, we need to use a different set of colors. We'll use the "husl" colormap, which uses perception research to make colormaps where no one color is too bright or too dark. Read more about it here
Step46: Let's add palette="husl" to our violinplot command and see what it does
Step47: Now let's work on resizing the plots so they're each narrower. We'll add the following three options to sns.FacetGrid to accomplish this
Step48: Hmm.. now we can see that the clusters aren't in numeric order. Is there an option in sns.violinplot that we can specify the order of the values?
Step49:
Step50: Okay one last thing .. let's turn off the "expression" label at the bottom and the value scales (since right now we're just looking comparatively) with
Step51: Exercise 11
Step52:
Step53: Since we worked so hard to get these lines, let's write them as a function to a file called plotting_code.py. We'll move all the options we fiddled with into the arguments of our violinplot_grid function.
Step54: We can cat (short for concatenate) the file, which means dump the contents out to the output
Step55: Now we see more of the "bimodality" they talk about in the paper
Step56:
Step57: Let's take a step back ... What does this all mean?
Step58: Cluster on Robust PCA'd amacrine cell expression (lowrank)
Step59: Figure 5b using Robust PCA data
Step60: Looks like a lot of the signal from the genes was recovered!
|
14,607 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_excel('http://cdn.sundog-soft.com/Udemy/DataScience/cars.xls')
df.head()
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
X = df[['Mileage', 'Cylinder', 'Doors']]
y = df['Price']
X[['Mileage', 'Cylinder', 'Doors']] = scale.fit_transform(X[['Mileage', 'Cylinder', 'Doors']].as_matrix())
print (X)
est = sm.OLS(y, X).fit()
est.summary()
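# Note: sm.OLS does not add an intercept automatically; for comparison, a fit that includes one
# via add_constant is shown below.
X_with_const = sm.add_constant(X)
est_with_const = sm.OLS(y, X_with_const).fit()
est_with_const.summary()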
y.groupby(df.Doors).mean()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict.
|
14,608 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.spatial.distance
example_array = np.array([[0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 2, 0, 2, 2, 0, 6, 0, 3, 3, 3],
[0, 0, 0, 0, 2, 2, 0, 0, 0, 3, 3, 3],
[0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3],
[1, 1, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3],
[1, 1, 1, 0, 0, 0, 3, 3, 3, 0, 0, 3],
[1, 1, 1, 0, 0, 0, 3, 3, 3, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 3, 3, 3, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 5, 5, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4]])
import itertools
n = example_array.max()+1
indexes = []
for k in range(1, n):
tmp = np.nonzero(example_array == k)
tmp = np.asarray(tmp).T
indexes.append(tmp)
result = np.zeros((n-1, n-1), dtype=float)
for i, j in itertools.combinations(range(n-1), 2):
d2 = scipy.spatial.distance.cdist(indexes[i], indexes[j], metric='minkowski', p=1)
result[i, j] = result[j, i] = d2.min()
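# result[i, j] now holds the minimum Manhattan (cityblock) distance between any cell labeled i+1
# and any cell labeled j+1 in example_array; the diagonal stays at zero.
print(result)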
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,609 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), self.scaler(y)
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer (one possible fill-in; the original exercise cell was left blank)
        x = tf.layers.dense(z, 8*8*256)
        x = tf.reshape(x, (-1, 8, 8, 256))
        x = tf.layers.batch_normalization(x, training=training)
        x = tf.maximum(alpha * x, x)
        x = tf.layers.conv2d_transpose(x, 128, 5, strides=2, padding='same')
        x = tf.layers.batch_normalization(x, training=training)
        x = tf.maximum(alpha * x, x)
        # Output layer, 32x32x3
        logits = tf.layers.conv2d_transpose(x, output_dim, 5, strides=2, padding='same')
out = tf.tanh(logits)
return out
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 32x32x3 (one possible fill-in; the original exercise cell was left blank)
        x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')    # 16x16x64
        x1 = tf.maximum(alpha * x1, x1)
        x2 = tf.layers.conv2d(x1, 128, 5, strides=2, padding='same')  # 8x8x128
        x2 = tf.maximum(alpha * x2, x2)
        flat = tf.reshape(x2, (-1, 8*8*128))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
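# Note on the control_dependencies block above: tf.GraphKeys.UPDATE_OPS collects the ops that
# update the batch-normalization moving averages, so wrapping the optimizers ensures those
# statistics are refreshed every training step and are available when training=False.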
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=0.2)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, 0.5)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
|
14,610 | <ASSISTANT_TASK:>
Python Code:
from deepchem.molnet.load_function import hiv_datasets
from deepchem.models import GraphConvModel
from deepchem.data import NumpyDataset
from sklearn.metrics import average_precision_score
import numpy as np
tasks, all_datasets, transformers = hiv_datasets.load_hiv(featurizer="GraphConv")
train, valid, test = [NumpyDataset.from_DiskDataset(x) for x in all_datasets]
model = GraphConvModel(1, mode="classification")
model.fit(train)
y_true = np.squeeze(valid.y)
y_pred = model.predict(valid)[:,0,1]
print("Average Precision Score:%s" % average_precision_score(y_true, y_pred))
sorted_results = sorted(zip(y_pred, y_true), reverse=True)
hit_rate_100 = sum(x[1] for x in sorted_results[:100]) / 100
print("Hit Rate Top 100: %s" % hit_rate_100)
tasks, all_datasets, transformers = hiv_datasets.load_hiv(featurizer="GraphConv", split=None)
model = GraphConvModel(1, mode="classification", model_dir="/tmp/zinc/screen_model")
model.fit(all_datasets[0])
import os
work_units = os.listdir('/tmp/zinc/screen')
with open('/tmp/zinc/work_queue.sh', 'w') as fout:
fout.write("#!/bin/bash\n")
for work_unit in work_units:
full_path = os.path.join('/tmp/zinc', work_unit)
fout.write("python inference.py %s" % full_path)
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import SVG
from rdkit.Chem.Draw import rdMolDraw2D
best_mols = [Chem.MolFromSmiles(x.strip().split()[0]) for x in open('/tmp/zinc/screen/top_100k.smi').readlines()[:100]]
best_scores = [x.strip().split()[2] for x in open('/tmp/zinc/screen/top_100k.smi').readlines()[:100]]
print(best_scores[0])
best_mols[0]
print(best_scores[0])
best_mols[1]
print(best_scores[0])
best_mols[2]
print(best_scores[0])
best_mols[3]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrain Model Over Full Dataset For The Screen
Step2: 2. Create Work-Units
Step3: 5. Consume work units from "distribution mechanism"
|
14,611 | <ASSISTANT_TASK:>
Python Code:
def f(x):
y = x**4 - 3*x
return y
def integrate_f(a, b, n):
dx = (b - a) / n
dx2 = dx / 2
s = f(a) * dx2
for i in range(1, n):
s += f(a + i * dx) * dx
s += f(b) * dx2
return s
%timeit integrate_f(-100, 100, int(1e5))
%load_ext cython
%%cython
def f2(x):
y = x**4 - 3*x
return y
def integrate_f2(a, b, n):
dx = (b - a) / n
dx2 = dx / 2
s = f2(a) * dx2
for i in range(1, n):
s += f2(a + i * dx) * dx
s += f2(b) * dx2
return s
%timeit integrate_f2(-100, 100, int(1e5))
%%cython
def f3(double x):
y = x**4 - 3*x
return y
def integrate_f3(double a, double b, int n):
dx = (b - a) / n
dx2 = dx / 2
s = f3(a) * dx2
for i in range(1, n):
s += f3(a + i * dx) * dx
s += f3(b) * dx2
return s
%timeit integrate_f3(-100, 100, int(1e5))
%%cython
def f4(double x):
y = x**4 - 3*x
return y
def integrate_f4(double a, double b, int n):
cdef:
double dx = (b - a) / n
double dx2 = dx / 2
double s = f4(a) * dx2
int i = 0
for i in range(1, n):
s += f4(a + i * dx) * dx
s += f4(b) * dx2
return s
%timeit integrate_f4(-100, 100, int(1e5))
%%cython -a
def f4(double x):
y = x**4 - 3*x
return y
def integrate_f4(double a, double b, int n):
cdef:
double dx = (b - a) / n
double dx2 = dx / 2
double s = f4(a) * dx2
int i = 0
for i in range(1, n):
s += f4(a + i * dx) * dx
s += f4(b) * dx2
return s
import numpy as np
def mean3filter(arr):
arr_out = np.empty_like(arr)
for i in range(1, arr.shape[0] - 1):
arr_out[i] = np.sum(arr[i-1 : i+2]) / 3  # slice through i+1 so the 3-point mean really uses three values
arr_out[0] = (arr[0] + arr[1]) / 2
arr_out[-1] = (arr[-1] + arr[-2]) / 2
return arr_out
%timeit mean3filter(np.random.rand(int(1e5)))
%%cython
import cython
import numpy as np
@cython.boundscheck(False)
def mean3filter2(double[::1] arr):
cdef double[::1] arr_out = np.empty_like(arr)
cdef int i
for i in range(1, arr.shape[0]-1):
arr_out[i] = np.sum(arr[i-1 : i+2]) / 3  # slice through i+1 so the 3-point mean really uses three values
arr_out[0] = (arr[0] + arr[1]) / 2
arr_out[-1] = (arr[-1] + arr[-2]) / 2
return np.asarray(arr_out)
%timeit mean3filter2(np.random.rand(int(1e5)))
%%cython -a
import cython
from cython.parallel import prange
import numpy as np
@cython.boundscheck(False)
def mean3filter3(double[::1] arr, double[::1] out):
cdef int i, j, k = arr.shape[0]-1
with nogil:
for i in prange(1, k-1, schedule='static',
chunksize=(k-2) // 2, num_threads=2):
for j in range(i-1, i+2):  # include i+1 so the 3-point mean uses three values
out[i] += arr[j]
out[i] /= 3
out[0] = (arr[0] + arr[1]) / 2
out[-1] = (arr[-1] + arr[-2]) / 2
return np.asarray(out)
rin = np.random.rand(int(1e7))
rout = np.empty_like(rin)
%timeit mean3filter2(rin)  # mean3filter2 takes a single array and allocates its own output
%timeit mean3filter3(rin, rout)
%%cython -a
# distutils: language=c++
import cython
from libcpp.vector cimport vector
@cython.boundscheck(False)
def build_list_with_vector(double[::1] in_arr):
cdef vector[double] out
cdef int i
for i in range(in_arr.shape[0]):
out.push_back(in_arr[i])
return out
build_list_with_vector(np.random.rand(10))
%%cython -a
#distutils: language=c++
from cython.operator cimport dereference as deref, preincrement as inc
from libcpp.vector cimport vector
from libcpp.map cimport map as cppmap
cdef class Graph:
cdef cppmap[int, vector[int]] _adj
cpdef int has_node(self, int node):
return self._adj.find(node) != self._adj.end()
cdef void add_node(self, int new_node):
cdef vector[int] out
if not self.has_node(new_node):
self._adj[new_node] = out
def add_edge(self, int u, int v):
self.add_node(u)
self.add_node(v)
self._adj[u].push_back(v)
self._adj[v].push_back(u)
def __getitem__(self, int u):
return self._adj[u]
cdef vector[int] _degrees(self):
cdef vector[int] deg
cdef int first = 0
cdef vector[int] edges
cdef cppmap[int, vector[int]].iterator it = self._adj.begin()
while it != self._adj.end():
deg.push_back(deref(it).second.size())
it = inc(it)
return deg
def degrees(self):
return self._degrees()
g0 = Graph()
g0.add_edge(1, 5)
g0.add_edge(1, 6)
g0[1]
g0.has_node(1)
g0.degrees()
import networkx as nx
g = nx.barabasi_albert_graph(100000, 6)
with open('graph.txt', 'w') as fout:
for u, v in g.edges_iter():
fout.write('%i,%i\n' % (u, v))
%timeit list(g.degree())
myg = Graph()
def line2edges(line):
u, v = map(int, line.rstrip().split(','))
return u, v
edges = map(line2edges, open('graph.txt'))
for u, v in edges:
myg.add_edge(u, v)
%timeit mydeg = myg.degrees()
from mean3 import mean3filter
mean3filter(np.random.rand(10))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, let's time this
Step2: Not too bad, but this can add up. Let's see if Cython can do better
Step3: That's a little bit faster, which is nice since all we did was to call Cython on the exact same code. But can we do better?
Step4: The final bit of "easy" Cython optimization is "declaring" the variables inside the function
Step5: 4X speedup with so little effort is pretty nice. What else can we do?
Step6: That's a lot of yellow still! How do we reduce this?
Step7: Rubbish! How do we fix this?
Step8: Exercise (if time)
Step9: Example
Step10: Using Cython in production code
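For Step10, the usual route to shipping Cython code is to compile the .pyx file at install time. A minimal setup.py sketch, assuming the filter above lives in mean3.pyx (the name matches the `from mean3 import mean3filter` used earlier); the numpy include dir is only needed if the module touches NumPy C headers.

# setup.py
from setuptools import setup, Extension
from Cython.Build import cythonize
import numpy as np

extensions = [Extension("mean3", ["mean3.pyx"], include_dirs=[np.get_include()])]

setup(
    name="mean3",
    ext_modules=cythonize(extensions),
)

# build in place with: python setup.py build_ext --inplace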
|
14,612 | <ASSISTANT_TASK:>
Python Code:
import rebound
import reboundx
import numpy as np
import astropy.units as u
import astropy.constants as constants
import matplotlib.pyplot as plt
%matplotlib inline
#Simulation begins here
sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun') #changes simulation and G to units of solar masses, years, and AU
sim.integrator = "whfast" #integrator for sim
sim.dt = .05 #timestep for sim
sim.add(m=1) #Adds Sun
sim.add(a=.5, f=0, Omega=0, omega=0, e=0, inc=0, m=0) #adds test particle
#Moves all particles to center of momentum frame
sim.move_to_com()
#Gives orbital information before the simulation begins
print("\n***INITIAL ORBITS:***")
for orbit in sim.calculate_orbits():
print(orbit)
density = (3000.0*u.kg/u.m**3).to(u.Msun/u.AU**3)
c = (constants.c).to(u.AU/u.yr) #speed of light
lstar = (3.828e26*u.kg*u.m**2/u.s**3).to(u.Msun*u.AU**2/u.yr**3) #luminosity of star
radius = (1000*u.m).to(u.AU) #radius of object
albedo = .017 #albedo of object
stef_boltz = constants.sigma_sb.to(u.Msun/u.yr**3/u.K**4) #Stefan-Boltzmann constant
emissivity = .9 #emissivity of object
k = .25 #constant between
Gamma = (310*u.kg/u.s**(5/2)).to(u.Msun/u.yr**(5/2)) #thermal inertia of object
rotation_period = (15470.9*u.s).to(u.yr) #rotation period of object
#Loads the effect into Rebound
rebx = reboundx.Extras(sim)
yark = rebx.load_force("yarkovsky_effect")
#Sets the parameters for the effect
yark.params["ye_c"] = c.value #set on the sim and not a particular particle
yark.params["ye_lstar"] = lstar.value #set on the sim and not a particular particle
yark.params["ye_stef_boltz"] = stef_boltz.value #set on the sim and not a particular particle
# Sets parameters for the particle
ps = sim.particles
ps[1].r = radius.value #remember radius is not set as a REBOUNDx parameter - it's set on the particle in the REBOUND sim
ps[1].params["ye_flag"] = 0 #setting this flag to 0 will give us the full version of the effect
ps[1].params["ye_body_density"] = density.value
ps[1].params["ye_albedo"] = albedo
ps[1].params["ye_emissivity"] = emissivity
ps[1].params["ye_k"] = k
ps[1].params["ye_thermal_inertia"] = Gamma.value
ps[1].params["ye_rotation_period"] = rotation_period.value
# For this example we assume the object has a spin axis perpendicular to the orbital plane: unit vector = (0,0,1)
ps[1].params["ye_spin_axis_x"] = 0
ps[1].params["ye_spin_axis_y"] = 0
ps[1].params["ye_spin_axis_z"] = 1
rebx.add_force(yark) #adds the force to the simulation
%%time
tmax=100000 # in yrs
Nout = 1000
times = np.linspace(0, tmax, Nout)
a_start = .5 #starting semi-major axis for the asteroid
a = np.zeros(Nout)
for i, time in enumerate(times):
a[i] = ps[1].a
sim.integrate(time)
a_final = ps[1].a #semi-major axis of asteroid after the sim
print("CHANGE IN SEMI-MAJOR AXIS:", a_final-a_start, "AU\n") #prints difference between the initial and final semi-major axes of asteroid
fig, ax = plt.subplots()
ax.plot(times, a-a_start, '.')
ax.set_xlabel('Time (yrs)')
ax.set_ylabel('Change in semimajor axis (AU)')
sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun') #changes simulation and G to units of solar masses, years, and AU
sim.integrator = "whfast" #integrator for sim
sim.dt = .05 #timestep for sim
sim.add(m=1) #Adds Sun
sim.add(a=.5, f=0, Omega=0, omega=0, e=0, inc=0, m=0) #adds test particle
sim.add(a=.75, f=0, Omega=0, omega=0, e=0, inc=0, m=0) #adds a second test particle
#Moves all particles to center of momentum frame
sim.move_to_com()
#Gives orbital information before the simulation begins
print("\n***INITIAL ORBITS:***")
for orbit in sim.calculate_orbits():
print(orbit)
#Loads the effect into Rebound
rebx = reboundx.Extras(sim)
yark = rebx.load_force("yarkovsky_effect")
#Sets the parameters for the effect
yark.params["ye_c"] = c.value
yark.params["ye_lstar"] = lstar.value
ps = sim.particles #simplifies way to access particles parameters
ps[1].params["ye_flag"] = 1 #setting this flag to 1 will give us the outward version of the effect
ps[1].params["ye_body_density"] = density.value
ps[1].params["ye_albedo"] = albedo
ps[1].r = radius.value #remember radius is not set as a REBOUNDx parameter - it's set on the particle in the REBOUND sim
ps[2].params["ye_flag"] = -1 #setting this flag to -1 will give us the inward version of the effect
ps[2].params["ye_body_density"] = density.value
ps[2].params["ye_albedo"] = albedo
ps[2].r = radius.value
rebx.add_force(yark) #adds the force to the simulation
%%time
tmax=100000 # in yrs
a_start_1 = .5 #starting semi-major axis for the 1st asteroid
a_start_2 = .75 #starting semi-major axis for the 2nd asteroid
a1, a2 = np.zeros(Nout), np.zeros(Nout)
for i, time in enumerate(times):
a1[i] = ps[1].a
a2[i] = ps[2].a
sim.integrate(time)
a_final_1 = ps[1].a #semi-major axis of 1st asteroid after the sim
a_final_2 = ps[2].a #semi-major axis of 2nd asteroid after the sim
print("CHANGE IN SEMI-MAJOR AXIS(Asteroid 1):", a_final_1-a_start_1, "AU\n")
print("CHANGE IN SEMI-MAJOR AXIS(Asteroid 2):", a_final_2-a_start_2, "AU\n")
fig, ax = plt.subplots()
ax.plot(times, a1-a_start_1, '.', label='Asteroid 1')
ax.plot(times, a2-a_start_2, '.', label='Asteroid 2')
ax.set_xlabel('Time (yrs)')
ax.set_ylabel('Change in semimajor axis (AU)')
ax.legend()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As with all REBOUNDx effects, the parameters must be entered in the same units as the simulation (in this case AU/Msun/yr). We'll use the astropy units module to help avoid errors (a short unit-conversion sketch follows this list)
Step2: We then add the Yarkovsky effect and the required parameters for this version. Importantly, we must set 'ye_flag' to 0 to get the Full Version. Physical constants and the stellar luminosity get added to the effect yark
Step3: Other parameters need to be added to each particle feeling the Yarkovsky effect
Step4: We integrate this system for 100,000 years and print out the difference between the particle's semi-major axis before and after the simulation.
Step5: Simple Version
Step6: We then add the Yarkovsky effect from REBOUNDx and the necessary parameters for this version. This time, we must make sure that 'ye_flag' is set to 1 or -1 to get the Simple Version of the effect. Setting it to 1 will push the asteroid outwards, while setting it to -1 will push it inwards. We'll push out our original asteroid and push in our new one. We use the same physical properties as in the example above
Step7: Now we run the sim for 100,000 years and print out the results for both asteroids. Note the difference in simulation times between the versions. Even with an extra particle, the simple version was faster than the full version.
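The unit handling from Step1 generalizes to any quantity: convert once with astropy and pass the bare .value into REBOUNDx. A small sketch with made-up numbers, not the asteroid above:

import astropy.units as u

# hypothetical quantities in everyday units
length = (1.0 * u.km).to(u.AU)
speed = (30.0 * u.km / u.s).to(u.AU / u.yr)
mass = (1.0e12 * u.kg).to(u.Msun)

# plain floats in simulation units (AU, yr, Msun)
print(length.value, speed.value, mass.value)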
|
14,613 | <ASSISTANT_TASK:>
Python Code:
# Load data sets
import pandas as pd
treeSourceUrl = './data/preds_yeastnet_no_gi_0.04_0.5.txt.propagate.small_parent_tree'
geneCountFile = './data/preds_yeastnet_no_gi_0.04_0.5.txt.propagate.term_sizes'
alignmentFile = './data/alignments_FDR_0.1_t_0.1'
geneAssignment = './data/preds_yeastnet_no_gi_0.04_0.5.txt.propagate.mapping'
# Load the tree data
treeColNames = ['parent', 'child', 'type', 'in_tree']
tree = pd.read_csv(treeSourceUrl, delimiter='\t', names=treeColNames)
tree.tail()
assignment = pd.read_csv(geneAssignment, sep='\t', names=['gene', 'clixo'])
print(assignment['clixo'].unique().shape)
assignment.head()
al = pd.read_csv(alignmentFile, sep='\t', names=['clixo', 'go', 'similarity', 'fdr', 'genes'])
al.head()
mapping = {}
for row in al.itertuples():
entry = {
'go': row[2],
'score': row[3],
'fdr': row[4]  # the alignment table column is named 'fdr'
}
mapping[str(row[1])] = entry
geneCounts = pd.read_csv(geneCountFile, names=['clixo', 'count'], sep='\t')
term2count = {}
for row in geneCounts.itertuples():
term2count[str(row[1])] = row[2].item()
# Get unique terms
clixo_terms = set()
for row in tree.itertuples():
etype = row[3]
if not etype.startswith('gene'):
clixo_terms.add(str(row[1]))
clixo_terms.add(str(row[2]))
print(len(clixo_terms))
import json
clixoTree = {
'data': {
'name': 'CLIXO Tree'
},
'elements': {
'nodes': [],
'edges': []
}
}
print(json.dumps(clixoTree, indent=4))
def get_node(id, count):
node = {
'data': {
'id': id,
'geneCount': count
}
}
return node
def get_edge(source, target):
edge = {
'data': {
'source': target,
'target': source
}
}
return edge
edges = []
PREFIX = 'CLIXO:'
for row in tree.itertuples():
etype = row[3]
in_tree = row[4]
if etype.startswith('gene') or in_tree == 'NOT_TREE':
continue
source = PREFIX + str(row[1])
child = PREFIX + str(row[2])
edges.append(get_edge(source, child))
print(len(edges))
nodes = []
for id in clixo_terms:
node = get_node(PREFIX + id, term2count[id])
nodes.append(node)
print(len(nodes))
clixoTree['elements']['nodes'] = nodes
clixoTree['elements']['edges'] = edges
with open('./data/clixo-tree.cyjs', 'w') as outfile:
json.dump(clixoTree, outfile)
import networkx as nx
DG=nx.DiGraph()
for node in nodes:
DG.add_node(node['data']['id'])
for edge in edges:
DG.add_edge(edge['data']['source'], edge['data']['target'])
import matplotlib.pyplot as plt
nx.draw_circular(DG)
# pos = nx.nx_pydot.pydot_layout(DG)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build Base CyJS Network
Step2: Layout with networkx
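For Step2, a force-directed layout can be computed directly in networkx and the coordinates copied into the CyJS JSON. A sketch that reuses the DG graph and clixoTree dictionary built above; the scale factor is arbitrary and the x/y keys follow the Cytoscape.js position convention.

pos = nx.spring_layout(DG, seed=42)  # dict: node id -> (x, y) roughly in [-1, 1]

scale = 500.0
for node in clixoTree['elements']['nodes']:
    node_id = node['data']['id']
    if node_id in pos:
        x, y = pos[node_id]
        node['position'] = {'x': float(x) * scale, 'y': float(y) * scale}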
|
14,614 | <ASSISTANT_TASK:>
Python Code:
help([1, 2, 3])
dir([1, 2, 3])
sum??
all([1==1, True, 10, -1]), all([1==5, True, 10, -1])
any([False, True]), any([False, False])
bin(12), oct(12), hex(12), int('12'), float(12)
ord('A'), chr(65)
raw_input(u"Podaj liczbฤ: ")
zip([1,2,3], [2, 3, 4])
sorted([8, 3, 12, 9, 3]), reversed(range(10)), list(reversed(range(10)))
len([3, 2, 1]), len([[1, 2], [3, 4, 5]])
list(), dict(), set(), tuple()
A = (1, 2, 3)
B = [1, 2, 3]
A == B
A = set()
A.add(2)
A.add(3)
A.add(4)
A
A.add(3)
A
B = set((4, 5, 6))
A.difference(B)
A.symmetric_difference(B)
A.intersection(B)
A.union(B)
pow(2, 10), divmod(10, 3), sum([1, 2, 3])
round(0.5), round(0.2), round(0.9)
min([1, 2, 3]), max([1, 2, 3])
abs(10), abs(-10)
24 % 5, 24 % 2
f = lambda x: x+1
f(3)
f = lambda a, b: a+b**3
f(2, 3)
map(lambda x: x+10, [0, 2, 5, 234])
[x+10 for x in [0, 2]]
map(chr, [80, 121, 67, 105, 114, 99, 108, 101])
[chr(x) for x in [80, 121, 67, 105, 114, 99, 108, 101]]
filter(lambda x: x > 0, [-1, 0, 4, -3, 2])
[x for x in [-1, 0, 4, -3, 2] if x > 0]
reduce(lambda a, b: a - b, [2, 3, 4])
2 - 3 - 4
%ls -l
fp = open("pycircle.txt", "w")
%ls -l
fp.write("Hello world\n")
fp.close()
%cat pycircle.txt
with open("pycircle.txt") as fp:
print fp.read(),
def fun1(a):
a.append(9)
return a
def fun2(a=[]):
a.append(9)
return a
lista1 = [1, 2, 3]
lista2 = [3, 4, 5]
fun1(lista1), fun2(lista2)
def fun2(a=[]):
a.append(9)
return a
fun2()
fun2()
fun2()
def show_local():
x = 23
print("Local: %s" % x)
show_local()
def show_enclosing(a):
def enclosing():
print("Enclosing: %s" % a)
enclosing()
show_enclosing(5)
x = 43
def show_global():
print("Global %s" % x)
show_global()
def show_built():
print("Built-in: %s" % abs)
show_built()
x = 43
def what_x():
print(x)
x = 4
what_x()
x = 43
def encl_x():
x = 23
def enclosing():
print("Enclosing: %s" % x)
enclosing()
encl_x()
x = 43
def what_about_globals():
global x
x = 37
print("In function %s" % x)
what_about_globals()
print("After function %s" % x)
def f(x):
f.l += x
print "x: ", x
print "f.l: ", f.l
f.l = 10
f(2)
f(14)
def powerer(power):
def nested(number):
return number ** power
return nested
f = powerer(3)
f(2), f(10)
def licznik(start):
def nested(label):
print(label, nested.state)
nested.state += 1
nested.state = start
return nested
f = licznik(0)
f('a')
f('b')
f('c')
' '.join(['a', 'b', 'c'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Built-in functions
Step2: Tuple
Step3: How does a tuple differ from a list?
Step4: Simple math
Step5: A bit of functional programming (see the functools.reduce note after this list)
Step6: More information about built-in functions at https
Step7: Functions
Step8: LEGB
Step9: Functions are objects too!
Step10: Function factories
Step11: Exercises 2
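One portability note for Step5: the examples above use Python 2 built-ins (raw_input, reduce, print statements). Under Python 3, reduce has to be imported from functools; a minimal sketch of the same fold as above:

# Python 3 version of the reduce example
from functools import reduce

print(reduce(lambda a, b: a - b, [2, 3, 4]))   # -5, same as 2 - 3 - 4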
|
14,615 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import fsic.data as data
import fsic.glo as glo
import fsic.indtest as it
import fsic.kernel as kernel
import fsic.plot as plot
import fsic.util as util
import scipy.stats as stats
plot.set_default_matplotlib_options()
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 18
}
plt.rc('font', **font)
def load_plot_vs_params(fname, h1_true=True, xlabel='Problem parameter', show_legend=True):
func_xvalues = lambda agg_results: agg_results['prob_params']
ex = 2
def func_title(agg_results):
repeats, _, n_methods = agg_results['job_results'].shape
alpha = agg_results['alpha']
test_size = (1.0 - agg_results['tr_proportion'])*agg_results['sample_size']
title = '%s. %d trials. test size: %d. $\\alpha$ = %.2g.'%\
( agg_results['prob_label'], repeats, test_size, alpha)
return title
#plt.figure(figsize=(10,5))
results = plot.plot_prob_reject(
ex, fname, h1_true, func_xvalues, xlabel=xlabel, func_title=func_title)
plt.title('')
plt.gca().legend(loc='best').set_visible(show_legend)
#plt.grid(True)
return results
def load_runtime_vs_params(fname, h1_true=True, xlabel='Problem parameter',
show_legend=True, xscale='linear', yscale='log'):
func_xvalues = lambda agg_results: agg_results['prob_params']
ex = 2
def func_title(agg_results):
repeats, _, n_methods = agg_results['job_results'].shape
alpha = agg_results['alpha']
title = '%s. %d trials. $\\alpha$ = %.2g.'%\
( agg_results['prob_label'], repeats, alpha)
return title
#plt.figure(figsize=(10,6))
results = plot.plot_runtime(ex, fname,
func_xvalues, xlabel=xlabel, func_title=func_title)
plt.title('')
plt.gca().legend(loc='best').set_visible(show_legend)
#plt.grid(True)
if xscale is not None:
plt.xscale(xscale)
if yscale is not None:
plt.yscale(yscale)
return results
# H0 true. Same Gaussian.
sg_fname = 'ex2-sg-me6_n4000_J1_rs300_pmi10.000_pma90.000_a0.050_trp0.50.p'
#sg_fname = 'ex2-sg-me5_n4000_J1_rs300_pmi10.000_pma90.000_a0.050_trp0.50.p'
#g_results = load_plot_vs_params(
# sg_fname, h1_true=False, xlabel='$d_x$ and $d_y$', show_legend=True)
#lt.ylim([0.03, 0.1])
#plt.savefig(gmd_fname.replace('.p', '.pdf', 1))
# H0 true. Same Gaussian. Large dimensions
#bsg_fname = 'ex2-bsg-me7_n4000_J1_rs300_pmi100.000_pma500.000_a0.050_trp0.50.p'
bsg_fname = 'ex2-bsg-me6_n4000_J1_rs300_pmi100.000_pma400.000_a0.050_trp0.50.p'
#bsg_results = load_plot_vs_params(bsg_fname, h1_true=False, xlabel='$d_x$ and $d_y$',
# show_legend=False)
#plt.ylim([0.03, 0.1])
#plt.savefig(bsg_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# sin frequency problem
sin_fname = 'ex2-sin-me6_n4000_J1_rs300_pmi1.000_pma6.000_a0.050_trp0.50.p'
# sin_fname = 'ex2-sin-me6_n4000_J1_rs100_pmi1.000_pma6.000_a0.050_trp0.20.p'
#sin_fname = 'ex2-sin-me7_n4000_J1_rs300_pmi1.000_pma6.000_a0.050_trp0.50.p'
sin_results = load_plot_vs_params(
sin_fname, h1_true=True, xlabel=r'$\omega$ in $1+\sin(\omega x)\sin(\omega y)$',
show_legend=False)
plt.savefig(sin_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# Gaussian sign problem
gsign_fname = 'ex2-gsign-me6_n4000_J1_rs300_pmi1.000_pma6.000_a0.050_trp0.50.p'
#gsign_fname = 'ex2-gsign-me7_n4000_J1_rs300_pmi1.000_pma6.000_a0.050_trp0.50.p'
#gsign_fname = 'ex2-gsign-me10_n4000_J1_rs100_pmi1.000_pma5.000_a0.050_trp0.50.p'
gsign_results = load_plot_vs_params(gsign_fname, h1_true=True,
xlabel='$d_x$', show_legend=False)
# plt.legend(bbox_to_anchor=(1.1, 1.05))
plt.savefig(gsign_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# H0 true. Same Gaussian. medium-sized dimensions
#msg_fname = 'ex2-msg-me10_n4000_J1_rs100_pmi100.000_pma500.000_a0.050_trp0.50.p'
msg_fname = 'ex2-msg-me6_n4000_J1_rs300_pmi50.000_pma250.000_a0.050_trp0.50.p'
msg_results = load_plot_vs_params(msg_fname, h1_true=False, xlabel='$d_x$ and $d_y$',
show_legend=False)
plt.savefig(msg_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
#plt.ylim([0.03, 0.1])
load_runtime_vs_params(msg_fname, h1_true=False, show_legend=False,
yscale='log', xlabel='$d_x$ and $d_y$');
plt.savefig(msg_fname.replace('.p', '', 1)+'_time.pdf', bbox_inches='tight')
# pairwise sign problem
pws_fname = 'ex2-pwsign-me6_n4000_J1_rs200_pmi20.000_pma100.000_a0.050_trp0.50.p'
#pwd_results = load_plot_vs_params(
# pws_fname, h1_true=True, xlabel=r'$d$',
# show_legend=True)
#plt.ylim([0, 1.1])
# uniform rotate with noise dimensions
urot_noise_fname = 'ex2-urot_noise-me6_n4000_J1_rs200_pmi0.000_pma6.000_a0.050_trp0.50.p'
#urot_noise_results = load_plot_vs_params(
# urot_noise_fname, h1_true=True, xlabel='Noise dimensions for X and Y',
# show_legend=True)
# Vary the rotation angle
#u2drot_fname = 'ex2-u2drot-me8_n4000_J1_rs200_pmi0.000_pma10.000_a0.010_trp0.50.p'
u2drot_fname = 'ex2-u2drot-me6_n4000_J1_rs200_pmi0.000_pma10.000_a0.050_trp0.50.p'
#u2drot_fname = 'ex2-u2drot-me5_n4000_J1_rs300_pmi0.000_pma10.000_a0.050_trp0.50.p'
#u2drot_results = load_plot_vs_params(
# u2drot_fname, h1_true=True, xlabel='Rotation angle (in degrees)', show_legend=True)
#plt.ylim([0, 0.05])
#fname = 'sin-job_nfsicJ10_opt-n4000_J1_r220_p5.000_a0.010_trp0.50.p'
#fname = 'sg-job_nfsicJ10_perm_med-n4000_J1_r8_p50.000_a0.050_trp0.50.p'
#fpath = glo.ex_result_file(2, 'sg', fname)
#result = glo.pickle_load(fpath)
#fname = 'ex2-sin-me7_n4000_J1_rs200_pmi1.000_pma5.000_a0.010_trp0.50.p'
fname = 'ex2-sg-me6_n4000_J1_rs100_pmi10.000_pma90.000_a0.050_trp0.50.p'
#fname = 'ex2-u2drot-me7_n4000_J1_rs200_pmi0.000_pma10.000_a0.010_trp0.50.p'
fpath = glo.ex_result_file(2, fname)
result = glo.pickle_load(fpath)
def load_tpm_table(ex, fname, key):
"""Load a trials x parameters x methods numpy array of results.
The value to load is specified by the key."""
results = glo.ex_load_result(ex, fname)
f_val = lambda job_results: job_results['test_result'][key]
vf_val = np.vectorize(f_val)
# results['job_results'] is a dictionary:
# {'test_result': (dict from running perform_test(te) '...':..., }
vals = vf_val(results['job_results'])
#repeats, _, n_methods = results['job_results'].shape
met_job_funcs = results['method_job_funcs']
return vals, met_job_funcs
sta, met_job_funcs = load_tpm_table(ex=2, fname=fname, key='test_stat')
sta.shape
met_job_funcs
nfsicJ10_stats = sta[:, :, 1]
plt.figure(figsize=(12, 5))
plt.imshow(nfsicJ10_stats.T, interpolation='none')
plt.colorbar(orientation='horizontal')
J = 10
thresh = stats.chi2.isf(0.05, df=J)
np.mean(nfsicJ10_stats > thresh, 0)
param_stats = nfsicJ10_stats[:, 3]
plt.hist(param_stats, normed=True)
dom = np.linspace(1e-1, np.max(param_stats)+2, 500)
chi2_den = stats.chi2.pdf(dom, df=J)
plt.plot(dom, chi2_den, '-')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A notebook to process experimental results of ex2_prob_params.py. p(reject) as problem parameters are varied.
Step2: A toy problem where X follows the standard multivariate Gaussian,
Step3: Examine a trial file
Step5: Examine a result file
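For Step5, if you only want to peek at a result file without the fsic helpers, the pickle can be opened directly. A sketch using the fpath set above with glo.ex_result_file; what keys are printed depends on how the experiment was saved.

import pickle

with open(fpath, 'rb') as f:
    raw_result = pickle.load(f)

print(type(raw_result))
print(sorted(raw_result.keys()) if isinstance(raw_result, dict) else raw_result)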
|
14,616 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import skrf as rf
from skrf.media import CPW
rf.stylely()
import matplotlib.pyplot as plt
# base parameters
freq = rf.Frequency(1e-3,10,1001,'ghz')
cpw = CPW(freq, w=0.6e-3, s=0.25e-3, ep_r=10.6)
"""
l1
0----+-=======-2
|
= c1
|
GND
l1
1----+-=======-3
|
= c1
|
GND
"""
l1 = cpw.line(20, 'mm', z0=50, embed=True)
c1 = cpw.shunt_capacitor(C=0.15e-12, z0=50)
l1 = rf.connect(c1, 1, l1, 0)
li = rf.concat_ports([l1, l1], port_order='second')
Fix = li
Fix.name = 'Fix'
Fix.nudge(1e-4)
Left = Fix
# flip fixture for right side
Right = Fix.flipped()
"""
l2
0-=======-2
l2
1-=======-3
"""
l2 = cpw.line(50, 'mm', z0=50, embed=True)
DUT = rf.concat_ports([l2, l2], port_order='second')
DUT.name = 'DUT'
DUT.nudge(1e-5)
"""
Left Meas Right
l1 l2 l1
0----+-=======-2 0-=======-2 0-=======-+----2
| |
= c1 = c1
| |
GND GND
l1 l2 l1
1----+-=======-3 1-=======-3 1-=======-+----3
| |
= c1 = c1
| |
GND GND
"""
Meas = Left ** DUT ** Right
Meas.name = 'Meas'
Meas.add_noise_polar(1e-5, 2)
DUTd = Left.inv ** Meas ** Right.inv
DUTd.name = 'DUTd'
fig, axarr = plt.subplots(2,2, sharex=True, figsize=(10,6))
ax = axarr[0,0]
Meas.plot_s_db(m=0, n=0, ax=ax)
DUTd.plot_s_db(m=0, n=0, ax=ax)
DUT.plot_s_db(m=0, n=0, ax=ax, ls=':', color='0.0')
ax.set_title('Return Loss')
ax.legend(loc='lower center', ncol=3)
ax.grid(True)
ax = axarr[0,1]
Meas.plot_s_db(m=2, n=0, ax=ax)
DUTd.plot_s_db(m=2, n=0, ax=ax)
DUT.plot_s_db(m=2, n=0, ax=ax, ls=':', color='0.0')
ax.set_title('Insertion Loss')
ax.legend(loc='lower center', ncol=3)
ax.grid(True)
ax = axarr[1,0]
Meas.plot_s_db(m=1, n=0, ax=ax)
DUTd.plot_s_db(m=1, n=0, ax=ax)
DUT.plot_s_db(m=1, n=0, ax=ax, ls=':', color='0.0')
ax.set_title('Isolation')
ax.legend(loc='lower center', ncol=3)
ax.grid(True)
ax = axarr[1,1]
Meas.plot_s_deg(m=2, n=0, ax=ax)
DUTd.plot_s_deg(m=2, n=0, ax=ax, marker='o', markevery=25)
DUT.plot_s_deg(m=2, n=0, ax=ax, ls=':', color='0.0')
ax.set_title('Insertion Loss')
ax.legend(loc='lower center', ncol=3)
ax.grid(True)
fig.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Build fixture network
Step4: Build DUT network
Step6: Build the measurement
Step7: Perform de-embedding
Step8: Display results
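Besides the plots in Step8, a quick numerical sanity check of the Step7 de-embedding is to compare the de-embedded S-parameters against the original DUT defined above. The tolerance below is an arbitrary choice, since noise was added to Meas.

import numpy as np

err = np.abs(DUTd.s - DUT.s).max()
print("max |S_deembedded - S_DUT| =", err)
print("agree within 1e-3:", bool(err < 1e-3))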
|
14,617 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd # load pandas under the alias pd
dfl = pd.read_csv('data/perros_o_gatos.csv', index_col='observacion')
print('These data were taken from the book Mastering machine learning with scikit-learn by Gavin Hackeling, \
PACKT Publishing open source, pp. 99')
dfl # In Jupyter, writing a bare variable makes the cell display its contents.
dfl.describe()
dfl['juega al busca'].sum()
dfl.loc[dfl['especie']=='perro','juega al busca'].sum()
labels = dfl['especie']
df = dfl[['juega al busca', 'apatico', 'comida favorita']]
df
labels
df['comida favorita'].value_counts()
from sklearn.feature_extraction import DictVectorizer
vectorizer = DictVectorizer(sparse=False)
ab = vectorizer.fit_transform(df.to_dict(orient='records'))
dft = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dft.head()
from numpy import log2
def entropia_perro_gato(count_perro, count_gato):
prob_perro = count_perro / float(count_perro + count_gato)
prob_gato = count_gato / float(count_perro + count_gato)
return 0.0 if not count_perro or not count_gato else -(prob_perro*log2(prob_perro) + prob_gato*log2(prob_gato))
perro = dfl['especie']=='perro'
gato = dfl['especie']=='gato'
no_busca = dfl['juega al busca']==False
si_busca = dfl['juega al busca']==True
print('%d dogs and %d cats do like playing fetch. H=%0.4f' % (
dfl[perro]['juega al busca'].sum(), # we can count by summing the number of True values
len(dfl[gato & si_busca]), # or by filtering and counting how many values remain
entropia_perro_gato(4,1),
))
print('%d dogs and %d cats do not like playing fetch. H=%0.4f' % (
len(df[perro&no_busca]),
len(df[gato&no_busca]),
entropia_perro_gato(len(dfl[perro & no_busca]),
len(dfl[gato & no_busca])),
))
print(entropia_perro_gato(0,6))
print(entropia_perro_gato(6,2))
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(dft, labels)
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
feat = pd.DataFrame(index=dft.keys(), data=classifier.feature_importances_, columns=['score'])
feat = feat.sort_values(by='score', ascending=False)
feat.plot(kind='bar',rot=85)
from sklearn.tree import export_graphviz
dotfile = open('perro_gato_tree.dot', 'w')
export_graphviz(
classifier,
out_file = dotfile,
filled=True,
feature_names = dft.columns,
class_names=list(labels),
rotate=True,
max_depth=None,
rounded=True,
)
dotfile.close()
!dot -Tpng perro_gato_tree.dot -o perro_gato_tree.png
from IPython.display import Image
Image('perro_gato_tree.png', width=1000)
import numpy as np
np.array(classifier.predict(dft))
np.array(labels)
print('Accuracy %0.4f'%((np.array(classifier.predict(dft))==np.array(labels)).sum() / float(len(labels))))
test = pd.read_csv('data/perros_o_gatos_TEST.csv', index_col='observacion')
test
label_test = test['especie']
del test['especie']
ab = vectorizer.transform(test.to_dict(orient='records'))
dftest = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dftest.head()
list(classifier.predict(dftest))
list(label_test)
print('Accuracy %0.4f'%((np.array(classifier.predict(dftest))==np.array(label_test)).sum() / float(len(label_test))))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A classification problem
Step2: The data consist of observations numbered 1 to 14 and 3 features represented as columns (also known as inputs). The species column is the answer to our problem, so it is not a feature. This means we only use it to check whether the machine learning algorithm is classifying well or not. This column (species) is usually called the target, label, output or y.
Step3: Sum, mean, median and standard deviation (sum, mean, median, std)
Step4: Pandas filters
Step5: 3. Let's separate the species column so we don't confuse it
Step6: The favourite-food variable is categorical!
Step7: 4. Encoding categorical variables
Step8: This way, our dataframe is now ready to be used by any of scikit-learn's classification algorithms.
Step9: Let's evaluate the question of whether it likes to play fetch
Step10: And what about cat food?
Step11: And don't forget the information gain
Step12: 5.1 Feature importance
Step13: 6. Visualizing the tree (requires graphviz)
Step14: The previous cell exported the decision tree built with sklearn and trained on our data to a .dot file
Step15: Finally, to load the image we use
Step16: 7. Model evaluation
Step17: Now let's evaluate on data the model has never seen (see the sketch below)
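A sketch of the Step17 evaluation using scikit-learn's accuracy_score instead of the manual comparison; note that the value printed above is an accuracy (fraction of correct predictions), not an error rate. It reuses the classifier, dftest and label_test variables defined in the code.

from sklearn.metrics import accuracy_score

acc = accuracy_score(label_test, classifier.predict(dftest))
print('Accuracy on unseen data: %0.4f' % acc)
print('Error rate on unseen data: %0.4f' % (1 - acc))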
|
14,618 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
## Create an ontology factory in order to fetch GO
from ontobio.ontol_factory import OntologyFactory
ofactory = OntologyFactory()
## GOLR queries
from ontobio.golr.golr_query import GolrAssociationQuery
## rendering ontologies
from ontobio import GraphRenderer
## Load GO. Note the first time this runs Jupyter will show '*' - be patient
ont = ofactory.create("go")
term_id = "GO:0009070" ## serine family amino acid biosynthetic process
descendants = ont.descendants(term_id, reflexive=True, relations=['subClassOf', 'BFO:0000050'])
descendants
renderer = GraphRenderer.create('tree')
print(renderer.render_subgraph(ont, nodes=descendants))
DEFAULT_FACET_FIELDS = ['taxon_subset_closure_label', 'evidence_label', 'assigned_by']
def summarize(t: str,
evidence_closure='ECO:0000269', ## restrict to experimental
facet_fields=None) -> dict:
"""Summarize a term."""
if facet_fields == None:
facet_fields = DEFAULT_FACET_FIELDS
q = GolrAssociationQuery(object=t, rows=0, object_category='function',
fq={'evidence_closure_label': 'experimental evidence'},
facet_fields=facet_fields)
#params = q.solr_params()
#print(params)
result = q.exec()
fc = result['facet_counts']
item = {'ALL': result['numFound']} ## make sure this is the first entry
for ff in facet_fields:
if ff in fc:
item.update(fc[ff])
return item
print(summarize(term_id))
def summarize_set(ids, facet_fields=None) -> pd.DataFrame:
"""Summarize a set of annotations, return a dataframe."""
items = []
for id in ids:
item = {'id': id, 'name': ont.label(id)}
for k,v in summarize(id, facet_fields=facet_fields).items():
item[k] = v
items.append(item)
df = pd.DataFrame(items).fillna(0)
# sort using total number
df.sort_values('ALL', axis=0, ascending=False, inplace=True)
return df
pd.options.display.max_columns = None
df = summarize_set(descendants)
df
summarize_set(descendants, facet_fields=['assigned_by'])
summarize_set(descendants, facet_fields=['taxon_subset_closure_label'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Finding descendants
Step2: rendering subtrees
Step5: summarizing annotations
Step6: Summarize GO term and descendants
Step7: Summary by assigned by
Step8: Summarize by species
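The per-species table from Step8 is an ordinary pandas DataFrame, so it can be plotted directly. A sketch, assuming the result of summarize_set is kept in a variable first; the exact column names depend on what GOlr returns, so the bookkeeping columns are dropped defensively.

species_df = summarize_set(descendants, facet_fields=['taxon_subset_closure_label'])
counts = species_df.set_index('id').drop(columns=['name', 'name:', 'ALL'], errors='ignore')
counts.plot.barh(stacked=True, figsize=(8, 6))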
|
14,619 | <ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem
import deepchem as dc
dc.__version__
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
model = dc.models.GraphConvModel(n_tasks=1, mode='regression', dropout=0.2)
model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print("Training set score:", model.evaluate(train_dataset, [metric], transformers))
print("Test set score:", model.evaluate(test_dataset, [metric], transformers))
solubilities = model.predict_on_batch(test_dataset.X[:10])
for molecule, solubility, test_solubility in zip(test_dataset.ids, solubilities, test_dataset.y):
print(solubility, test_solubility, molecule)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can of course run this tutorial locally if you prefer. In this case, don't run the above cell since it will download and install Anaconda on your local machine. In either case, we can now import the deepchem package to play with.
Step2: Training a Model with DeepChem
Step3: I won't say too much about this code right now. We will see many similar examples in later tutorials. There are two details I do want to draw your attention to. First, notice the featurizer argument passed to the load_delaney() function. Molecules can be represented in many ways. We therefore tell it which representation we want to use, or in more technical language, how to "featurize" the data. Second, notice that we actually get three different data sets
Step4: Here again I will not say much about the code. Later tutorials will give lots more information about GraphConvModel, as well as other types of models provided by DeepChem.
Step5: If everything has gone well, we should now have a fully trained model! But do we? To find out, we must evaluate the model on the test set. We do that by selecting an evaluation metric and calling evaluate() on the model. For this example, let's use the Pearson correlation, also known as r<sup>2</sup>, as our metric. We can evaluate it on both the training set and test set.
Step6: Notice that it has a higher score on the training set than the test set. Models usually perform better on the particular data they were trained on than they do on similar but independent data. This is called "overfitting", and it is the reason it is essential to evaluate your model on an independent test set.
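The Pearson r-squared metric used in Steps 5 and 6 is easy to reproduce by hand, which helps when sanity-checking evaluate(). A sketch on made-up arrays, not the model's actual predictions:

import numpy as np

y_true = np.array([-2.1, -0.5, 0.3, 1.2, 2.8])
y_pred = np.array([-1.8, -0.7, 0.1, 1.5, 2.5])

r = np.corrcoef(y_true, y_pred)[0, 1]
print("pearson r^2 ~", r ** 2)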
|
14,620 | <ASSISTANT_TASK:>
Python Code:
import graphlab
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import time
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
wiki = graphlab.SFrame('people_wiki.gl/')
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
from em_utilities import sframe_to_scipy # converter
# This will take about a minute or two.
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
'''cluster: should be a dictionary containing the following keys
* dataframe: original dataframe
* matrix: same data, in matrix format
* centroid: centroid for this particular cluster'''
data_matrix = cluster['matrix']
dataframe = cluster['dataframe']
# Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=-1)
kmeans_model.fit(data_matrix)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
# Divide the data matrix into two parts using the cluster assignments.
data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
data_matrix[cluster_assignment==1]
# Divide the dataframe into two parts, again using the cluster assignments.
cluster_assignment_sa = graphlab.SArray(cluster_assignment) # minor format conversion
dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
dataframe[cluster_assignment_sa==1]
# Package relevant variables for the child clusters
cluster_left_child = {'matrix': data_matrix_left_child,
'dataframe': dataframe_left_child,
'centroid': centroids[0]}
cluster_right_child = {'matrix': data_matrix_right_child,
'dataframe': dataframe_right_child,
'centroid': centroids[1]}
return (cluster_left_child, cluster_right_child)
wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=8, seed=1)
left_child
right_child
def display_single_tf_idf_cluster(cluster, map_index_to_word):
'''map_index_to_word: SFrame specifying the mapping betweeen words and column indices'''
wiki_subset = cluster['dataframe']
tf_idf_subset = cluster['matrix']
centroid = cluster['centroid']
# Print top 5 words with largest TF-IDF weights in the cluster
idx = centroid.argsort()[::-1]
for i in xrange(5):
print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]])),
print('')
# Compute distances from the centroid to all data points in the cluster.
distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
# compute nearest neighbors of the centroid within the cluster.
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in xrange(8):
text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('')
display_single_tf_idf_cluster(left_child, map_index_to_word)
display_single_tf_idf_cluster(right_child, map_index_to_word)
athletes = left_child
non_athletes = right_child
# Bipartition the cluster of athletes
left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=8, seed=1)
display_single_tf_idf_cluster(left_child_athletes, map_index_to_word)
display_single_tf_idf_cluster(right_child_athletes, map_index_to_word)
baseball = left_child_athletes
ice_hockey_football = right_child_athletes
left, right = bipartition(ice_hockey_football, maxiter=100, num_runs=8, seed=1)
display_single_tf_idf_cluster(left, map_index_to_word)
display_single_tf_idf_cluster(right, map_index_to_word)
# Bipartition the cluster of non-athletes
left_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=8, seed=1)
display_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word)
display_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word)
scholars_politicians_etc = left_child_non_athletes
musicians_artists_etc = right_child_non_athletes
s_left, s_right = bipartition(scholars_politicians_etc, maxiter=100, num_runs=8, seed=1)
m_left, m_right = bipartition(musicians_artists_etc, maxiter=100, num_runs=8, seed=1)
display_single_tf_idf_cluster(s_left, map_index_to_word)
display_single_tf_idf_cluster(s_right, map_index_to_word)
display_single_tf_idf_cluster(m_left, map_index_to_word)
display_single_tf_idf_cluster(m_right, map_index_to_word)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Wikipedia dataset
Step2: As we did in previous assignments, let's extract the TF-IDF features (a scikit-learn sketch of the same preprocessing appears after this list)
Step3: To run k-means on this dataset, we should convert the data matrix into a sparse matrix.
Step4: To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.
Step5: Bipartition the Wikipedia dataset using k-means
Step6: The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish.
Step7: Let's examine the contents of one of the two clusters, which we call the left_child, referring to the tree visualization above.
Step8: And here is the content of the other cluster we named right_child.
Step9: Visualize the bipartition
Step10: Let's visualize the two child clusters
Step11: The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows
Step12: Using the bipartition function, we produce two child clusters of the athlete cluster
Step13: The left child cluster mainly consists of baseball players
Step14: On the other hand, the right child cluster is a mix of football players and ice hockey players
Step15: Note. Concerning use of "football"
Step16: Cluster of ice hockey players and football players
Step17: Caution. The granularity criteria is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters.
Step18: The first cluster consists of scholars, politicians, and government officials whereas the second consists of musicians, artists, and actors. Run the following code cell to make convenient aliases for the clusters.
Step19: Quiz Question. Let us bipartition the clusters scholars_politicians_etc and musicians_artists_etc. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams.
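Step2 relies on GraphLab Create's TF-IDF; the same preprocessing as Steps 2 through 4 can be sketched with scikit-learn if GraphLab is not available. The toy documents below are made up; only the pattern matters.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

docs = ["bill ran for office", "maria plays baseball", "bill plays guitar"]
tfidf_matrix = TfidfVectorizer().fit_transform(docs)   # sparse matrix, one row per document
tfidf_matrix = normalize(tfidf_matrix)                 # unit-norm rows, as in Step4
print(tfidf_matrix.shape)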
|
14,621 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1, 2, 3])
print(a.shape)
print(a.size)
print(a.ndim)
x = np.arange(100)
print(x.shape)
print(x.size)
print(x.ndim)
y = np.random.rand(5, 80)
print(y.shape)
print(y.size)
print(y.ndim)
x.shape = (20, 5)
print(x)
y.shape = (4, 20, -1)
print(y.shape)
# Scalar Indexing
print(x[2])
# Slicing
print(x[2:5])
# Advanced slicing
print("First 5 rows\n", x[:5])
print("Row 18 to the end\n", x[18:])
print("Last 5 rows\n", x[-5:])
print("Reverse the rows\n", x[::-1])
# Boolean Indexing
print(x[(x % 2) == 0])
# Fancy Indexing -- Note the use of a list, not tuple!
print(x[[1, 3, 8, 9, 2]])
print("Shape of X:", x.shape)
print("Shape of Y:", y.shape)
a = x + y
print(a.shape)
b = x[np.newaxis, :, :] + y
print(b.shape)
c = np.tile(x, (4, 1, 1)) + y
print(c.shape)
print("Are a and b identical?", np.all(a == b))
print("Are a and c identical?", np.all(a == c))
x = np.arange(-5, 5, 0.1)
y = np.arange(-8, 8, 0.25)
print(x.shape, y.shape)
z = x[np.newaxis, :] * y[:, np.newaxis]
print(z.shape)
# More concisely
y, x = np.ogrid[-8:8:0.25, -5:5:0.1]
print(x.shape, y.shape)
z = x * y
print(z.shape)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Array Creation
Step2: Array Manipulation
Step3: NumPy can even automatically figure out the size of at most one dimension for you.
Step4: Array Indexing
Step5: Broadcasting
Step6: Now, here are three identical assignments. The first one takes full advantage of broadcasting by allowing NumPy to automatically add a new dimension to the left. The second explicitly adds that dimension with the special NumPy alias "np.newaxis". These first two creates a singleton dimension without any new arrays being created. That singleton dimension is then implicitly tiled, much like the third example to match with the RHS of the addition operator. However, unlike the third example, the broadcasting merely re-uses the existing data in memory.
Step7: Another example of broadcasting two 1-D arrays to make a 2-D array.
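One more broadcasting sketch in the spirit of Step7: an outer sum built from two 1-D arrays, checked against np.add.outer.

import numpy as np

row = np.arange(4)            # shape (4,)
col = np.arange(3) * 10       # shape (3,)
table = col[:, np.newaxis] + row[np.newaxis, :]   # shape (3, 4)
print(table)
print(np.array_equal(table, np.add.outer(col, row)))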
|
14,622 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import sklearn.cluster
simM = load_data()
model = sklearn.cluster.AgglomerativeClustering(affinity='precomputed', n_clusters=2, linkage='complete').fit(simM)
cluster_labels = model.labels_
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,623 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import time
import sys
import os
%matplotlib inline
# Change directory to the code folder
os.chdir('..//code')
# Functions to sample the diffusion-weighted gradient directions
from dipy.core.sphere import disperse_charges, HemiSphere
# Function to reconstruct the tables with the acquisition information
from dipy.core.gradients import gradient_table
# Functions to perform simulations based on multi-compartment models
from dipy.sims.voxel import multi_tensor
# Import Dipy's procedures to process diffusion tensor
import dipy.reconst.dti as dti
# Importing procedures to fit the free water elimination DTI model
from functions import (wls_fit_tensor, nls_fit_tensor)
# Sample the spherical cordinates of 32 random diffusion-weighted
# directions.
n_pts = 32
theta = np.pi * np.random.rand(n_pts)
phi = 2 * np.pi * np.random.rand(n_pts)
# Convert direction to cartesian coordinates. For this, Dipy's
# class object HemiSphere is used. Since diffusion possess central
# symmetric, this class object also projects the direction to an
# Hemisphere.
hsph_initial = HemiSphere(theta=theta, phi=phi)
# By using a electrostatic potential energy algorithm, the directions
# of the HemiSphere class object are moved util them are evenly
# distributed in the Hemi-sphere
hsph_updated, potential = disperse_charges(hsph_initial, 5000)
directions = hsph_updated.vertices
# Based on the evenly sampled directions, the acquistion parameters are
# simulated. Vector bvals containts the information of the b-values
# while matrix bvecs contains all gradient directions for all b-value repetitions.
bvals = np.hstack((np.zeros(6), 500 * np.ones(n_pts), 1500 * np.ones(n_pts)))
bvecs = np.vstack((np.zeros((6, 3)), directions, directions))
# bvals and bvecs are converted according to Dipy's accepted format using
# Dipy's function gradient_table
gtab = gradient_table(bvals, bvecs)
# Simulations are runned for the SNR defined according to Hoy et al, 2014
SNR = 40
# Setting the volume fraction (VF) to 100%.
VF = 100
# The value of free water diffusion is set to its known value
Dwater = 3e-3
# Simulations are repeated for 5 levels of fractional anisotropy
FA = np.array([0.71, 0.])
L1 = np.array([1.6e-3, 0.8e-03])
L2 = np.array([0.5e-3, 0.8e-03])
L3 = np.array([0.3e-3, 0.8e-03])
# According to Hoy et al., simulations are repeated for 120 different
# diffusion tensor directions (and each direction repeated 100 times).
nDTdirs = 120
nrep = 100
# These directions are sampled using the same procedure used
# to evenly sample the diffusion gradient directions
theta = np.pi * np.random.rand(nDTdirs)
phi = 2 * np.pi * np.random.rand(nDTdirs)
hsph_initial = HemiSphere(theta=theta, phi=phi)
hsph_updated, potential = disperse_charges(hsph_initial, 5000)
DTdirs = hsph_updated.vertices
# Initializing a matrix to save all synthetic diffusion-weighted
# signals. Each dimension of this matrix corresponds to the number
# of simulated FA levels, free water volume fractions,
# diffusion tensor directions, and diffusion-weighted signals
# of the given gradient table
DWI_simulates = np.empty((FA.size, 1, nrep * nDTdirs, bvals.size))
for fa_i in range(FA.size):
# selecting the diffusion eigenvalues for a given FA level
mevals = np.array([[L1[fa_i], L2[fa_i], L3[fa_i]],
[Dwater, Dwater, Dwater]])
# estimating volume fractions for both simulations
# compartments (in this case 0 and 100)
fractions = [100 - VF, VF]
for di in range(nDTdirs):
# Select a diffusion tensor direction
d = DTdirs[di]
# Repeat simulations for the given directions
for s_i in np.arange(di * nrep, (di+1) * nrep):
# Multi-compartmental simulations are done using
# Dipy's function multi_tensor
signal, sticks = multi_tensor(gtab, mevals,
S0=100,
angles=[d, (1, 0, 0)],
fractions=fractions,
snr=SNR)
DWI_simulates[fa_i, 0, s_i, :] = signal
prog = (fa_i+1.0) / FA.size * 100
time.sleep(1)
sys.stdout.write("\r%f%%" % prog)
sys.stdout.flush()
t0 = time.time()
fw_params = wls_fit_tensor(gtab, DWI_simulates, Diso=Dwater,
mdreg=None)
dt = time.time() - t0
print("This step took %f seconds to run" % dt)
fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(15, 10))
fig.subplots_adjust(hspace=0.3, wspace=0.4)
# Compute the tissue's diffusion tensor mean diffusivity
# using the functions mean_diffusivity of Dipy's module dti
md = dti.mean_diffusivity(fw_params[..., :3])
# Extract the water volume fraction estimates from the fitted
# model parameters
f = fw_params[..., 12]
# Defining the colors of the figure
colors = {0: 'r', 1: 'g'}
# Plot figures for both FA extreme levels (0 and 0.71)
for fa_i in range(FA.size):
# Set histogram's number of bins
nbins = 100
# Plot tensor's mean diffusivity histograms
axs[fa_i, 0].hist(md[fa_i, 0, :], nbins)
axs[fa_i, 0].set_xlabel("Tensor's mean diffusivity ($mm^2/s$)")
axs[fa_i, 0].set_ylabel('Absolute frequencies')
# Plot water volume fraction histograms
axs[fa_i, 1].hist(f[fa_i, 0, :], nbins)
axs[fa_i, 1].set_xlabel('Free water estimates')
axs[fa_i, 1].set_ylabel('Absolute frequencies')
# Plot mean diffusivity as a function of f estimates
axs[fa_i, 2].plot(f[fa_i, 0, :].ravel(), md[fa_i, 0, :].ravel(), '.')
axs[fa_i, 2].set_xlabel('Free water estimates')
axs[fa_i, 2].set_ylabel("Tensor's mean diffusivity ($mm^2/s$)")
# Save Figure
fig.savefig('Pure_free_water_F_and_tensor_MD_estimates.png')
# Sampling the free water volume fraction between 70% and 100%.
VF = np.linspace(70, 100, 31)
# Initializing a matrix to save all synthetic diffusion-weighted
# signals. Each dimension of this matrix corresponds to the number
# of simulated FA levels, volume fractions, diffusion tensor
# directions, and diffusion-weighted signals of the given
# gradient table
DWI_simulates = np.empty((FA.size, VF.size, nrep * nDTdirs,
bvals.size))
for fa_i in range(FA.size):
# selecting the diffusion eigenvalues for a given FA level
mevals = np.array([[L1[fa_i], L2[fa_i], L3[fa_i]],
[Dwater, Dwater, Dwater]])
for vf_i in range(VF.size):
# estimating volume fractions for both simulations
# compartments
fractions = [100 - VF[vf_i], VF[vf_i]]
for di in range(nDTdirs):
# Select a diffusion tensor direction
d = DTdirs[di]
# Repeat simulations for the given directions
for s_i in np.arange(di * nrep, (di+1) * nrep):
# Multi-compartmental simulations are done using
# Dipy's function multi_tensor
signal, sticks = multi_tensor(gtab, mevals,
S0=100,
angles=[d, (1, 0, 0)],
fractions=fractions,
snr=SNR)
DWI_simulates[fa_i, vf_i, s_i, :] = signal
prog = (fa_i+1.0) * (vf_i+1.0) / (FA.size * VF.size) * 100
time.sleep(1)
sys.stdout.write("\r%f%%" % prog)
sys.stdout.flush()
t0 = time.time()
fw_params = wls_fit_tensor(gtab, DWI_simulates, Diso=Dwater,
mdreg=None)
dt = time.time() - t0
print("This step took %f seconds to run" % dt)
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 5))
# Compute the tissue's compartment mean diffusivity
# using function mean_diffusivity of Dipy's module dti
md = dti.mean_diffusivity(fw_params[..., :3])
# Set the md threshold to classify overestimated values
# of md
md_th = 1.5e-3;
# Initializing vector to save the percentage of high
# md values
p_high_md = np.empty(VF.size)
# Position of bar
for fa_i in range(FA.size):
for vf_i in range(VF.size):
# Select the mean diffusivity values for the given
# water volume fraction and FA level
md_vector = md[fa_i, vf_i, :].ravel()
p_high_md[vf_i] = (sum(md_vector > md_th) * 100.0) / (nrep*nDTdirs)
# Plot FA statistics as a function of the ground truth
# water volume fraction. Note that position of bars are
# shifted by 0.5 so that centre of bars correspond to the
# ground truth volume fractions
axs[fa_i].bar(VF - 0.5, p_high_md)
# Adjust properties of panels
axs[fa_i].set_xlim([70, 100.5])
axs[fa_i].set_ylim([0, 100])
axs[fa_i].set_xlabel('Ground truth f-value')
axs[fa_i].set_ylabel('Percentage of high MD')
# Save figure
fig.savefig('Percentage_High_MD.png')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below we define the simulated acquisition parameters
Step2: Next the ground truth values of tissue and water diffusion are defined. Simulations are first run for a voxel containing only water for the extreme FA values.
Step3: Having the parameters set, the simulations are processed below
Step4: To analyse the issue before its correction, the small volume fractions estimates are analysed at the parameters initial estimation using the wls procedure and turning the parameter reinitialization off (mdreg=None).
Step5: Below the histograms of the tensor's mean diffusivity and volume fraction estimates are computed. Additionally, the estimates of tensor's mean diffusivity as a function of the estimated volume fraction is plotted. Upper and lower panels corresponds to the higher and lower FA value tested.
Step6: The above figure shows that pure free water signals produce tensor's mean diffusivity is almost always larger than $2.0 \times 10 ^{-3} mm^2/s$ (left panels). Therefore, the threshold of $1.5 \times 10 ^{-3} mm^2/s$ proposed by Hoy et al. (2014) seems more than adequate to identifying voxels that contain mostly free water. It is also important to note that this threshold should not affect plausible mean diffusivity estimates with small free water contaminations because typical tissue's mean diffusivity have values that are two times smaller than this threshold.
Step7: Simulations are rerun for the resampled ground truth free water volume fraction
Step8: The parameters initial estimation is processed for the ground trutg volume fraction from 0.7 to 1 using the wls procedure and turning the parameter reinitialization off (mdreg=None).
Step9: Below the percentage of parameter estimates with tensor's mean diffusivity higher than $2.0 \times 10 ^{-3} mm^2/s$ are plotted as a function of the f ground truth values for the higher (left panel) and lower (right panel) fractional anisotropy levels.
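The percentage plotted in Step9 boils down to counting how many mean-diffusivity samples exceed the threshold. A standalone numpy sketch with made-up values; in the notebook the md array and md_th play these roles.

import numpy as np

md_vector = np.array([0.7e-3, 1.2e-3, 1.6e-3, 2.4e-3, 3.0e-3])
md_th = 1.5e-3
p_high_md = 100.0 * np.sum(md_vector > md_th) / md_vector.size
print("%.1f%% of estimates above threshold" % p_high_md)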
|
14,624 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
log = pd.read_csv("../dataset/git_log_intellij.csv.gz")
log.head()
log.info()
log['timestamp'] = pd.to_datetime(log['timestamp'])
log.head()
recent = log[log['timestamp'] > log['timestamp'].max() - pd.Timedelta('90 days')]
recent.head()
java = recent[recent['filename'].str.endswith(".java")]
java.head()
changes = java.groupby('filename')[['sha']].count()
changes.head()
loc = pd.read_csv("../dataset/cloc_intellij.csv.gz", index_col=1)
loc.head()
hotspots = changes.join(loc[['code']]).dropna(subset=["code"])
hotspots.head()
top10 = hotspots.sort_values(by="sha", ascending=False).head(10)
top10
# %load ugly_plotting_code.py
ax = top10.plot.scatter('sha', 'code');
for k, v in top10.iterrows():
ax.annotate(k.split("/")[-1], v)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We explore the loaded data.
Step2: <b>1</b> DataFrame (~ a programmable Excel worksheet), <b>6</b> Series (= columns), <b>1128819</b> rows (= entries)
Step3: We look only at the most recent changes.
Step4: We want to use only Java code.
Step5: III. Formal modelling
Step6: We add information about the lines of code...
Step7: ...and join it with the existing data.
Step8: IV. Interpretation
Step9: V. Communication
|
14,625 | <ASSISTANT_TASK:>
Python Code:
# coding: utf-8
from sklearn import datasets
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
iris = datasets.load_iris()  # load the Iris dataset
X = iris.data[:, [2, 3]]  # of the 4 features (sepal length/width, petal length/width), keep only petal length and petal width
y = iris.target
# hold out 30% of the data set as the test set and use the remaining 70% for training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
sc = StandardScaler()  # standardize the features, which generally improves accuracy
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
ppn = Perceptron(tol=40, eta0=0.1, random_state=0)  # train a perceptron classifier with scikit-learn
ppn.fit(X_train_std, y_train)
y_pred = ppn.predict(X_test_std)
print('Misclassified samples: %d' % (y_test != y_pred).sum())  # number of misclassified test samples
print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))  # classification accuracy on the test set
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot all samples
X_test, y_test = X[test_idx, :], y[test_idx]
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8, c=cmap(idx), marker=markers[idx], label=cl)
# highlight test samples
if test_idx:
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0], X_test[:, 1], c='',
alpha=1.0, linewidth=1, marker='o', s=55, label='test set')
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X=X_combined_std, y=y_combined, classifier=ppn, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined, classifier=lr, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
from sklearn.svm import SVC
svm = SVC(kernel='linear', C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined, classifier=svm, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
np.random.seed(0)
X_xor = np.random.randn(200, 2)
y_xor = np.logical_xor(X_xor[:, 0] > 0, X_xor[:, 1] > 0)
y_xor = np.where(y_xor, 1, -1)
plt.scatter(X_xor[y_xor==1, 0], X_xor[y_xor==1, 1], c='b', marker='x', label='1')
plt.scatter(X_xor[y_xor==-1, 0], X_xor[y_xor==-1, 1], c='r', marker='s', label='-1')
plt.ylim(-3.0)
plt.legend()
plt.show()
svm = SVC(kernel='rbf', random_state=0, gamma=0.10, C=10.0)
svm.fit(X_xor, y_xor)
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.show()
svm = SVC(kernel='rbf', random_state=0, gamma=0.2, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined, classifier=svm, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
svm = SVC(kernel='rbf', random_state=0, gamma=100.0, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined, classifier=svm, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
tree.fit(X_train, y_train)
X_combined = np.vstack((X_train, X_test))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined, y_combined, classifier=tree, test_idx=range(105,150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.show()
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file='tree.dot', feature_names=['petal length', 'petal width'])
from sklearn.tree import export_graphviz
from IPython.display import Image
from sklearn.externals.six import StringIO
import pydot
dot_data = StringIO()
export_graphviz(tree, out_file=dot_data, feature_names=['petal length', 'petal width'])
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1, n_jobs=2)
forest.fit(X_train, y_train)
plot_decision_regions(X_combined, y_combined, classifier=forest, test_idx=range(105,150))
plt.xlabel('petal length')
plt.ylabel('petal width')
plt.legend(loc='upper left')
plt.show()
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined, classifier=knn, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.show()
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium',
'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
print('Class labels', np.unique(df_wine['Class label']))
df_wine.head()
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
lr = LogisticRegression()
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy:', lr.score(X_test_std, y_test))
lr1 = LogisticRegression(penalty='l1',C=5)
lr1.fit(X_train_std, y_train)
print('Training accuracy (with L1 regularization):', lr1.score(X_train_std, y_train))
print('Test accuracy (with L1 regularization):', lr1.score(X_test_std, y_test))
from sklearn.base import clone
from itertools import combinations
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
class SBS():
def __init__(self, estimator, k_features, scoring=accuracy_score, test_size=0.25, random_state=1):
self.scoring = scoring
self.estimator = clone(estimator)
self.k_features = k_features
self.test_size = test_size
self.random_state = random_state
def fit(self, X, y):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=self.test_size, random_state=self.random_state)
dim = X_train.shape[1]
self.indices_ = tuple(range(dim))
self.subsets_ = [self.indices_]
score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)
self.scores_ = [score]
while dim > self.k_features:
scores = []
subsets = []
for p in combinations(self.indices_, r=dim-1):
score = self._calc_score(X_train, y_train, X_test, y_test, p)
scores.append(score)
subsets.append(p)
best = np.argmax(scores)
self.indices_ = subsets[best]
self.subsets_.append(self.indices_)
dim -= 1
self.scores_.append(scores[best])
self.k_score_ = self.scores_[-1]
return self
def transform(self, X):
return X[:, self.indices_]
def _calc_score(self, X_train, y_train, X_test, y_test, indices):
self.estimator.fit(X_train[:, indices], y_train)
y_pred = self.estimator.predict(X_test[:, indices])
score = self.scoring(y_test, y_pred)
return score
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
knn = KNeighborsClassifier(n_neighbors=2)
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)
k_feat = [len(k) for k in sbs.subsets_]
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([0.7, 1.1])
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.grid()
plt.show()
k5 = list(sbs.subsets_[8])
print(df_wine.columns[1:][k5])
knn.fit(X_train_std, y_train)
print('Training accuracy:', knn.score(X_train_std, y_train))
print('Test accuracy:', knn.score(X_test_std, y_test))
knn.fit(X_train_std[:, k5], y_train)
print('Training accuracy(select 5):', knn.score(X_train_std[:, k5], y_train))
print('Test accuracy(select 5):', knn.score(X_test_std[:, k5], y_test))
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_wine.columns[1:]
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30, feat_labels[f], importances[indices[f]]))
plt.title('Feature Importances')
plt.bar(range(X_train.shape[1]), importances[indices], color='lightblue', align='center')
plt.xticks(range(X_train.shape[1]), feat_labels, rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
plt.show()
from matplotlib.colors import ListedColormap
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution), np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8, c=cmap(idx), marker=markers[idx], label=cl)
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.fit_transform(X_test)
pca = PCA(n_components=2)
lr = LogisticRegression()
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC1 train')
plt.ylabel('PC2 train')
plt.legend(loc='lower left')
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC1 test')
plt.ylabel('PC2 test')
plt.legend(loc='lower left')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A machine learning model is evaluated on the test set rather than on the training set it was fitted to; overfitting means the model matches the training data well but performs poorly on unseen data. The code below visualizes the trained classifier's decision regions.
Step2: The perceptron algorithm cannot converge on data that is not linearly separable, so it is rarely used on its own in practice; the stronger linear classifiers introduced next handle such data more gracefully (a short illustration on XOR-style data follows this step list).
Step3: Use regularization to deal with overfitting
Step4: Use a kernel SVM to solve a non-linear classification problem
Step5: Adjust the gamma parameter
Step6: 1.5 Implement a classifier with a scikit-learn decision tree
Step7: 1.6 Implement a classifier with a random forest
Step8: 1.7 Implement a classifier with k-nearest neighbours
Step9: Reducing overfitting
Step10: Sequential backward selection (SBS) for feature selection
Step11: We keep the 5 selected features and check whether this brings an improvement; the results show that with fewer attributes the test-set accuracy increases by about 2%.
Step12: Use a random forest to estimate feature importance
Step13: 2.1 Compressing the data via dimensionality reduction
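A small sketch (not from the original notebook) illustrating the claim in Step2: a plain perceptron cannot separate XOR-like data, so its accuracy stays close to chance. All variable names below are made up for the illustration.
import numpy as np
from sklearn.linear_model import Perceptron
rng = np.random.RandomState(0)
X_xor_demo = rng.randn(200, 2)
y_xor_demo = np.where(np.logical_xor(X_xor_demo[:, 0] > 0, X_xor_demo[:, 1] > 0), 1, -1)
ppn_demo = Perceptron(max_iter=100, eta0=0.1, random_state=0)
ppn_demo.fit(X_xor_demo, y_xor_demo)
print('Perceptron accuracy on XOR-style data: %.2f' % ppn_demo.score(X_xor_demo, y_xor_demo))
# roughly 0.5, i.e. no better than guessing, because the two classes are not linearly separable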
|
14,626 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as py
#import scipy
# Make the graphs a bit prettier, and bigger
#pd.set_option('display.mpl_style', 'default')
#plt.rcParams['figure.figsize'] = (15, 5)
# This is necessary to show lots of columns in pandas 0.12.
# Not necessary in pandas 0.13.
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
pot="csv-datoteke/smucarji.csv"
smucarji = pd.read_csv(pot, index_col='id')
smucarji[:10]
pot_brem = "csv-datoteke/BREM Eva-Maria.csv"
brem = pd.read_csv(pot_brem, parse_dates=['datum'])
brem[:10]
pot1 ="csv-datoteke/vse.csv"
vse = pd.read_csv(pot1, parse_dates=['datum'])
vse[197:203]
def pretvori(bes):
if bes in ['DNQ1', 'DNF1', 'DSQ2', 'DNS1','DNF2','DSQ1','DNS2','DNQ2','DNF','DNC1','DSQ','DNS']:
return 0
else:
return int(bes)
tocke=[0,100,80,60,50,45,40,36,32,29,26,24,22,20,18,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1]
def pretvori_2(bes):
if bes in ["DNQ1", "DNF1", "DSQ2", "DNS1", "DNF2"]:
return 0
else:
if int(bes) > 30:
return 0
else:
return tocke[int(bes)];
vse['mesto1'] = vse['mesto'].map(pretvori)
brem['mesto1'] = brem['mesto'].map(pretvori)
vse['tocke'] = vse['mesto1'].map(pretvori_2)
brem['tocke'] = brem['mesto1'].map(pretvori_2)
brem[:5]
vse[2024:2028]
brem['disciplina'].value_counts()
brem['disciplina'].value_counts().plot(kind='pie', figsize=(6,6))
slalom = brem['disciplina'] == 'Slalom'
brem[slalom][:10]
veleslalom = brem['disciplina'] == 'Giant Slalom'
brem[veleslalom][:10]
brem[slalom]['mesto1'].value_counts().plot(kind='bar', title="Rezultati E. M. Brem v slalomu")
brem[veleslalom]['mesto1'].value_counts().plot(kind='bar', title='Rezultati E. M. Brem v veleslalomu')
brem.groupby(['disciplina'])['tocke'].sum() / brem['tocke'].sum()
prvi_del = brem[brem['datum'].dt.year == 2016]
drugi_del = brem[(brem['datum'].dt.month > 9) & (brem['datum'].dt.year == 2015)]
tabela = prvi_del.append(drugi_del)
tabela
tabela['mesto1'].value_counts().plot(kind='pie')
tabela[tabela['disciplina'] == 'Giant Slalom']['mesto1'].value_counts().plot(kind='pie')
po_krajih = brem.groupby(['kraj'])['tocke'].sum() / brem['tocke'].sum()
po_krajih.nlargest(7)
po_krajih1 = brem.groupby(['kraj'])['tocke'].mean()
po_krajih1.nlargest(7)
brem.groupby(['kraj']).size().sort_values(ascending = False)
sezona = vse[vse['datum'].dt.year == 2016]
drugi_del_vse = vse[(vse['datum'].dt.month > 9) & (vse['datum'].dt.year == 2015)]
sezona.append(drugi_del)[40:46]
sezona.groupby(['id']).size()[:10]
sezona.groupby(['id']).size().max()
nova = sezona.groupby(['id']).size()
nova.nlargest(6)
print(smucarji.get_value(index = 125871, col = 'ime'), ", ", smucarji.get_value(index = 127048, col = 'ime'))
sezona.groupby(['id']).agg({'tocke':sum}).nlargest(columns = 'tocke', n = 10)
naj = [106332,127048,154950,125871,104502,27657,30368,107164,109079,137380]
for i in naj:
print(i,': ', smucarji.get_value(index = i, col = 'ime'))
smucarji['drzava'].value_counts().head(10)
smucarji['drzava'].value_counts().plot(kind='bar', figsize = (12,6))
smucarji['smuci'].value_counts()
smucarji['smuci'].value_counts().plot(kind='pie', figsize=(6,6))
smucarji[smucarji['smuci'] == "Head"]['drzava'].value_counts().plot(kind='bar')
smucarji[smucarji['drzava'] == "AUT"]['smuci'].value_counts().plot(kind='bar')
oboje = smucarji.groupby(['smuci','drzava']).size()
priprava = oboje.unstack().plot(kind='barh', stacked=True, figsize=(20, 14), fontsize=12)
priprava.legend(loc=(0.02,1), ncol=6)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First I scraped data about the skiers and their FIS id numbers from the FIS website. I needed the ids to build the URL addresses of the individual athletes. I then saved the collected data to the file smucarji.csv.
Step2: The table looks like this
Step3: Then, for each competitor, I scraped data about every race from the page with his or her results (e.g. Eva-Maria Brem)
Step4: The table for Eva-Maria Brem
Step5: Unfortunately the dates will not be of much use, because on the results page they are written in several different formats. As an example, look at the first and second rows of the table above. From the first row we could conclude that the format is YYYY-MM-DD, but that would mean the race in the second row took place on 3 July 2016, which for obvious reasons is not true. Unfortunately I cannot repair this (a small sketch of one way to at least flag such ambiguous dates follows this step list).
Step6: The table looks like this
Step7: A problem appears in the later analysis: the placement data can be either numbers or text (e.g. DNQ1, DNF1, DSQ2 and DNS1) marking withdrawals, disqualifications and similar anomalies.
Step8: If we want to analyse the overall standings, we also have to convert placings into points. We define a list 'tocke' whose i-th entry (i running from 0 to 30) stores how many points a competitor receives for finishing in i-th place, and then write a function that converts a placing into the points won.
Step9: We now add the new columns to the tables
Step10: The table above shows the processed table for Eva-Maria Brem. We did the same with the table containing all the data
Step11: Analysis
Step12: Eva-Maria Brem therefore competes most often in slalom and giant slalom. Let us illustrate this with a chart
Step13: Although she competes most often in slalom and giant slalom, these are not necessarily the disciplines in which she achieves her best results. Let us first look at her results in slalom and then in giant slalom
Step14: The tables show that her slalom results mostly sit at the tail end of the top thirty, while in giant slalom she places among the top 5. This is even clearer in the charts
Step15: Let us also look at how many points she won on average in each discipline, to determine her signature discipline.
Step16: Giant slalom is thus quite clearly the discipline that brings her the most points.
Step17: Since the dates are of no use, let us look only at the placings achieved
Step18: Let us look at a similar chart separately for giant slalom
Step19: We notice that one result stands out
Step20: In Are she won as much as 20% of all her points. Does she also achieve her highest placings there?
Step21: She clearly achieves her highest placings in Meribel. To explain the results better, let us see how many races Eva-Maria Brem entered at each venue.
Step22: E. M. Brem entered by far the most races in Are, so it is no surprise that she collected the most points there. Likewise we cannot claim that Meribel is her 'best' venue, since she competed there in only one race.
Step23: We are interested in how many starts the competitors accumulated in the current season. We show the number of starts for 10 competitors.
Step24: The competitor with the most starts appeared 21 times. Who is it?
Step25: There are in fact two such competitors, with id numbers 125871 and 127048 - Lara Gut and Alexis Pinturault.
Step26: And who collected the most points? Let us look at the top 10
Step27: And their names
Step28: Analysis of nationalities
Step29: Analysis of skis
Step30: Let us see which countries' representatives use Head skis (and how many of them there are)
Step31: Similarly, we can look at which ski manufacturers the skiers from Austria trust the most
Step32: Analysing a single manufacturer or a single country does not give the bigger picture, so let us look at a chart that shows, for each country, which manufacturer's skis its representatives use.
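A small sketch (an addition, not part of the original analysis) of one way to at least flag the ambiguous dates mentioned in Step5: parse the raw date strings twice, once month-first and once day-first, and mark the rows where the two interpretations disagree. raw_dates is a hypothetical Series of unparsed strings.
import pandas as pd
raw_dates = pd.Series(['07-03-2016', '2016-03-07', '25-12-2016'])  # made-up examples
d1 = pd.to_datetime(raw_dates, dayfirst=False, errors='coerce')
d2 = pd.to_datetime(raw_dates, dayfirst=True, errors='coerce')
ambiguous = d1.ne(d2) | d1.isna() | d2.isna()
print(raw_dates[ambiguous])  # rows whose format cannot be decided automatically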
|
14,627 | <ASSISTANT_TASK:>
Python Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from NuPyCEE import omega
from NuPyCEE import sygma
# Run original OMEGA with 1000 timesteps (this may take a minute ..)
o_ori = omega.omega(galaxy='milky_way', special_timesteps=1000)
# Let's create the timestep template for each SSP
# Here, you can decide any type of timestep option
s_dt_template = sygma.sygma(special_timesteps=1000)
# Copy the SSP timestep array
dt_in_SSPs = s_dt_template.history.timesteps
# Let's pre-calculate the SSPs.
# Here I choose a very low number of OMEGA timesteps.
# I do this because I only want to use this instance
# to copy the SSPs, so I won't have to recalculate
# them each time I want to run an OMEGA simulation (in the fast version).
# You can ignore the Warning notice.
o_for_SSPs = omega.omega(special_timesteps=2, pre_calculate_SSPs=True, dt_in_SSPs=dt_in_SSPs)
# Let's copy the SSPs array
SSPs_in = [o_for_SSPs.ej_SSP, o_for_SSPs.ej_SSP_coef, o_for_SSPs.dt_ssp, o_for_SSPs.t_ssp]
# SSPs_in[0] --> Mass (in log) ejected for each isotope. It's an array in the form of [nb Z][nb SSP dt][isotope]
# SSPs_in[1] --> Interpolation coefficients of each isotope
# SSPs_in[2] --> List of timesteps
# SSPs_in[3] --> List of galactic ages
# Finally, let's run the fast version (1000 timesteps).
# This should be ~3 times faster
o_fast = omega.omega(galaxy='milky_way', special_timesteps=1000, pre_calculate_SSPs=True, SSPs_in=SSPs_in)
%matplotlib nbagg
o_ori.plot_spectro(color='b', label='Original')
o_fast.plot_spectro(color='g', shape='--', label='Fast')
plt.xscale('log')
%matplotlib nbagg
o_ori.plot_mass( specie='Mg', color='b', markevery=100000, label='Original')
o_fast.plot_mass(specie='Mg', color='g', markevery=100, shape='--', label='Fast')
plt.ylabel('Mass of Mg [Msun]')
%matplotlib nbagg
o_ori.plot_spectro( xaxis='[O/H]', yaxis='[Ca/O]', color='b', \
markevery=100000, label='Original')
o_fast.plot_spectro(xaxis='[O/H]', yaxis='[Ca/O]', color='g', \
shape='--', markevery=300, label='Fast')
plt.ylim(-2.0,1.0)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Original Version
Step2: Fast Version
Step3: By using the dt_in_SSPs array, the OMEGA timesteps can be different from the SSP timesteps. If dt_in_SSPs is not provided when running OMEGA, each SSP will have the same timesteps as OMEGA.
Step4: Comparison Between the Original and Fast Versions
|
14,628 | <ASSISTANT_TASK:>
Python Code:
import random
def genEven():
'''
Returns a random even number x, where 0 <= x < 100
'''
return random.randrange(0,100,2)
genEven()
def stochasticNumber():
'''
Stochastically generates and returns a uniformly distributed even
number between 9 and 21
'''
return random.randrange(10,21,2)
stochasticNumber()
stochasticNumber()
def deterministicNumber():
'''
Deterministically generates and returns an even number
between 9 and 21
'''
random.seed(0) # Fixed seed, always the same.
return random.randrange(10,21,2)
deterministicNumber()
deterministicNumber()
def rollDie():
'''returns a random int between 1 and 6'''
#return random.choice([1,2,3,4,5,6])
return random.randint(1,6)
rollDie()
def simMCdie(numTrials):
print ("Running ", numTrials)
counters = [0] * 6 # initialize the counters for each face
for i in range(numTrials):
roll = rollDie()
counters[roll-1] += 1
return counters
import matplotlib.pyplot as plt
results = simMCdie(10000)
results
plt.bar(['1','2','3','4','5','6'], results);
def rollNdice(n):
result = ''
for i in range(n):
result = result + str(rollDie())
return result
rollNdice(5)
def getTarget(goal):
# Roll dice until we get the goal
# goal: a string of N die results, for example 5 six: "66666"
numRolls = len(goal)
numTries = 0
while True:
numTries += 1
result = rollNdice(numRolls)
# if success then exit
if result == goal:
return numTries
def runDiceMC(goal, numTrials):
print ("Running ... trials: ", numTrials)
total = 0
for i in range(numTrials):
total += getTarget(goal)
print ('Average number of tries =', total/float(numTrials))
runDiceMC('66666', 100)
(35.0 / 36.0)**24
def checkPascalMC(numTrials, roll, numRolls = 24, goal = 6):
numSuccess = 0.0
for i in range(numTrials):
for j in range(numRolls):
die1 = roll()
die2 = roll()
if die1 == goal and die2 == goal:
numSuccess += 1
break
print ('Probability of losing =', 1.0 - numSuccess / numTrials)
checkPascalMC(10000, rollDie)
def rollLoadedDie():
if random.random() < 1.0/5.5:
return 6
else:
return random.choice([1,2,3,4,5])
checkPascalMC(10000, rollLoadedDie)
def atLeastOneOne(numRolls, numTrials):
numSuccess = 0
for i in range(numTrials):
rolls = rollNdice(numRolls)
if '1' in rolls:
numSuccess += 1
fracSuccess = numSuccess/float(numTrials)
print (fracSuccess) #?!
atLeastOneOne(10, 1000)
green = 3
red = 5
yellow = 7
balls = green+red+yellow
pGreen = green / balls
pGreen
1 - pGreen
# probability of choosing a green ball from the box on the first draw.
pGreen1 = green / balls
# probability of NOT choosing a green ball on the second draw without replacement.
pGreen2 = (red + yellow) / (green -1 + red + yellow)
# Calculate the probability that the first draw is green and the second draw is not green.
pGreen1 * pGreen2
# probability of choosing a green ball from the box on the first draw.
# same as above: pGreen1
# probability of NOT choosing a green ball on the second draw with replacement
pGreen2r = (red + yellow) / balls
# Calculate the probability that the first draw is green and the second draw is not green.
pGreen1 * pGreen2r
# probability that a yellow ball is drawn from the box.
pYellow = yellow / balls
# probability of drawing a yellow ball on the sixth draw.
pYellow
p_milan_wins = 0.6
# probability that the Milan team will win the first four games of the series:
p_milan_win4 = p_milan_wins**4
# probability that the M.Utd. wins at least one game in the first four games of the series.
1 - p_milan_win4
import numpy as np
def RealWinsOneMC(numTrials, nGames=4):
numSuccess = 0
for i in range(numTrials):
simulatedGames = np.random.choice(["lose","win"], size=nGames, replace=True, p=[0.6,0.4])
if 'win' in simulatedGames:
numSuccess += 1
return numSuccess / numTrials
print (RealWinsOneMC(1000))
# Create a list that contains all possible outcomes for the remaining games.
possibilities = [(g1,g2,g3,g4,g5,g6) for g1 in range(2) for g2 in range(2)
for g3 in range(2) for g4 in range(2) for g5 in range(2)
for g6 in range(2)]
# Create a list that indicates whether each row in 'possibilities'
# contains enough wins for the Cavs to win the series.
sums = [sum(tup) for tup in possibilities]
results = [s >= 4 for s in sums]
# Calculate the proportion of 'results' in which the Cavs win the series.
np.mean(results)
def MilanWinsSeriesMC(numTrials, nGames=6):
numSuccess = 0
for i in range(numTrials):
simulatedGames = np.random.choice([0,1], size=nGames, replace=True)
if sum(simulatedGames) >= 4:
numSuccess += 1
return numSuccess / numTrials
MilanWinsSeriesMC(100)
def noReplacementSimulation(numTrials):
'''
Runs numTrials trials of a Monte Carlo simulation
of drawing 3 balls out of a bucket containing
3 red and 3 green balls. Balls are not replaced once
drawn. Returns the a decimal - the fraction of times 3
balls of the same color were drawn.
'''
sameColor = 0
for i in range(numTrials):
red = 3.0
green = 3.0
for j in range(3):
if random.random() < red / (red + green):
# this is red
red -= 1
else:
green -= 1
if red == 0 or green == 0:
sameColor += 1
return float(sameColor) / numTrials
noReplacementSimulation(100)
def oneTrial():
'''
Simulates one trial of drawing 3 balls out of a bucket containing
3 red and 3 green balls. Balls are not replaced once
drawn. Returns True if all three balls are the same color,
False otherwise.
'''
balls = ['r', 'r', 'r', 'g', 'g', 'g']
chosenBalls = []
for t in range(3):
# For three trials, pick a ball
ball = random.choice(balls)
# Remove the chosen ball from the set of balls
balls.remove(ball)
# and add it to a list of balls we picked
chosenBalls.append(ball)
# If the first ball is the same as the second AND the second is the same as the third,
# we know all three must be the same color.
if chosenBalls[0] == chosenBalls[1] and chosenBalls[1] == chosenBalls[2]:
return True
return False
oneTrial()
def noReplacementSimulationProfessor(numTrials):
'''
Runs numTrials trials of a Monte Carlo simulation
of drawing 3 balls out of a bucket containing
3 red and 3 green balls. Balls are not replaced once
drawn. Returns the a decimal - the fraction of times 3
balls of the same color were drawn.
'''
numTrue = 0
for trial in range(numTrials):
if oneTrial():
numTrue += 1
return float(numTrue)/float(numTrials)
noReplacementSimulationProfessor(100)
def sampleQuizzes():
yes = 0.0
numTrials = 10000
for trial in range(numTrials):
midTerm1Vote = random.randint(50,80)
midTerm2Vote = random.randint(60,90)
finalExamVote = random.randint(55,95)
finalVote = midTerm1Vote*0.25 + midTerm2Vote*0.25 + finalExamVote*0.5
if finalVote >= 70 and finalVote <= 75:
yes += 1
return yes/numTrials
sampleQuizzes()
def throwNeedlesInCircle(numNeedles):
'''
Throw randomly <numNeedles> needles in a 2x2 square (area=4)
that has a circle inside of radius 1 (area = PI)
Count how many of those needles at the end landed inside the circle.
Return this estimated proportion: Circle Area / Square Area
'''
inCircle = 0 # number of needles inside the circle
for needle in range(1, numNeedles + 1):
x = random.random()
y = random.random()
if (x*x + y*y)**0.5 <= 1.0:
inCircle += 1
return (inCircle/float(numNeedles))
piEstimation = throwNeedlesInCircle(100) * 4
piEstimation
def getPiEstimate(numTrials, numNeedles):
print(("{t} trials, each has {n} Needles.")\
.format(t= numTrials, n=numNeedles))
estimates = []
for t in range(numTrials):
piGuess = 4*throwNeedlesInCircle(numNeedles)
estimates.append(piGuess)
stdDev = np.std(estimates)
curEst = sum(estimates)/len(estimates)
print ('PI Estimation = ' + str(curEst))
print ('Std. dev. = ' + str(round(stdDev, 5)))
return (curEst, stdDev)
getPiEstimate(20, 100);
def estimatePi(precision, numTrials, numNeedles = 1000):
sDev = precision
while sDev >= precision/2.0:
curEst, sDev = getPiEstimate(numTrials, numNeedles)
numNeedles *= 2
print("---")
return curEst
estimatePi(0.005, 100)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Again
Step2: On the other side, deterministic means that the outcome - given the same input - will always be the same. There is no unpredictability.
Step3: And again
Step5: The same number !!
Step6: Discrete probability
Step7: Independent events
Step8: These are independent events.
Step9: Remember that the theory says it will take on average 7776 tries (a short check of this number follows this step list).
Step10: It's very close!
Step11: In the function above, I pass the function that rolls a die as a parameter, to show what happens when the die is not fair and face number six comes up with a higher probability.
Step12: A last one.
Step13: Sampling table
Step14: The population has size 15 and has therefore 15 possible samples of size 1; out of these 15 possible samples, only 3 of them will answer our question (ball is green).
Step15: Sampling without replacement - generalized
Step16: Sampling with replacement - generalized
Step17: Sampling with replacement - be careful
Step18: Yes - it doesn't matter that all previous five draws were yellow balls; the probability that the sixth ball is yellow is the same as for the first draw and every other draw. With replacement the population is always the same.
Step19: Here is the Monte Carlo simulation to confirm our answer to M.Utd. winning a game.
Step20: Winning a game - with MonteCarlo
Step21: Confirm the results of the previous question with a Monte Carlo simulation to estimate the probability of Milan winning the series.
Step22: A and B play a series
Step23: Write a function called sampleQuizzes() that implements a Monte Carlo simulation
Step24: Estimate PI
Step25: The more needles you throw, the more precise the PI estimate becomes.
Step26: We can do better and keep going - increasing the number of needles at each trial - until we reach the desired precision.
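A short worked check (added here; it is not in the original notebook) of the 7776 figure quoted in Step9: five independent dice must all show a six, so the success probability is (1/6)**5 and the expected number of tries is its reciprocal.
p_five_sixes = (1.0 / 6.0) ** 5
expected_tries = 1.0 / p_five_sixes
print (p_five_sixes)    # about 0.0001286
print (expected_tries)  # 7776.0, i.e. 6**5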
|
14,629 | <ASSISTANT_TASK:>
Python Code:
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0., depth=0.8,
verbose=True)
snr = 3.0
lambda2 = 1.0 / snr ** 2
kwargs = dict(initial_time=0.08, hemi='both', subjects_dir=subjects_dir,
size=(600, 600))
stc = abs(apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True))
brain = stc.plot(figure=1, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
stc = abs(apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True))
brain = stc.plot(figure=2, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
stc = abs(apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True))
brain = stc.plot(figure=3, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
stc = abs(apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True))
brain = stc.plot(figure=4, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
inv = make_inverse_operator(evoked.info, fwd, cov, loose=1., depth=0.8,
verbose=True)
stc = apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True)
brain = stc.plot(figure=5, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True)
brain = stc.plot(figure=6, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
stc = apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True)
brain = stc.plot(figure=7, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
stc = apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True)
brain = stc.plot(figure=8, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fixed orientation
Step2: Let's look at the current estimates using MNE. We'll take the absolute
Step3: Next let's use the default noise normalization, dSPM
Step4: And sLORETA
Step5: And finally eLORETA
Step6: Free orientation
Step7: Let's look at the current estimates using MNE. We'll take the absolute
Step8: Next let's use the default noise normalization, dSPM
Step9: sLORETA
Step10: And finally eLORETA
|
14,630 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import nsfg
preg = nsfg.ReadFemPreg()
import thinkstats2
live = preg[preg.outcome == 1]
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
cdf = thinkstats2.Cdf(live.totalwgt_lb)
import thinkplot
thinkplot.Cdf(cdf, label='totalwgt_lb')
thinkplot.Show(loc='lower right')
cdf.Prob(8.4)
other_cdf = thinkstats2.Cdf(others.totalwgt_lb)
other_cdf.Prob(8.4)
cdf.PercentileRank(8.4)
cdf.Value(0.5)
cdf.Percentile(25), cdf.Percentile(75)
cdf.Random()
cdf.Sample(10)
t = [cdf.PercentileRank(x) for x in cdf.Sample(1000)]
cdf2 = thinkstats2.Cdf(t)
thinkplot.Cdf(cdf2)
thinkplot.Show(legend=False)
import random
t = [random.random() for _ in range(1000)]
pmf = thinkstats2.Pmf(t)
thinkplot.Pmf(pmf, linewidth=0.1)
thinkplot.Show()
cdf = thinkstats2.Cdf(t)
thinkplot.Cdf(cdf)
thinkplot.Show()
import scipy.stats
scipy.stats.norm.cdf(0)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
Step2: Display the CDF.
Step3: Find out how much you weighed at birth, if you can, and compute CDF(x).
Step4: If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.
Step5: Compute the percentile rank of your birthweight
Step6: Compute the median birth weight by looking up the value associated with p=0.5.
Step7: Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
Step8: Make a random selection from <tt>cdf</tt>.
Step9: Draw a random sample from <tt>cdf</tt>.
Step10: Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
Step11: Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
Step12: Assuming that the PMF doesn't work very well, try plotting the CDF instead.
|
14,631 | <ASSISTANT_TASK:>
Python Code:
# Authors: Adonay Nunes <adonay.s.nunes@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# License: BSD (3-clause)
import os.path as op
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
from mne.preprocessing import annotate_muscle_zscore
# Load data
data_path = bst_auditory.data_path()
raw_fname = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_01.ds')
raw = read_raw_ctf(raw_fname, preload=False)
raw.crop(130, 160).load_data() # just use a fraction of data for speed here
raw.resample(300, npad="auto")
raw.notch_filter([50, 100])
# The threshold is data dependent, check the optimal threshold by plotting
# ``scores_muscle``.
threshold_muscle = 5 # z-score
# Choose one channel type, if there are axial gradiometers and magnetometers,
# select magnetometers as they are more sensitive to muscle activity.
annot_muscle, scores_muscle = annotate_muscle_zscore(
raw, ch_type="mag", threshold=threshold_muscle, min_length_good=0.2,
filter_freq=[110, 140])
fig, ax = plt.subplots()
ax.plot(raw.times, scores_muscle)
ax.axhline(y=threshold_muscle, color='r')
ax.set(xlabel='time, (s)', ylabel='zscore', title='Muscle activity')
order = np.arange(144, 164)
raw.set_annotations(annot_muscle)
raw.plot(start=5, duration=20, order=order)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notch filter the data
Step2: Plot muscle z-scores across recording
Step3: View the annotations
|
14,632 | <ASSISTANT_TASK:>
Python Code:
# Let's first define a broken function
def blah(a, b):
c = 10
return a/b - c
# call the function
# define some varables to pass to the function
aa = 5
bb = 10
print blah(aa, bb) # call the function
def blah(a, b):
c = 10
print "a: ", a
print "b: ", b
print "c: ", c
print "a/b = %d/%d = %f" %(a,b,a/b)
print "output:", a/b - c
return a/b - c
blah(aa, bb)
%pdb 0
% pdb off
% pdb on
%pdb
%pdb
import sys
a = [1,2,3]
print a
sys.exit()
b = 'hahaha'
print b
# Way to handle errors inside scripts
try:
    pass  # what we want the code to do
except:  # when the above lines generate errors, execution immediately jumps to the exception handler here, not finishing all the lines in try
    pass  # Do something else
# Some example usage of try...except:
# use default behavior if encounter IOError
try:
import astropy
except ImportError:
print("Astropy not installed...")
# Slightly more complex:
# Try, raise, except, else, finally
try:
print ('blah')
raise ValueError() # throws an error
except ValueError, Err: # only catches ValueError exceptions
print ("We caught an error! ")
else:
print ("here, if it didn't go through except...no errors are caught")
finally:
print ("literally, finally... Useful for cleaning files, or closing files.")
# If we didn't have an error...
#
try:
print ('blah')
# raise ValueError() # throws an error
except ValueError, Err: # only catches ValueError exceptions
print ("We caught an error! ")
else:
print ("here, if it didn't go through except... no errors are caught")
finally:
print ("literally, finally... Useful for cleaning files, or closing files.")
import numpy as np
mask = [True, True, False]
print np.sum(mask) # same as counting number where mask == True
debug = False
if debug:
print "..."
debug = True
if debug:
print "..."
# define a number
x = 33
# print it if it is greater than 30 but smaller than 50
if x > 30 and x < 50:
print x
# print if number not np.nan
if not np.isnan(x):
print x
# Introducing numpy.where()
import numpy as np
np.where?
# Example 1
a = [0.1, 1, 3, 10, 100]
a = np.array(a) # so we can use np.where
# one way..
conditionIdx = ((a<=10) & (a>=1))
print conditionIdx # boolean
new = a[conditionIdx]
# or directly
new_a = a[((a <= 10) & (a>=1))]
# you can also use np.where
new_a = a[np.where((a <= 10) & (a>=1))]
# Example 2 -- replacement using np.where
beam_ga = np.where(tz > 0, img[tx, ty, tz], 0)
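# (note: tz, tx, ty and img are not defined in this snippet; they stand in for index arrays
#  and an image volume that would exist in the original context of this example)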
# np.where(if condition is TRUE, then TRUE operation, else)
# Here, to mask out beam value for z<0
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As we know, 5/10 - 10 = -9.5 and not -10, so something must be wrong inside the function. In this simple example it may be obvious that we are dividing an integer by an integer and therefore get an integer back. (Division between two integers returns only the integer part of the result, throwing away the remainder; the same division operator does real division when operating on floats - very confusing, right? A quick check follows this step list.)
Step2: From this it's clear that a/b is the problem: it should be 0.5 but evaluates to 0. You can quickly fix that step, for example by using float(b), or by multiplying by 1. - the dot makes it a float.
Step3: After you've enabled pdb, type help to show available commands. Some commands are e.g. step, quit, restart.
Step4: Catching Errors
Step5: But sometimes you may want to use if... else instead of try...except.
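A quick check (added for clarity, in the same Python 2 style as the code above) of the integer-division behaviour described in Step1:
print 5 / 10              # integer division of two ints -> 0
print 5 / 10.0            # float division -> 0.5
print float(5) / 10 - 10  # -9.5, the value we expected from blah(5, 10)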
|
14,633 | <ASSISTANT_TASK:>
Python Code:
from elasticsearch import Elasticsearch
es = Elasticsearch()
create_index = {
"settings": {
"analysis": {
"analyzer": {
"payload_analyzer": {
"type": "custom",
"tokenizer":"whitespace",
"filter":"delimited_payload_filter"
}
}
}
},
"mappings": {
"ratings": {
"properties": {
"timestamp": {
"type": "date"
},
"userId": {
"type": "string",
"index": "not_analyzed"
},
"movieId": {
"type": "string",
"index": "not_analyzed"
},
"rating": {
"type": "double"
}
}
},
"users": {
"properties": {
"name": {
"type": "string"
},
"@model": {
"properties": {
"factor": {
"type": "string",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
},
"movies": {
"properties": {
"genres": {
"type": "string"
},
"original_language": {
"type": "string",
"index": "not_analyzed"
},
"image_url": {
"type": "string",
"index": "not_analyzed"
},
"release_date": {
"type": "date"
},
"popularity": {
"type": "double"
},
"@model": {
"properties": {
"factor": {
"type": "string",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
# create index with the settings & mappings above
es.indices.create(index="demo", body=create_index)
user_df = sqlContext.read.format("es").load("demo/users")
user_df.printSchema()
user_df.select("userId", "name").show(5)
movie_df = sqlContext.read.format("es").load("demo/movies")
movie_df.printSchema()
movie_df.select("movieId", "title", "genres", "release_date", "popularity").show(5)
ratings_df = sqlContext.read.format("es").load("demo/ratings")
ratings_df.printSchema()
ratings_df.show(5)
from pyspark.ml.recommendation import ALS
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating", regParam=0.1, rank=20)
model = als.fit(ratings_df)
model.userFactors.show(5)
model.itemFactors.show(5)
from pyspark.sql.types import *
from pyspark.sql.functions import udf, lit
def convert_vector(x):
'''Convert a list or numpy array to delimited token filter format'''
return " ".join(["%s|%s" % (i, v) for i, v in enumerate(x)])
def reverse_convert(s):
'''Convert a delimited token filter format string back to list format'''
return [float(f.split("|")[1]) for f in s.split(" ")]
def vector_to_struct(x, version):
'''Convert a vector to a SparkSQL Struct with string-format vector and version fields'''
return (convert_vector(x), version)
vector_struct = udf(vector_to_struct, \
StructType([StructField("factor", StringType(), True), \
StructField("version", StringType(), True)]))
# test out the vector conversion function
test_vec = model.userFactors.select("features").first().features
print test_vec
print
print convert_vector(test_vec)
ver = model.uid
movie_vectors = model.itemFactors.select("id", vector_struct("features", lit(ver)).alias("@model"))
movie_vectors.select("id", "@model.factor", "@model.version").show(5)
user_vectors = model.userFactors.select("id", vector_struct("features", lit(ver)).alias("@model"))
user_vectors.select("id", "@model.factor", "@model.version").show(5)
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "update" write mode for ES
# - "append" write mode for Spark
movie_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/movies", mode="append")
user_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/users", mode="append")
es.search(index="demo", doc_type="movies", q="star wars force", size=1)
from IPython.display import Image, HTML, display
def fn_query(query_vec, q="*", cosine=False):
return {
"query": {
"function_score": {
"query" : {
"query_string": {
"query": q
}
},
"script_score": {
"script": "payload_vector_score",
"lang": "native",
"params": {
"field": "@model.factor",
"vector": query_vec,
"cosine" : cosine
}
},
"boost_mode": "replace"
}
}
}
def get_similar(the_id, q="*", num=10, index="demo", dt="movies"):
response = es.get(index=index, doc_type=dt, id=the_id)
src = response['_source']
if '@model' in src and 'factor' in src['@model']:
raw_vec = src['@model']['factor']
# our script actually uses the list form for the query vector and handles conversion internally
query_vec = reverse_convert(raw_vec)
q = fn_query(query_vec, q=q, cosine=True)
results = es.search(index, dt, body=q)
hits = results['hits']['hits']
return src, hits[1:num+1]
def display_similar(the_id, q="*", num=10, index="demo", dt="movies"):
movie, recs = get_similar(the_id, q, num, index, dt)
# display query
q_im_url = movie['image_url']
display(HTML("<h2>Get similar movies for:</h2>"))
display(Image(q_im_url, width=200))
display(HTML("<br>"))
display(HTML("<h2>Similar movies:</h2>"))
sim_html = "<table border=0><tr>"
i = 0
for rec in recs:
r_im_url = rec['_source']['image_url']
r_score = rec['_score']
sim_html += "<td><img src=%s width=200></img></td><td>%2.3f</td>" % (r_im_url, r_score)
i += 1
if i % 5 == 0:
sim_html += "</tr><tr>"
sim_html += "</tr></table>"
display(HTML(sim_html))
display_similar(122886, num=5)
display_similar(122886, num=5, q="title:(NOT trek)")
display_similar(6377, num=5, q="genres:children AND release_date:[now-2y/y TO now]")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load User, Movie and Ratings DataFrames from Elasticsearch
Step2: 2. Run ALS
Step3: 3. Write ALS user and item factors to Elasticsearch
Step4: Convert factor vectors to [factor, version] form and write to Elasticsearch
Step5: Check the data was written correctly
Step6: 4. Recommend using Elasticsearch!
|
14,634 | <ASSISTANT_TASK:>
Python Code:
from urllib import request
import zlib
import pandas
from bs4 import BeautifulSoup #para processar o HTML
import re #para processar o html
lista_datas = []
lista_sessoes = []
bytesTransferidos = 0
i = 0
for ano in range(1976,2016):
for mes in range(1,13):
print("Processando %04d/%02d - bytes transferidos = %d..."%(ano,mes,bytesTransferidos))
for dia in range(1,32): #para cada dia possรญvel, tenta transferiro ficheiro
url = "http://demo.cratica.org/sessoes/%04d/%02d/%02d/"%(ano,mes,dia)
#url = "http://localhost:7888/radio-parlamento/backup170221/sessoes/%04d/%02d/%02d/"%(ano,mes,dia)
#transfere a pagina usando urllib
r = request.Request(url)
try:
with request.urlopen(r) as f: #transfere o site
dados = f.read()
bytesTransferidos = bytesTransferidos + len(dados) #contabiliza quanto trafego estamos a usar
# os dados em HTML tรชm mais informaรงao do que queremos. vamos seleccionar apenas o elemento 'entre-content', que tem o texto do parlamento
dados = ''.join([str(x).strip() for x in BeautifulSoup(dados,'lxml').find('div', class_="entry-content").find_all()])
# vamos retirar as tags de paragrafos e corrigir as translineaรงรตes
dados = dados.replace('-</p><p>','').replace('</p>',' ').replace('<p>',' ')
# vamos retirar tags que faltem, pois nao interessam para a nossa anรกlise (usando uma expressao regular, e assumindo que o codigo รฉ html vรกlido)
dados = re.sub('<[^>]*>', '', dados)
# texto para lowercase, para as procuras de palavras funcionarem independentemente da capitalizaรงรฃo
dados = dados.lower()
if(len(dados) > 100): #se o texto nao for inexistente ou demasiado pequeno, adiciona รก lista de sessรตes (hรก pรกginas que existem, mas nรฃo tรชm texto)
lista_datas.append("%04d-%02d-%02d"%(ano,mes,dia))
lista_sessoes.append(dados)
except request.URLError: #ficheiro nao existe, passa ao prรณximo
pass
print('%d bytes transferidos, %d sessoes transferidas, entre %s e %s'%(bytesTransferidos,len(lista_sessoes), min(lista_datas).format(), max(lista_datas).format()))
dados = {'data': pandas.DatetimeIndex(lista_datas), 'sessao': lista_sessoes }
sessoes = pandas.DataFrame(dados, columns={'data','sessao'})
%matplotlib inline
import pylab
import matplotlib
#aplica a funcao len/length a cada elemento da series de dados,
# criando uma columna com o numero de bytes de cada sessao
sessoes['tamanho'] = sessoes['sessao'].map(len)
#representa num grafico
ax = sessoes.plot(x='data',y='tamanho',figsize=(15,6),linewidth=0.1,marker='.',markersize=0.5)
ax.set_xlabel('Data da sessรฃo')
ax.set_ylabel('Tamanho (bytes)')
sessoes.to_csv('sessoes_democratica_org.csv')
import os
print(str(os.path.getsize('sessoes_democratica_org.csv')) + ' bytes')
print(os.getcwd()+'/sessoes_democratica_org.csv')
%matplotlib inline
import pylab
import matplotlib
import pandas
import numpy
dateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')
sessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)
sessoes
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have the data in a dataframe we can immediately take advantage of it. For example, plotting the size of the sessions in bytes over time is straightforward
Step2: The average session is more or less constant, on the order of 200 thousand bytes, except for some sessions that periodically have more content (up to >4x more characters).
Step3: Checking the file name and size
Step4: Code to load the data
Step5: Let us just check that the data is still readable. One important point is whether the accents and cedillas are interpreted correctly, since saving and opening files can confuse python (a small round-trip check follows this step list).
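A minimal round-trip sketch (an addition; the file and variable names are made up) for the accent check mentioned in Step5:
import pandas as pd
teste = pd.DataFrame({'sessao': ['discussão do orçamento']})
teste.to_csv('teste_acentos.csv', encoding='utf-8')
lido = pd.read_csv('teste_acentos.csv', index_col=0, encoding='utf-8')
print(lido['sessao'][0])  # should print the accented text unchanged
print(lido['sessao'][0] == 'discussão do orçamento')  # True if the round trip preserved accents and cedillas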
|
14,635 | <ASSISTANT_TASK:>
Python Code:
from hypothesis import find
import dit
from dit.abc import *
from dit.pid import *
from dit.utils.testing import distribution_structures
dit.ditParams['repr.print'] = dit.ditParams['print.exact'] = True
a = distribution_structures(size=3, alphabet=2)
a.example()
def pred(value):
return lambda d: dit.multivariate.coinformation(d) < value
ce = find(distribution_structures(3, 2), pred(-1e-5))
print(ce)
print("The coinformation is: {}".format(dit.multivariate.coinformation(ce)))
ce = find(distribution_structures(3, 2), pred(-0.5))
print(ce)
print("The coinformation is: {}".format(dit.multivariate.coinformation(ce)))
def b_lt_k(d):
k = dit.multivariate.gk_common_information(d)
b = dit.multivariate.dual_total_correlation(d)
return k > b
find(distribution_structures(size=3, alphabet=3, uniform=True), b_lt_k)
ce = find(distribution_structures(3, 2, True), lambda d: PID_BROJA(d) != PID_Proj(d))
ce
print(PID_BROJA(ce))
print(PID_Proj(ce))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To illustrate what the distribution source looks like, here we instantiate it with a size of 3 and an alphabet of 2
Step2: Negativity of co-information
Step3: The Gács-Körner common information is bounded from above by the dual total correlation
Step4: BROJA is not Proj
|
14,636 | <ASSISTANT_TASK:>
Python Code:
from netpyne import specs, sim
netParams = specs.NetParams()
simConfig = specs.SimConfig()
netParams.cellParams['pyr'] = {}
netParams.cellParams['pyr']['secs'] = {}
netParams.cellParams['pyr']['secs']['soma'] = {}
netParams.cellParams['pyr']['secs']['soma']['geom'] = {
"diam": 12,
"L": 12,
"Ra": 100.0,
"cm": 1
}
netParams.cellParams['pyr']['secs']['soma']['mechs'] = {"hh": {
"gnabar": 0.12,
"gkbar": 0.036,
"gl": 0.0003,
"el": -54.3
}}
dend = {}
dend['geom'] = {"diam": 1.0,
"L": 200.0,
"Ra": 100.0,
"cm": 1,
}
dend['mechs'] = {"pas":
{"g": 0.001,
"e": -70}
}
dend['topol'] = {"parentSec": "soma",
"parentX": 1.0,
"childX": 0,
}
netParams.cellParams['pyr']['secs']['dend'] = dend
netParams.popParams['E'] = {
"cellType": "pyr",
"numCells": 40,
}
netParams.synMechParams['exc'] = {
"mod": "Exp2Syn",
"tau1": 0.1,
"tau2": 1.0,
"e": 0
}
netParams.connParams['E->E'] = {
"preConds": {"pop": "E"},
"postConds": {"pop": "E"},
"weight": 0.005,
"probability": 0.1,
"delay": 5.0,
"synMech": "exc",
"sec": "dend",
"loc": 1.0,
}
simConfig.filename = "netpyne_tut1"
simConfig.duration = 200.0
simConfig.dt = 0.1
simConfig.recordCells = [0]
simConfig.recordTraces = {
"V_soma": {
"sec": "soma",
"loc": 0.5,
"var": "v",
},
"V_dend": {
"sec": "dend",
"loc": 1.0,
"var": "v",
}
}
simConfig.analysis = {
"plotTraces": {
"include": [0],
"saveFig": True,
},
"plotRaster": {
"saveFig": True,
}
}
%matplotlib inline
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
fig, figData = sim.analysis.plotTraces(overlay=True)
fig, figData = sim.analysis.plot2Dnet()
fig, figData = sim.analysis.plotConn()
fig, figData = sim.analysis.plotConn(feature='weight', groupBy='cell')
netParams.stimSourceParams['IClamp1'] = {
"type": "IClamp",
"dur": 5,
"del": 20,
"amp": 0.1,
}
netParams.stimTargetParams['IClamp1->cell0'] = {
"source": "IClamp1",
"conds": {"cellList": [0]},
"sec": "dend",
"loc": 1.0,
}
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
fig, figData = sim.analysis.plotTraces(overlay=True)
fig, figData = sim.analysis.plotRaster(marker='o', markerSize=50)
fig, figData = sim.analysis.plotConn()
simConfig.recordTraces = {}
simConfig.analysis['plotRaster'] = False
simConfig.recordTraces['gNa'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gna'}
simConfig.recordTraces['gK'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gk'}
simConfig.recordTraces['gL'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gl'}
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
fig, figData = sim.analysis.plotTraces(timeRange=[90, 110], overlay=True)
simConfig.recordTraces = {}
simConfig.recordTraces['iSyn0'] = {'sec': 'dend', 'loc': 1.0, 'synMech': 'exc', 'var': 'i'}
simConfig.recordTraces['V_dend'] = {'sec': 'dend', 'loc': 1.0, 'var': 'v'}
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
sim.net.allCells[0].keys()
sim.net.allCells[0]['conns']
simConfig.recordTraces = {}
simConfig.recordTraces['V_soma'] = {'sec': 'soma', 'loc': 0.5, 'var': 'v'}
simConfig.recordTraces['V_dend'] = {'sec': 'dend', 'loc': 1.0, 'var': 'v'}
syn_plots = {}
for index, presyn in enumerate(sim.net.allCells[0]['conns']):
trace_name = 'i_syn_' + str(presyn['preGid'])
syn_plots[trace_name] = None
simConfig.recordTraces[trace_name] = {'sec': 'dend', 'loc': 1.0, 'synMech': 'exc', 'var': 'i', 'index': index}
print(simConfig.recordTraces)
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
sim.allSimData.keys()
sim.allSimData.V_soma.keys()
time = sim.allSimData['t']
v_soma = sim.allSimData['V_soma']['cell_0']
v_dend = sim.allSimData['V_dend']['cell_0']
for syn_plot in syn_plots:
syn_plots[syn_plot] = sim.allSimData[syn_plot]['cell_0']
import matplotlib.pyplot as plt
fig = plt.figure()
plt.subplot(211)
plt.plot(time, v_soma, label='v_soma')
plt.plot(time, v_dend, label='v_dend')
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential (mV)')
plt.subplot(212)
for syn_plot in syn_plots:
plt.plot(time, syn_plots[syn_plot], label=syn_plot)
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Synaptic current (nA)')
plt.savefig('syn_currents.jpg', dpi=600)
%reset
from netpyne import specs, sim
netParams = specs.NetParams()
simConfig = specs.SimConfig()
# Create a cell type
# ------------------
netParams.cellParams['pyr'] = {}
netParams.cellParams['pyr']['secs'] = {}
# Add a soma section
netParams.cellParams['pyr']['secs']['soma'] = {}
netParams.cellParams['pyr']['secs']['soma']['geom'] = {
"diam": 12,
"L": 12,
"Ra": 100.0,
"cm": 1
}
# Add hh mechanism to soma
netParams.cellParams['pyr']['secs']['soma']['mechs'] = {"hh": {
"gnabar": 0.12,
"gkbar": 0.036,
"gl": 0.0003,
"el": -54.3
}}
# Add a dendrite section
dend = {}
dend['geom'] = {"diam": 1.0,
"L": 200.0,
"Ra": 100.0,
"cm": 1,
}
# Add pas mechanism to dendrite
dend['mechs'] = {"pas":
{"g": 0.001,
"e": -70}
}
# Connect the dendrite to the soma
dend['topol'] = {"parentSec": "soma",
"parentX": 1.0,
"childX": 0,
}
# Add the dend dictionary to the cell parameters dictionary
netParams.cellParams['pyr']['secs']['dend'] = dend
# Create a population of these cells
# ----------------------------------
netParams.popParams['E'] = {
"cellType": "pyr",
"numCells": 40,
}
# Add Exp2Syn synaptic mechanism
# ------------------------------
netParams.synMechParams['exc'] = {
"mod": "Exp2Syn",
"tau1": 0.1,
"tau2": 1.0,
"e": 0
}
# Define the connectivity
# -----------------------
netParams.connParams['E->E'] = {
"preConds": {"pop": "E"},
"postConds": {"pop": "E"},
"weight": 0.005,
"probability": 0.1,
"delay": 5.0,
"synMech": "exc",
"sec": "dend",
"loc": 1.0,
}
# Add a stimulation
# -----------------
netParams.stimSourceParams['IClamp1'] = {
"type": "IClamp",
"dur": 5,
"del": 20,
"amp": 0.1,
}
# Connect the stimulation
# -----------------------
netParams.stimTargetParams['IClamp1->cell0'] = {
"source": "IClamp1",
"conds": {"cellList": [0]},
"sec": "dend",
"loc": 1.0,
}
# Set up the simulation configuration
# -----------------------------------
simConfig.filename = "netpyne_tut1"
simConfig.duration = 200.0
simConfig.dt = 0.1
# Record from cell 0
simConfig.recordCells = [0]
# Record the voltage at the soma and the dendrite
simConfig.recordTraces = {
"V_soma": {
"sec": "soma",
"loc": 0.5,
"var": "v",
},
"V_dend": {
"sec": "dend",
"loc": 1.0,
"var": "v",
}
}
# Record somatic conductances
#simConfig.recordTraces['gNa'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gna'}
#simConfig.recordTraces['gK'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gk'}
#simConfig.recordTraces['gL'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gl'}
# Automatically generate some figures
simConfig.analysis = {
"plotTraces": {
"include": [0],
"saveFig": True,
"overlay": True,
},
"plotRaster": {
"saveFig": True,
"marker": "o",
"markerSize": 50,
},
"plotConn": {
"saveFig": True,
"feature": "weight",
"groupby": "cell",
"markerSize": 50,
},
"plot2Dnet": {
"saveFig": True,
},
}
# Create, simulate, and analyze the model
# ---------------------------------------
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
# Set up the recording for the synaptic current plots
syn_plots = {}
for index, presyn in enumerate(sim.net.allCells[0]['conns']):
trace_name = 'i_syn_' + str(presyn['preGid'])
syn_plots[trace_name] = None
simConfig.recordTraces[trace_name] = {'sec': 'dend', 'loc': 1.0, 'synMech': 'exc', 'var': 'i', 'index': index}
# Create, simulate, and analyze the model
# ---------------------------------------
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
# Extract the data
# ----------------
time = sim.allSimData['t']
v_soma = sim.allSimData['V_soma']['cell_0']
v_dend = sim.allSimData['V_dend']['cell_0']
for syn_plot in syn_plots:
syn_plots[syn_plot] = sim.allSimData[syn_plot]['cell_0']
# Plot our custom figure
# ----------------------
import matplotlib.pyplot as plt
fig = plt.figure()
plt.subplot(211)
plt.plot(time, v_soma, label='v_soma')
plt.plot(time, v_dend, label='v_dend')
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential (mV)')
plt.subplot(212)
for syn_plot in syn_plots:
plt.plot(time, syn_plots[syn_plot], label=syn_plot)
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Synaptic current (nA)')
plt.savefig('syn_currents.jpg', dpi=600)
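# Optional quick check (a sketch using fields recorded above): estimate the
# mean population firing rate from the spike times in sim.allSimData['spkt'];
# spike times and simConfig.duration are both in ms.
spkt = sim.allSimData['spkt']
num_cells = len(sim.net.allCells)
mean_rate_hz = len(spkt) / float(num_cells) / (simConfig.duration / 1000.0)
print('Mean firing rate: %.2f Hz' % mean_rate_hz)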
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: These NetPyNE objects come with a lot of defaults set which you can explore with tab completion, but we'll focus on that more later.
Step2: Specify the soma compartment properties
Step3: The hh mechanism is builtin to NEURON, but you can see its .mod file here
Step4: The pas mechanim is a simple leakage channel and is builtin to NEURON. Its .mod file is available here
Step5: With our dend section dictionary complete, we must now add it to the pyr cell dictionary.
Step6: Our two-compartment cell model is now completely specified. Our next step is to create a population of these cells.
Step7: Create a synaptic model
Step8: Connect the cells
Step9: Set up the simulation configuration
Step10: We will record from from the first cell (0) and we will record the voltage in the middle of the soma and the end of the dendrite.
Step11: Finally we will set up some plots to be automatically generated and saved.
Step12: To see plots in the notebook, we first have to enter the following command.
Step13: Create, simulate, and analyze the model
Step14: We can see that there was no spiking in the network, and thus the spike raster was not plotted. But there should be one new file in your directory
Step15: Plot the 2D connectivity of the network
Step16: Plot the connectivity matrix
Step17: Not very interesting with just one population, but we can also look at the cellular level connectivity.
Step18: Add a stimulation
Step19: Now we need to add a target for our stimulation. We do that by adding a dictionary to the Stimulation Target Parameters dictionary (stimTargetParams). We'll call this connectivity rule IClamp1->cell0, because it will go from the source we just created (IClamp1) and the first cell in our population. The stimulation (current injection in this case) will occur in our dend section at the very tip (location of 1.0).
Step20: Create, simulate, and analyze the model
Step21: Now we see spiking in the network, and the raster plot appears. Let's improve the plots a little bit.
Step22: You can see all of the options available in plotTraces here
Step23: Record and plot a variety of traces
Step24: Record and plot the somatic conductances
Step25: Then we can re-run the simulation.
Step26: Let's zoom in on one spike and overylay the traces.
Step27: Record from synapses
Step28: That's the first synapse created in that location, but there are likely multiple synapses. Let's plot all the synaptic currents entering cell 0. First we need to see what they are. The network is defined in sim.net. Type in sim.net. and then push Tab to see what's available.
Step29: The connections coming onto the cell are in conns.
Step30: So we want to record six synaptic currents. Lets do that in a for loop at the same time creating a dictionary to hold the synaptic trace names as keys (and later the trace arrays as values).
Step31: Extracting recorded data
Step32: spkt is an array of the times of all spikes in the network. spkid is an array of the universal index (GID) of the cell spiking. t is an array of the time for traces. Our traces appear as we named them, and each is a dictionary with its key being cell_GID and its value being the array of the trace.
Step33: So let's extract our data.
Step34: And now we can make our custom plot.
Step35: Cleaning up our figure (reducing font size, etc.) will be left as an exercise. See the matplotlib users guide here
Step36: This tutorial in a single Python file
|
14,637 | <ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import math
import time
import torch
import random
import numpy as np
import pandas as pd
from datasets import load_dataset
from torch.utils.data import DataLoader
from tokenizers import ByteLevelBPETokenizer
%watermark -a 'Ethen' -d -t -v -p datasets,numpy,torch,tokenizers,transformers
import tarfile
import zipfile
import requests
import subprocess
from tqdm import tqdm
from urllib.parse import urlparse
def download_file(url: str, directory: str):
Download the file at ``url`` to ``directory``.
Extract to the file content ``directory`` if the original file
is a tar, tar.gz or zip file.
Parameters
----------
url : str
url of the file.
directory : str
Directory to download the file.
response = requests.get(url, stream=True)
response.raise_for_status()
content_len = response.headers.get('Content-Length')
total = int(content_len) if content_len is not None else 0
os.makedirs(directory, exist_ok=True)
file_name = get_file_name_from_url(url)
file_path = os.path.join(directory, file_name)
with tqdm(unit='B', total=total) as pbar, open(file_path, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024):
if chunk:
pbar.update(len(chunk))
f.write(chunk)
extract_compressed_file(file_path, directory)
def extract_compressed_file(compressed_file_path: str, directory: str):
Extract a compressed file to ``directory``. Supports zip, tar.gz, tgz,
tar extensions.
Parameters
----------
compressed_file_path : str
directory : str
File will to extracted to this directory.
basename = os.path.basename(compressed_file_path)
if 'zip' in basename:
with zipfile.ZipFile(compressed_file_path, "r") as zip_f:
zip_f.extractall(directory)
elif 'tar.gz' in basename or 'tgz' in basename:
with tarfile.open(compressed_file_path) as f:
f.extractall(directory)
def get_file_name_from_url(url: str) -> str:
Return the file_name from a URL
Parameters
----------
url : str
URL to extract file_name from
Returns
-------
file_name : str
parse = urlparse(url)
return os.path.basename(parse.path)
urls = [
'http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz',
'http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/validation.tar.gz',
'http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/mmt16_task1_test.tar.gz'
]
directory = 'multi30k'
for url in urls:
download_file(url, directory)
!ls multi30k
!head multi30k/train.de
!head multi30k/train.en
def create_translation_data(
source_input_path: str,
target_input_path: str,
output_path: str,
delimiter: str = '\t',
encoding: str = 'utf-8'
):
Creates the paired source and target dataset from the separated ones.
e.g. creates `train.tsv` from `train.de` and `train.en`
with open(source_input_path, encoding=encoding) as f_source_in, \
open(target_input_path, encoding=encoding) as f_target_in, \
open(output_path, 'w', encoding=encoding) as f_out:
for source_raw in f_source_in:
source_raw = source_raw.strip()
target_raw = f_target_in.readline().strip()
if source_raw and target_raw:
output_line = source_raw + delimiter + target_raw + '\n'
f_out.write(output_line)
source_lang = 'de'
target_lang = 'en'
data_files = {}
for split in ['train', 'val', 'test']:
source_input_path = os.path.join(directory, f'{split}.{source_lang}')
target_input_path = os.path.join(directory, f'{split}.{target_lang}')
output_path = f'{split}.tsv'
create_translation_data(source_input_path, target_input_path, output_path)
data_files[split] = [output_path]
data_files
dataset_dict = load_dataset(
'csv',
delimiter='\t',
column_names=[source_lang, target_lang],
data_files=data_files
)
dataset_dict
dataset_dict['train'][0]
from transformers import AutoTokenizer
model_checkpoint = "Helsinki-NLP/opus-mt-de-en"
pretrained_tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
pretrained_tokenizer
pretrained_tokenizer(dataset_dict['train']['de'][0])
# notice the last token id is 0, the end of sentence special token
pretrained_tokenizer.convert_ids_to_tokens(0)
pretrained_tokenizer(dataset_dict['train']['de'][0:2])
max_source_length = 128
max_target_length = 128
source_lang = "de"
target_lang = "en"
def batch_tokenize_fn(examples):
Generate the input_ids and labels field for huggingface dataset/dataset dict.
Truncation is enabled, so we cap the sentence to the max length, padding will be done later
in a data collator, so pad examples to the longest length in the batch and not the whole dataset.
sources = examples[source_lang]
targets = examples[target_lang]
model_inputs = pretrained_tokenizer(sources, max_length=max_source_length, truncation=True)
# setup the tokenizer for targets,
# huggingface expects the target tokenized ids to be stored in the labels field
with pretrained_tokenizer.as_target_tokenizer():
labels = pretrained_tokenizer(targets, max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_dataset_dict = dataset_dict.map(batch_tokenize_fn, batched=True, num_proc=8)
tokenized_dataset_dict
# printing out the tokenized data, to check for the newly added fields
tokenized_dataset_dict['train'][0]
from transformers import AutoModelForSeq2SeqLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
pretrained_model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint).to(device)
print('# of parameters: ', pretrained_model.num_parameters())
pretrained_model
def generate_translation(model, tokenizer, example):
print out the source, target and predicted raw text.
source = example[source_lang]
target = example[target_lang]
input_ids = example['input_ids']
input_ids = torch.LongTensor(input_ids).view(1, -1).to(model.device)
generated_ids = model.generate(input_ids)
prediction = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print('source: ', source)
print('target: ', target)
print('prediction: ', prediction)
example = tokenized_dataset_dict['train'][0]
generate_translation(pretrained_model, pretrained_tokenizer, example)
example = tokenized_dataset_dict['test'][0]
generate_translation(pretrained_model, pretrained_tokenizer, example)
from transformers import (
AutoConfig,
AutoModelForSeq2SeqLM,
DataCollatorForSeq2Seq,
EarlyStoppingCallback,
Seq2SeqTrainingArguments,
Seq2SeqTrainer
)
config_params = {
'd_model': 256,
'decoder_layers': 3,
'decoder_attention_heads': 8,
'decoder_ffn_dim': 512,
'encoder_layers': 6,
'encoder_attention_heads': 8,
'encoder_ffn_dim': 512,
'max_length': 128,
'max_position_embeddings': 128
}
model_checkpoint = "Helsinki-NLP/opus-mt-de-en"
config = AutoConfig.from_pretrained(model_checkpoint, **config_params)
config
model = AutoModelForSeq2SeqLM.from_config(config)
print('# of parameters: ', model.num_parameters())
model
batch_size = 128
args = Seq2SeqTrainingArguments(
output_dir="test-translation",
evaluation_strategy="epoch",
learning_rate=0.0005,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=20,
load_best_model_at_end=True,
predict_with_generate=True,
remove_unused_columns=True,
fp16=True
)
data_collator = DataCollatorForSeq2Seq(pretrained_tokenizer)
callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]
trainer = Seq2SeqTrainer(
model,
args,
data_collator=data_collator,
train_dataset=tokenized_dataset_dict["train"],
eval_dataset=tokenized_dataset_dict["val"],
callbacks=callbacks
)
dataloader_train = trainer.get_train_dataloader()
batch = next(iter(dataloader_train))
batch
trainer_output = trainer.train()
trainer_output
def generate_translation(model, tokenizer, example):
print out the source, target and predicted raw text.
source = example[source_lang]
target = example[target_lang]
input_ids = tokenizer(source)['input_ids']
input_ids = torch.LongTensor(input_ids).view(1, -1).to(model.device)
generated_ids = model.generate(input_ids)
prediction = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print('source: ', source)
print('target: ', target)
print('prediction: ', prediction)
example = dataset_dict['train'][0]
generate_translation(model, pretrained_tokenizer, example)
example = dataset_dict['test'][0]
generate_translation(model, pretrained_tokenizer, example)
# use only the training set to train our tokenizer
split = 'train'
source_input_path = os.path.join(directory, f'{split}.{source_lang}')
target_input_path = os.path.join(directory, f'{split}.{target_lang}')
print(source_input_path, target_input_path)
bos_token = '<s>'
unk_token = '<unk>'
eos_token = '</s>'
pad_token = '<pad>'
special_tokens = [unk_token, bos_token, eos_token, pad_token]
tokenizer_params = {
'min_frequency': 2,
'vocab_size': 5000,
'show_progress': False,
'special_tokens': special_tokens
}
start_time = time.time()
source_tokenizer = ByteLevelBPETokenizer(lowercase=True)
source_tokenizer.train(source_input_path, **tokenizer_params)
target_tokenizer = ByteLevelBPETokenizer(lowercase=True)
target_tokenizer.train(target_input_path, **tokenizer_params)
end_time = time.time()
print('elapsed: ', end_time - start_time)
print('source vocab size: ', source_tokenizer.get_vocab_size())
print('target vocab size: ', target_tokenizer.get_vocab_size())
pad_token_id = source_tokenizer.token_to_id(pad_token)
eos_token_id = source_tokenizer.token_to_id(eos_token)
def batch_encode_fn(examples):
sources = examples[source_lang]
targets = examples[target_lang]
input_ids = [encoding.ids + [eos_token_id] for encoding in source_tokenizer.encode_batch(sources)]
labels = [encoding.ids + [eos_token_id] for encoding in target_tokenizer.encode_batch(targets)]
examples['input_ids'] = input_ids
examples['labels'] = labels
return examples
dataset_dict_encoded = dataset_dict.map(batch_encode_fn, batched=True, num_proc=8)
dataset_dict_encoded
dataset_train = dataset_dict_encoded['train']
dataset_train[0]
class Seq2SeqDataCollator:
def __init__(
self,
max_length: int,
pad_token_id: int,
pad_label_token_id: int = -100
):
self.max_length = max_length
self.pad_token_id = pad_token_id
self.pad_label_token_id = pad_label_token_id
def __call__(self, batch):
source_batch = []
source_len = []
target_batch = []
target_len = []
for example in batch:
source = example['input_ids']
source_len.append(len(source))
source_batch.append(source)
target = example['labels']
target_len.append(len(target))
target_batch.append(target)
source_padded = self.process_encoded_text(source_batch, source_len, self.pad_token_id)
target_padded = self.process_encoded_text(target_batch, target_len, self.pad_label_token_id)
attention_mask = generate_attention_mask(source_padded, self.pad_token_id)
return {
'input_ids': source_padded,
'labels': target_padded,
'attention_mask': attention_mask
}
def process_encoded_text(self, sequences, sequences_len, pad_token_id):
sequences_max_len = np.max(sequences_len)
max_length = min(sequences_max_len, self.max_length)
padded_sequences = pad_sequences(sequences, max_length, pad_token_id)
return torch.LongTensor(padded_sequences)
def generate_attention_mask(input_ids, pad_token_id):
return (input_ids != pad_token_id).long()
def pad_sequences(sequences, max_length, pad_token_id):
Pad the list of sequences (numerical token ids) to the same length.
Sequence that are shorter than the specified ``max_len`` will be appended
with the specified ``pad_token_id``. Those that are longer will be truncated.
Parameters
----------
sequences : list[int]
List of numerical token ids.
max_length : int
Maximum length that all sequences will be truncated/padded to.
pad_token_id : int
Padding token index.
Returns
-------
padded_sequences : 1d ndarray
num_samples = len(sequences)
padded_sequences = np.full((num_samples, max_length), pad_token_id)
for i, sequence in enumerate(sequences):
sequence = np.array(sequence)[:max_length]
padded_sequences[i, :len(sequence)] = sequence
return padded_sequences
config_params = {
'd_model': 256,
'decoder_layers': 3,
'decoder_attention_heads': 8,
'decoder_ffn_dim': 512,
'encoder_layers': 6,
'encoder_attention_heads': 8,
'encoder_ffn_dim': 512,
'max_length': 128,
'max_position_embeddings': 128,
'eos_token_id': eos_token_id,
'pad_token_id': pad_token_id,
'decoder_start_token_id': pad_token_id,
"bad_words_ids": [
[
pad_token_id
]
],
'vocab_size': source_tokenizer.get_vocab_size()
}
model_config = AutoConfig.from_pretrained(model_checkpoint, **config_params)
model_config
transformers_model = AutoModelForSeq2SeqLM.from_config(model_config)
print('# of parameters: ', transformers_model.num_parameters())
transformers_model
batch_size = 128
args = Seq2SeqTrainingArguments(
output_dir="test-translation",
evaluation_strategy="epoch",
learning_rate=0.0005,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=20,
load_best_model_at_end=True,
predict_with_generate=True,
remove_unused_columns=True,
fp16=True
)
data_collator = Seq2SeqDataCollator(model_config.max_length, pad_token_id)
callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]
trainer = Seq2SeqTrainer(
transformers_model,
args,
train_dataset=dataset_dict_encoded["train"],
eval_dataset=dataset_dict_encoded["val"],
data_collator=data_collator,
callbacks=callbacks
)
dataloader_train = trainer.get_train_dataloader()
next(iter(dataloader_train))
trainer_output = trainer.train()
trainer_output
def generate_translation(model, source_tokenizer, target_tokenizer, example):
source = example[source_lang]
target = example[target_lang]
input_ids = source_tokenizer.encode(source).ids
input_ids = torch.LongTensor(input_ids).view(1, -1).to(model.device)
generated_ids = model.generate(input_ids)
generated_ids = generated_ids[0].detach().cpu().numpy()
prediction = target_tokenizer.decode(generated_ids)
print('source: ', source)
print('target: ', target)
print('prediction: ', prediction)
example = dataset_dict['train'][0]
generate_translation(transformers_model, source_tokenizer, target_tokenizer, example)
example = dataset_dict['test'][0]
generate_translation(transformers_model, source_tokenizer, target_tokenizer, example)
model_checkpoint = 'transformers_model'
transformers_model.save_pretrained(model_checkpoint)
transformers_model_loaded = transformers_model.from_pretrained(model_checkpoint).to(device)
example = dataset_dict['test'][0]
generate_translation(transformers_model_loaded, source_tokenizer, target_tokenizer, example)
len(dataset_dict_encoded['test'])
# we use a different data collator then the one we used for training and evaluating model
# replace -100 in the labels with other special tokens during inferencing
# as we can't decode them.
data_collator = Seq2SeqDataCollator(model_config.max_length, pad_token_id, pad_token_id)
data_loader = DataLoader(dataset_dict_encoded['test'], collate_fn=data_collator, batch_size=64)
data_loader
start = time.time()
rows = []
for example in data_loader:
input_ids = example['input_ids']
generated_ids = transformers_model.generate(input_ids.to(transformers_model.device))
generated_ids = generated_ids.detach().cpu().numpy()
predictions = target_tokenizer.decode_batch(generated_ids)
labels = example['labels'].detach().cpu().numpy()
targets = target_tokenizer.decode_batch(labels)
sources = source_tokenizer.decode_batch(input_ids.detach().cpu().numpy())
for source, target, prediction in zip(sources, targets, predictions):
row = [source, target, prediction]
rows.append(row)
end = time.time()
print('elapsed: ', end - start)
df_rows = pd.DataFrame(rows, columns=['source', 'target', 'prediction'])
df_rows
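# Optional follow-up (sketch): persist the predictions and report a crude
# exact-match rate; both are plain pandas operations on df_rows.
df_rows.to_csv('test_predictions.tsv', sep='\t', index=False)
exact_match_rate = (df_rows['target'] == df_rows['prediction']).mean()
print('exact match rate:', exact_match_rate)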
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Machine Translation with Huggingface Transformer
Step5: We print out the content in the data directory and some sample data.
Step7: The original dataset is splits the source and the target language into two separate files (e.g. train.de, train.en are the training dataset for German and English). This type of format is useful when we wish to train a tokenizer on top of the source or target language as we'll soon see.
Step8: We can acsess the split, and each record/pair with the following syntax.
Step9: Pretrained Model
Step10: We can pass a single record, or a list of records to huggingface's tokenizer. Then depending on the model, we might see different keys in the dictionary returned. For example, here, we have
Step12: We can apply the tokenizers to our entire raw dataset, so this preprocessing will be a one time process. By passing the function to our dataset dict's map method, it will apply the same tokenizing step to all the splits in our data.
Step13: Having prepared our dataset, we'll load the pre-trained model. Similar to the tokenizer, we can use the .from_pretrained method, and specify a valid huggingface model.
Step15: We can directly use this model to generate the translations, and eyeball the results.
Step16: Training Model From Scratch
Step17: The huggingface library offers pre-built functionality to avoid writing the training logic from scratch. This step can be swapped out with other higher level trainer packages or even implementing our own logic. We setup the
Step18: We can take a look at the batched examples. Understanding the output can be beneficial if we wish to customize the data collate function later.
Step20: Similar to what we did before, we can use this model to generate the translations, and eyeball the results.
Step21: Training Tokenizer and Model From Scratch
Step22: We'll perform this tokenization step for all our dataset up front, so we can do as little preprocessing as possible while feeding our dataset to model. Note that we do not perform the padding step at this stage.
Step24: Given the custom tokenizer, we can also custom our data collate class that does the padding for input and labels.
Step25: Given that we are using our own tokenizer instead of the pre-trained ones, we need to update a couple of other parameters in our config. The one that's worth pointing out is that this model starts generating with pad_token_id, that's why the decoder_start_token_id is the same as the pad_token_id.
Step26: Confirming saving and loading the model gives us identical predictions.
Step27: As the last step, we'll write a inferencing function that performs batch scoring on a given dataset. Here we generate the predictions and save it in a pandas dataframe along with the source and the target.
|
14,638 | <ASSISTANT_TASK:>
Python Code:
lan = sns.factorplot('Lรคn', data=df, kind='count', size=8, aspect=2)
lan.set_xticklabels(rotation=45)
# Show the 10 contributors that contributed the most. (change value of .nlargest() to show more)
df['Observatรถr'].value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True).nlargest(10)
# A little plot of the contributors
citizenscientists = sns.factorplot('Observatรถr', data=df, kind='count', size=8, aspect=2)
citizenscientists.set_xticklabels(rotation=45)
# Create a new column for the X and Y values, to be converted to latlong below
df['latlong'] = list(zip(df.X, df.Y))
map_1 = folium.Map(location=[62.128, 18.6435], zoom_start=4,
tiles='Mapbox Bright')
for index, row in df.iterrows():
# Transform X,Y to latlong by switching place and replacing comma with dot
longitude = row['latlong'][1].replace(',', '.')
latitude = row['latlong'][0].replace(',', '.')
#print(row)
vet = str(row['Vetenskapligt namn'])
svnamn = str(row['Svenskt namn']).capitalize()
observ = str(row['Observatรถr'])
lokal = str(row['Lokal'])
lan = str(row['Lรคn'])
koord = str(row['Koordinatnoggrannhet (m)'])
datum = str(row['Startdatum'])
datakalla = str(row['Datakรคlla'])
label = str(vet + " - " + svnamn + " | " + observ + " | " + lokal + ", "
+ lan + " | Koordinatnogrannhet (m): " + koord + " | " +
datum + " | " + datakalla + " | Latitud: "
+ latitude + " | Longitud: " + longitude)
folium.Marker([longitude, latitude], popup=label).add_to(map_1)
# Add to change colors: icon = folium.Icon(color ='blue')
map_1.save('mardhund.html')
map_1
# Create dataframe with DateTime as index. For time series.
ts = df.set_index(pd.DatetimeIndex(df['Startdatum'])) # Create a dataframe with datetime as index
print(ts.index) # make sure the dtype='datetime64[ns]'
print("Oldest value: " + ts['Startdatum'].min())
print("Newest value: " + ts['Startdatum'].max())
# Plotting time series of the coordinate accuracy. Has no real meaning, just for code.
ts['Koordinatnoggrannhet (m)'].plot()
ts
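# Possible next step (sketch): count observations per year via the
# DatetimeIndex built above; plain pandas resampling, nothing dataset-specific.
obs_per_year = ts.resample('A').size()
obs_per_year.plot(kind='bar', title='Observations per year')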
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contributors
Step2: Rubrik
Step3: Geographic visualization
Step4: Time series
|
14,639 | <ASSISTANT_TASK:>
Python Code:
!gsutil cp gs://cloud-samples-data/air/fruits360/fruits360-combined.zip .
!ls
!unzip -qn fruits360-combined.zip
import os
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import GlobalAveragePooling2D, Dense
from keras import Sequential, Model, Input
from keras.layers import Conv2D, Flatten, MaxPooling2D, Dense, Dropout, BatchNormalization, ReLU
from keras import Model, optimizers
from keras.models import load_model
from keras.utils import to_categorical
import keras.layers as layers
from sklearn.model_selection import train_test_split
import tensorflow as tf
import numpy as np
import cv2
def Fruits(root):
n_label = 0
images = []
labels = []
classes = {}
os.chdir(root)
classes_ = os.scandir('./')
for class_ in classes_:
print(class_.name)
os.chdir(class_.name)
classes[class_.name] = n_label
# Finer Level Subdirectories per Coarse Level
subclasses = os.scandir('./')
for subclass in subclasses:
os.chdir(subclass.name)
files = os.listdir('./')
for file in files:
image = cv2.imread(file)
images.append(image)
labels.append(n_label)
os.chdir('../')
os.chdir('../')
n_label += 1
os.chdir('../')
images = np.asarray(images)
images = (images / 255.0).astype(np.float32)
labels = to_categorical(labels, n_label)
print("Images", images.shape, "Labels", labels.shape, "Classes", classes)
# Split the processed image dataset into training and test data
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.20, shuffle=True)
return x_train, x_test, y_train, y_test, classes
def Varieties(root):
''' Generate Cascade (Finer) Level Dataset for Fruit Varieties'''
datasets = {}
os.chdir(root)
fruits = os.scandir('./')
for fruit in fruits:
n_label = 0
images = []
labels = []
classes = {}
print('FRUIT', fruit.name)
os.chdir(fruit.name)
varieties = os.scandir('./')
for variety in varieties:
print('VARIETY', variety.name)
classes[variety.name] = n_label
os.chdir(variety.name)
files = os.listdir('./')
for file in files:
image = cv2.imread(file)
images.append(image)
labels.append(n_label)
os.chdir('../')
n_label += 1
images = np.asarray(images)
images = (images / 255.0).astype(np.float32)
labels = to_categorical(labels, n_label)
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.20, shuffle=True)
datasets[fruit.name] = (x_train, x_test, y_train, y_test, classes)
os.chdir('../')
print("IMAGES", x_train.shape, y_train.shape, "CLASSES", classes)
os.chdir('../')
return datasets
!free -m
x_train, x_test, y_train, y_test, fruits_classes = Fruits('Training')
!free -m
# Split out 10% of Train to use for Validation
pivot = int(len(x_train) * 0.9)
x_val = x_train[pivot:]
y_val = y_train[pivot:]
x_train = x_train[:pivot]
y_train = y_train[:pivot]
print("train", x_train.shape, y_train.shape)
print("val ", x_val.shape, y_val.shape)
print("test ", x_test.shape, y_test.shape)
!free -m
def Feeder():
datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=30)
return datagen
def Train(model, datagen, x_train, y_train, x_test, y_test, epochs=10, batch_size=32):
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size, shuffle=True),
steps_per_epoch=len(x_train) / batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
scores = model.evaluate(x_train, y_train, verbose=1)
print("Train", scores)
def ResNet(shape=(128, 128, 3), nclasses=47, optimizer='adam', weights=None):
base_model = ResNet50(weights=weights, include_top=False, input_shape=shape)
for i, layer in enumerate(base_model.layers):
# first: train only the top layers (which were randomly initialized) for Transfer Learning
if weights is not None:
layer.trainable = False
# label the last convolutional layer in the base model as the bottleneck
layer.name = 'bottleneck'
# Get the last convolutional layer of the ResNet base model
x = base_model.output
# add a global spatial average pooling layer
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
#x = Dense(1024, activation='relu')(x)
# and a logistic layer
predictions = Dense(nclasses, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# compile the model (should be done *after* setting layers to non-trainable)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
return model
def ConvNet(shape=(128, 128, 3), nclasses=47, optimizer='adam'):
model = Sequential()
# stem convolutional group
model.add(Conv2D(16, (3,3), padding='same', activation='relu', input_shape=shape))
# conv block - double filters
model.add(Conv2D(32, (3,3), padding='same'))
model.add(ReLU())
model.add(Dropout(0.50))
model.add(MaxPooling2D((2,2)))
# conv block - double filters
model.add(Conv2D(64, (3,3), padding='same'))
model.add(ReLU())
model.add(MaxPooling2D((2,2)))
# conv block - double filters + bottleneck layer
model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D((2,2), name="bottleneck"))
# dense block
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.25))
# classifier
model.add(Dense(nclasses, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
return model
# Select the model for the stem convolutional group (shared layers)
stem = 'ConvNet'
if stem == 'ConvNet':
model = ConvNet(shape=(100, 100, 3))
elif stem == 'ResNet-imagenet':
model = ResNet(weights='imagenet', optimizer='adagrad')
elif stem == 'ResNet':
model = ResNet()
# load previously stored model
else:
model = load_model('model.h5')
datagen = Feeder()
Train(model, datagen, x_train, y_train, x_val, y_val, 5)
scores = model.evaluate(x_test, y_test, verbose=1)
print("Test", scores)
# Save the model and weights
model.save("model-coarse.h5")
def Bottleneck(model):
for layer in model.layers:
layer.trainable = False
if layer.name == 'bottleneck':
bottleneck = layer
print("BOTTLENECK", bottleneck.output.shape)
return bottleneck
# Converse memory by releasing training data for coarse model
import gc
x_train = y_train = x_val = y_val = x_test = y_test = None
gc.collect()
varieties_datasets = Varieties('Training')
for key, dataset in varieties_datasets.items():
_x_train, _x_test, _y_train, _y_test, classes = dataset
# Separate out 10% of train for validation
pivot = int(len(_x_train) * 0.9)
_x_val = _x_train[pivot:]
_y_val = _y_train[pivot:]
_x_train = _x_train[:pivot]
_y_train = _y_train[:pivot]
# save the dataset for this fruit (key)
varieties_datasets[key] = { 'classes': classes, 'train': (_x_train, _y_train), 'val': (_x_val, _y_val), 'test': (_x_test, _y_test) }
!free -m
bottleneck = Bottleneck(model)
cascades = []
for key, val in varieties_datasets.items():
classes = val['classes']
print("KEY", key, classes)
# if only one subclassifier, then skip (i.e., coarse == finer)
if len(classes) == 1:
continue
x = layers.Conv2D(128, (3,3), padding='same', activation='relu')(bottleneck.output)
x = BatchNormalization()(x)
x = MaxPooling2D((2,2))(x)
x = layers.Flatten()(bottleneck.output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dense(len(classes), activation='softmax', name=key.replace(' ', ''))(x)
cascades.append(x)
classifiers = []
for cascade in cascades:
_model = Model(model.input, cascade)
_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
_model.summary()
classifiers.append(_model)
for classifier in classifiers:
# get the output layer for this subclassifier
last = classifier.layers[len(classifier.layers)-1]
print(last, last.name)
# find the corresponding variety dataset
for key, dataset in varieties_datasets.items():
if key == last.name:
x_train, y_train = dataset['train']
x_val, y_val = dataset['val']
datagen = Feeder()
Train(classifier, datagen, x_train, y_train, x_val, y_val, 5)
for classifier in classifiers:
# get the output layer for this subclassifier
last = classifier.layers[len(classifier.layers)-1]
print(last, last.name)
# find the corresponding variety dataset
for key, dataset in varieties_datasets.items():
if key == last.name:
x_test, y_test = dataset['test']
scores = classifier.evaluate(x_test, y_test, verbose=1)
print("Test", scores)
n = 0
for classifier in classifiers:
classifier.save('model-finer-' + str(n) + '.h5')
n += 1
import random
# Let's make a prediction for each type of fruit
for key, dataset in varieties_datasets.items():
# Get the variety test data for this type of fruit
x_test, y_test = dataset['test']
# pick a random image in the variety datast
index = random.randint(0, len(x_test))
# use the coarse model to predict the type of fruit
yhat = np.argmax( model.predict(x_test[index:index+1]) )
# let's find the class name (type of fruit) for this predicted label
for fruit, label in fruits_classes.items():
if label == yhat:
break
print("Yhat", yhat, "Coarse Prediction", key, "=", fruit)
# Prediction was correct
if key == fruit:
if len(dataset['classes']) == 1:
print("No Finer Classifier")
continue
# find the corresponding finer classifier for this type of fruit
for classifier in classifiers:
# get the output layer for this subclassifier
last = classifier.layers[len(classifier.layers)-1]
if last.name == fruit:
# use the finer model to predict the variety of this type of fruit
yhat = np.argmax(classifier.predict(x_test[index:index+1]))
for variety, value in dataset['classes'].items():
if value == np.argmax(y_test[index]):
break
for yhat_variety, value in dataset['classes'].items():
if value == yhat:
break
print("Yhat", yhat, "Finer Prediction", variety, "=", yhat_variety)
break
# extractfeatures = Model(input=model.input, output=model.get_layer('bottleneck').output)
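# Sketch of the idea hinted at in the comment above: build a bottleneck feature
# extractor from the coarse model (keyword names follow the Keras functional API
# already used in this notebook; `feature_extractor` is a new, illustrative name).
feature_extractor = Model(inputs=model.input, outputs=model.get_layer('bottleneck').output)
bottleneck_features = feature_extractor.predict(x_test[:8])
print('bottleneck feature shape:', bottleneck_features.shape)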
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting Started
Step2: Make Datasets
Step3: Make Finer Category Datasets
Step4: Generate the preprocessed Coarse Dataset
Step5: Split Coarse Dataset (by Fruit) into Train, Validation and Test
Step6: Make Trainers
Step7: Make Trainer
Step8: Make Model
Step9: Simple ConvNet
Step10: Start Training
Step11: Train the Coarse Model
Step12: Save the Coarse Model
Step13: Prepare Coarse CNN for cascade training
Step14: Generate the preprocessed Finer Datasets
Step15: Add Each Cascade (Finer) Classifier
Step16: Compile each finer classifier
Step17: Train the finer classifiers
Step18: Evaluate the Model
Step19: Save the Finer Models
Step20: Let's do some cascading predictions
Step21: End of Notebook
|
14,640 | <ASSISTANT_TASK:>
Python Code:
import dphox as dp
import numpy as np
import holoviews as hv
from trimesh.transformations import rotation_matrix
hv.extension('bokeh')
import warnings
warnings.filterwarnings('ignore') # ignore shapely warnings
dp.CommonLayer.RIDGE_SI
FABLESS = dp.Foundry(
stack=[
# 1. First define the photonic stack
dp.ProcessStep(dp.ProcessOp.GROW, 0.2, dp.SILICON, dp.CommonLayer.RIDGE_SI, (100, 0), 2),
dp.ProcessStep(dp.ProcessOp.DOPE, 0.1, dp.P_SILICON, dp.CommonLayer.P_SI, (400, 0)),
dp.ProcessStep(dp.ProcessOp.DOPE, 0.1, dp.N_SILICON, dp.CommonLayer.N_SI, (401, 0)),
dp.ProcessStep(dp.ProcessOp.DOPE, 0.1, dp.PP_SILICON, dp.CommonLayer.PP_SI, (402, 0)),
dp.ProcessStep(dp.ProcessOp.DOPE, 0.1, dp.NN_SILICON, dp.CommonLayer.NN_SI, (403, 0)),
dp.ProcessStep(dp.ProcessOp.DOPE, 0.1, dp.PPP_SILICON, dp.CommonLayer.PPP_SI, (404, 0)),
dp.ProcessStep(dp.ProcessOp.DOPE, 0.1, dp.NNN_SILICON, dp.CommonLayer.NNN_SI, (405, 0)),
dp.ProcessStep(dp.ProcessOp.GROW, 0.1, dp.SILICON, dp.CommonLayer.RIB_SI, (101, 0), 2),
dp.ProcessStep(dp.ProcessOp.GROW, 0.2, dp.NITRIDE, dp.CommonLayer.RIDGE_SIN, (300, 0), 2.5),
dp.ProcessStep(dp.ProcessOp.GROW, 0.1, dp.ALUMINA, dp.CommonLayer.ALUMINA, (200, 0), 2.5),
# 2. Then define the metal connections (zranges).
dp.ProcessStep(dp.ProcessOp.GROW, 1, dp.COPPER, dp.CommonLayer.VIA_SI_1, (500, 0), 2.2),
dp.ProcessStep(dp.ProcessOp.GROW, 0.2, dp.COPPER, dp.CommonLayer.METAL_1, (501, 0)),
dp.ProcessStep(dp.ProcessOp.GROW, 0.5, dp.COPPER, dp.CommonLayer.VIA_1_2, (502, 0)),
dp.ProcessStep(dp.ProcessOp.GROW, 0.2, dp.COPPER, dp.CommonLayer.METAL_2, (503, 0)),
dp.ProcessStep(dp.ProcessOp.GROW, 0.5, dp.ALUMINUM, dp.CommonLayer.VIA_2_PAD, (504, 0)),
# Note: negative means grow downwards (below the ceiling of the device).
dp.ProcessStep(dp.ProcessOp.GROW, -0.3, dp.ALUMINUM, dp.CommonLayer.PAD, (600, 0), 5),
dp.ProcessStep(dp.ProcessOp.GROW, 0.2, dp.HEATER, dp.CommonLayer.HEATER, (700, 0), 3.2),
dp.ProcessStep(dp.ProcessOp.GROW, 0.5, dp.ALUMINUM, dp.CommonLayer.VIA_HEATER_2, (505, 0)),
# 3. Finally specify the clearout (needed for MEMS).
dp.ProcessStep(dp.ProcessOp.SAC_ETCH, 4, dp.ETCH, dp.CommonLayer.CLEAROUT, (800, 0)),
dp.ProcessStep(dp.ProcessOp.DUMMY, 4, dp.DUMMY, dp.CommonLayer.TRENCH, (41, 0)),
dp.ProcessStep(dp.ProcessOp.DUMMY, 4, dp.DUMMY, dp.CommonLayer.PHOTONIC_KEEPOUT, (42, 0)),
dp.ProcessStep(dp.ProcessOp.DUMMY, 4, dp.DUMMY, dp.CommonLayer.METAL_KEEPOUT, (43, 0)),
dp.ProcessStep(dp.ProcessOp.DUMMY, 4, dp.DUMMY, dp.CommonLayer.BBOX, (44, 0)),
],
height=5
)
grating = dp.FocusingGrating(
n_env=dp.AIR.n,
n_core=dp.SILICON.n,
min_period=40,
num_periods=30,
wavelength=1.55,
fiber_angle=82,
duty_cycle=0.5,
waveguide_w=2
)
interposer = dp.Interposer(
waveguide_w=2,
n=6,
init_pitch=50,
final_pitch=127,
self_coupling_extension=50
).device().rotate(90) # to make it easier to see things
grating.hvplot()
interposer.hvplot()
for i in range(6):
interposer.place(grating, interposer.port[f'b{i}'], grating.port['a0'])
interposer.place(grating, interposer.port[f'l0'], grating.port['a0'])
interposer.place(grating, interposer.port[f'l1'], grating.port['a0'])
interposer.hvplot()
interposer.clear(grating)
interposer.hvplot()
via1 = dp.Via((2, 2), 0.2)
via2 = dp.Via((2, 2), 0.2, pitch=4, shape=(3, 3),
metal=[dp.CommonLayer.VIA_HEATER_2, dp.CommonLayer.METAL_2, dp.CommonLayer.PAD],
via=[dp.CommonLayer.VIA_HEATER_2, dp.CommonLayer.VIA_1_2, dp.CommonLayer.VIA_2_PAD])
via1.hvplot().opts(title='single via, single layer') + via2.hvplot().opts(title='array via, multilayer')
scene = via2.trimesh()
scene.apply_transform(rotation_matrix(-np.pi / 3, (1, 0, 0)))
scene.show()
grating = dp.FocusingGrating(
n_env=dp.AIR.n,
n_core=dp.SILICON.n,
min_period=40,
num_periods=30,
wavelength=1.55,
fiber_angle=82,
duty_cycle=0.5,
waveguide_w=2
)
scene = grating.trimesh()
# apply some settings to the scene to make the default view more palatable
scene.apply_transform(np.diag((1, 1, 5, 1))) # make it easier to see the grating lines by scaling up the z-axis by 5x
scene.apply_transform(rotation_matrix(-np.pi / 2.5, (1, 0, 0)))
scene.show()
core = dp.straight(length=10).path(0.5)
slab = dp.cubic_taper(init_w=0.5, change_w=0.5, length=10, taper_length=3)
dp.RibDevice(core, slab).hvplot()
ps = dp.ThermalPS(dp.straight(10).path(1), ps_w=2, via=dp.Via((0.4, 0.4), 0.1,
metal=[dp.CommonLayer.HEATER, dp.CommonLayer.METAL_2],
via=[dp.CommonLayer.VIA_HEATER_2]))
ps.hvplot()
spiral_ps = dp.ThermalPS(dp.spiral_delay(8, 1, 2).path(0.5),
ps_w=1, via=dp.Via((0.4, 0.4), 0.1,
metal=[dp.CommonLayer.HEATER, dp.CommonLayer.METAL_2], via=[dp.CommonLayer.VIA_HEATER_2]))
spiral_ps.hvplot()
scene = spiral_ps.trimesh()
scene.apply_transform(rotation_matrix(-np.pi / 2.5, (1, 0, 0)))
scene.show()
dc = dp.DC(waveguide_w=1, interaction_l=2, radius=2.5, interport_distance=10, gap_w=0.5)
mzi = dp.MZI(dc, top_internal=[ps.copy], bottom_internal=[ps.copy], top_external=[ps.copy], bottom_external=[ps.copy])
mzi.hvplot()
mzi = dp.MZI(dc, top_internal=[ps, dp.bent_trombone(4, 10).path(1)],
bottom_internal=[ps], top_external=[ps], bottom_external=[ps])
mzi.hvplot()
from dphox.demo import grating
dc = dp.DC(waveguide_w=0.5, interaction_l=10, radius=5, interport_distance=40, gap_w=0.3)
tap_dc = dp.TapDC(
dp.DC(waveguide_w=0.5, interaction_l=0, radius=2, interport_distance=5, gap_w=0.3), radius=2,
).with_gratings(grating)
mzi = dp.MZI(dc, top_internal=[spiral_ps, tap_dc, 5], bottom_internal=[spiral_ps, tap_dc])
for port in mzi.port.values():
mzi.place(grating, port)
mzi.hvplot()
mzi.path(flip=True).hvplot()
scene = mzi.trimesh()
# scene.apply_transform(np.diag((1, 1, 5, 1))) # make it easier to see the grating lines by scaling up the z-axis by 5x
scene.apply_transform(rotation_matrix(-np.pi / 2.5, (1, 0, 0)))
scene.show()
dp.LocalMesh(mzi, 8, triangular=False).hvplot()
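# Possible follow-up (sketch): render the same mesh in 3D, assuming LocalMesh
# exposes the same trimesh() helper used for the other devices above.
mesh = dp.LocalMesh(mzi, 8, triangular=False)
scene = mesh.trimesh()
scene.apply_transform(rotation_matrix(-np.pi / 2.5, (1, 0, 0)))
scene.show()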
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Device
Step2: Foundry
Step3: place
Step4: Now let's see what happens after we add gratings to the interposer using place.
Step5: clear
Step6: Example devices and visualizations
Step7: FocusingGrating
Step8: RibDevice
Step9: ThermalPS
Step10: The thermal phase shifter can in a sense be also thought of as a cross section, since the phase shifter can be set above any desired path.
Step11: Visualize using trimesh.
Step12: MZI
Step13: Here are some MZIs that have a different number of components in each of the arms.
Step14: We also have option to ignore some of the devices.
Step15: Finally, let's look at our spiral phase shifter MZI in 3D.
Step16: LocalMesh
|
14,641 | <ASSISTANT_TASK:>
Python Code:
# library imports
import pandas as pd
import requests
import pytz
base_url = "http://0.0.0.0:8000"
headers = {"Authorization": "Bearer tokstr"}
url = base_url + "/api/v1/projects/"
projects = requests.get(url, headers=headers).json()
projects
url = base_url + "/api/v1/consumption_metadatas/?summary=True&projects={}".format(projects[0]['id'])
consumption_metadatas = requests.get(url, headers=headers).json()
consumption_metadatas[0]
url = base_url + "/api/v1/consumption_records/?metadata={}".format(consumption_metadatas[0]['id'])
consumption_records = requests.get(url, headers=headers).json()
consumption_records[:3]
url = base_url + "/api/v1/projects/{}/".format(projects[0]['id'])
requests.delete(url, headers=headers)
project_data = pd.read_csv('sample-project-data.csv',
parse_dates=['retrofit_start_date', 'retrofit_end_date']).iloc[0]
project_data
data = {
"project_id": project_data.project_id,
"zipcode": str(project_data.zipcode),
"baseline_period_end": pytz.UTC.localize(project_data.retrofit_start_date).isoformat(),
"reporting_period_start": pytz.UTC.localize(project_data.retrofit_end_date).isoformat(),
"project_owner": 1,
}
print(data)
url = base_url + "/api/v1/projects/"
new_project = requests.post(url, json=data, headers=headers).json()
new_project
url = base_url + "/api/v1/projects/"
requests.post(url, json=data, headers=headers).json()
data = [
{
"project_id": project_data.project_id,
"zipcode": str(project_data.zipcode),
"baseline_period_end": pytz.UTC.localize(project_data.retrofit_start_date).isoformat(),
"reporting_period_start": pytz.UTC.localize(project_data.retrofit_end_date).isoformat(),
"project_owner_id": 1,
}
]
print(data)
url = base_url + "/api/v1/projects/sync/"
requests.post(url, json=data, headers=headers).json()
energy_data = pd.read_csv('sample-energy-data_project-ABC_zipcode-50321.csv',
parse_dates=['date'], dtype={'zipcode': str})
energy_data.head()
interpretation_mapping = {"electricity": "E_C_S"}
data = [
{
"project_project_id": energy_data.iloc[0]["project_id"],
"interpretation": interpretation_mapping[energy_data.iloc[0]["fuel"]],
"unit": energy_data.iloc[0]["unit"].upper(),
"label": energy_data.iloc[0]["trace_id"].upper()
}
]
data
url = base_url + "/api/v1/consumption_metadatas/sync/"
consumption_metadatas = requests.post(url, json=data, headers=headers).json()
consumption_metadatas
data = [{
"metadata_id": consumption_metadatas[0]['id'],
"start": pytz.UTC.localize(row.date.to_datetime()).isoformat(),
"value": row.value,
"estimated": row.estimated,
} for _, row in energy_data.iterrows()]
data[:3]
url = base_url + "/api/v1/consumption_records/sync2/"
consumption_records = requests.post(url, json=data, headers=headers)
consumption_records.text
url = base_url + "/api/v1/consumption_records/?metadata={}".format(consumption_metadatas[0]['id'])
consumption_records = requests.get(url, json=data, headers=headers).json()
consumption_records[:3]
data = {
"project": new_project['id'],
"meter_class": "EnergyEfficiencyMeter",
"meter_settings": {}
}
data
url = base_url + "/api/v1/project_runs/"
project_run = requests.post(url, json=data, headers=headers).json()
project_run
url = base_url + "/api/v1/project_runs/{}/".format(project_run['id'])
project_runs = requests.get(url, headers=headers).json()
project_runs
url = base_url + "/api/v1/project_results/"
project_results = requests.get(url, headers=headers).json()
project_results
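# Optional (sketch): pick out the results belonging to the project created above;
# the 'project' field name is an assumption about the serialized payload.
our_results = [r for r in project_results if r.get('project') == new_project['id']]
print(len(our_results))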
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you followed the datastore development setup instructions, you will
Step2: Let's test the API by requesting a list of projects in the datastore. Since the dev_seed command creates a sample project, this will return a response showing that project.
Step3: Although we'll delete this one in a moment, we can first explore a
Step4: We can also query for consumption records by metadata primary key.
Step5: Now we'll delete the project that was created by the dev_seed command and make one of our own.
Step6: If you try to post another project with the same project_id, you'll get an error message.
Step7: However, there is another endpoint you can hit to sync the project - update it if it exists, create it if it doesn't. This endpoint works almost the same way, but expects a list of data in a slightly different format
Step8: Now we can give this project some consumption data. Ene
Step9: Then we'll the sync endpoint for consumption metadata, which will create a new record or update an existing record. We have one trace here
Step10: Let's turn that CSV into records.
Step11: We can verify that these records were created by querying by consumption metadata id.
Step12: We now have a simple project with a single trace of data. Now we will move to running a meter on that project
Step13: This creates a task to run the meter on the indicated project.
Step14: If this project run succeeded, we can inspect its results.
|
14,642 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def mk_rot_mat(rad=np.pi / 4):
rot = np.array([[np.cos(rad),-np.sin(rad)], [np.sin(rad), np.cos(rad)]])
return rot
rot_mat = mk_rot_mat( np.pi / 4)
x = np.random.randn(100) * 5
y = np.random.randn(100)
points = np.vstack([y,x])
rotated = np.dot(points.T, rot_mat).T
outliers = np.tile([15,-10], 10).reshape((-1,2))
pts = np.vstack([rotated.T, outliers]).T
U,s,Vt = np.linalg.svd(rotated)
U_n,s_n,Vt_n = np.linalg.svd(pts)
plt.ylim([-20,20])
plt.xlim([-20,20])
plt.scatter(*pts)
pca_line = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*pca_line)
rpca_line = np.dot(U_n[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*rpca_line, c='r')
import tga
reload(tga)
import logging
logger = logging.getLogger(tga.__name__)
logger.setLevel(logging.INFO)
X = pts.copy()
v = tga.tga(X.T, eps=1e-5, k=1, p=0.0)
plt.ylim([-20,20])
plt.xlim([-20,20])
plt.scatter(*pts)
tga_line = np.dot(v[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*tga_line)
#plt.scatter(*L, c='red')
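# Optional numeric check (sketch): compare the leading directions found by plain
# SVD and by TGA against the clean-data direction U[0], in degrees.
def angle_deg(u, w):
    cosang = abs(np.dot(u, w)) / (np.linalg.norm(u) * np.linalg.norm(w))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
print("plain SVD vs clean: %.2f deg" % angle_deg(U_n[0], U[0]))
print("TGA vs clean: %.2f deg" % angle_deg(v[0], U[0]))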
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make Some Toy Data
Step2: Add Some Outliers to Make Life Difficult
Step3: Compute SVD on both the clean data and the outliery data
Step4: Just 10 outliers can really screw up our line fit!
Step5: Now the robust pca version!
Step6: Factor the matrix into L (low rank) and S (sparse) parts
Step7: And have a look at this!
|
14,643 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import IFrame
IFrame('https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life',
width = 800, height = 500)
import numpy as np
%pylab inline
from JSAnimation.IPython_display import display_animation, anim_to_html
from matplotlib import animation
from random import randint
from copy import deepcopy
class ConwayGOLGrid():
Represents a grid in the Conway's Game of Life problem where
each of the cells contained in the grid may be either alive or
dead for any given state.
def __init__(self, width=100, height=100, startCells=[],
optimized=True, variant="B3/S23"):
Initializes a Grid as a 2D list and comprised of Cells.
Parameters
----------
width, height: size of the board
startCells: list of cells to start as alive.
If startCells is empty, cells will spawn as alive
at a rate of 30%.
startCells should be a list of coordinates, (x,y)
optimized: determines whether or not to use data structures
to improve overall run-time.
variant: defines variant of life played. Options as follows:
B3/S23: default (Born with 3, Survives with 2 or 3)
B6/S16
B1/S12
B36/S23: Highlife
B2/S3: Seeds
B2/S
self.width, self.height = width, height
self.__optimized = optimized
self.cells = []
self.__living = set()
if variant == "B3/S23":
self.__born = [3]
self.__survives = [2, 3]
elif variant == "B6/S16":
            self.__born = [6]  # B6/S16: born with 6 neighbors
            self.__survives = [1, 6]
elif variant == "B1/S12":
self.__born = [1]
self.__survives = [1,2]
elif variant == "B36/S23":
self.__born = [3, 6]
self.__survives = [2, 3]
elif variant == "B2/S3":
self.__born = [2]
self.__survives = [3]
elif variant == "B2/S":
self.__born = [2]
self.__survives = []
else:
print variant, " is not a valid variant. Using B3/S23."
self.__born = [3]
self.__survives = [2,3]
for x in range(self.width):
# Create a new list for 2D structure
self.cells.append([])
for y in range(self.height):
# If no startCells provided, randomly init as alive
if len(startCells) == 0 and randint(0,100) < 30:
self.cells[x].append(ConwayGOLCell(x, y, True))
self.__living.add((x,y))
else:
self.cells[x].append(ConwayGOLCell(x,y))
# Give life to all cells in the startCells list
for cell in startCells:
self.cells[cell[0]][cell[1]].spawn()
            self.__living.add(cell)
def update(self):
Updates the current state of the game using the standard
Game of Life rules.
Parameters
----------
None
Returns
-------
True if there are remaining alive cells.
False otherwise.
"""
alive = False
if not self.__optimized:
# Deep copy the list to make sure the entire board updates correctly
tempGrid = deepcopy(self.cells)
# For every cell, check the neighbors.
for x in range(self.width):
for y in range(self.height):
neighbors = self.cells[x][y].num_neighbors(self)
# Living cells stay alive with _survives # of neighbors, else die
if self.cells[x][y].is_alive():
if not (neighbors in self.__survives):
tempGrid[x][y].die()
else:
alive = True
# Non living cells come alive with 3 neighbors
else:
if neighbors in self.__born:
tempGrid[x][y].spawn()
alive = True
# Deep copy the tempGrid to prevent losing the reference when function is over
self.cells = deepcopy(tempGrid)
else:
count = [[0 for y in range(self.height)] for x in range(self.width)]
to_check = set()
# For each cell that is alive...
for cell in self.__living:
x, y = cell
to_check.add(cell)
# Grab all of its neighbors
for neighbor in self.cells[x][y].neighbors:
n_x, n_y = neighbor
# If the neighbors are valid
if ( n_x >= 0 and n_y >= 0 and
n_x < self.width and n_y < self.height):
# Then increment their count and add them to a set
count[n_x][n_y] += 1
to_check.add(neighbor)
# Then start over living.
self.__living = set()
# Above, we add 1 to the count each time a cell is touched by an alive cell.
# So we know count contains the number of alive neighbors any given cell has.
# We use this to quickly check the rules of life and add cells to living array as needed.
for cell in to_check:
x, y = cell
if self.cells[x][y].is_alive():
if not count[x][y] in self.__survives:
self.cells[x][y].die()
else:
self.__living.add(cell)
alive = True
else:
if count[x][y] in self.__born:
self.cells[x][y].spawn()
self.__living.add(cell)
alive = True
return alive
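# Note: the optimized branch only visits currently-living cells and their
# neighbours, so sparse boards update far faster than the full
# width x height scan used by the unoptimized branch.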
def print_text_grid(self):
"""
Prints the current state of the board using text.
Parameters
----------
None
Returns
-------
None
"""
for y in range(self.height):
for x in range(self.width):
if self.cells[x][y].is_alive():
print "X" ,
else:
print "." ,
print "\n"
print "\n\n"
def conway_step_test(self, X):
"""Game of life step using generator expressions"""
nbrs_count = sum(np.roll(np.roll(X, i, 0), j, 1)
for i in (-1, 0, 1) for j in (-1, 0, 1)
if (i != 0 or j != 0))
return (nbrs_count == 3) | (X & (nbrs_count == 2))
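# The generator expression above uses np.roll to shift the whole board one
# cell in each of the 8 directions and sums the shifted copies, giving every
# cell's live-neighbour count in one vectorised pass (np.roll wraps around,
# so this test version implicitly uses toroidal boundaries).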
def conway_animate(self, dpi=10, frames=10,
interval=300, mode='loop'):
"""
Animate Conway's Game of Life
Parameters
----------
dpi: (int) number of dots/inch in animation (size of board)
frames: (int) number of frames for animation
interval: (float) time between frames (ms)
mode: (string) animation mode (options: 'loop','once','reflect')
"""
# Replace this block with the conversion of our cell data
np.random.seed(0)
X_old = np.zeros((30, 40), dtype=bool)
r = np.random.random((10, 20))
X_old[10:20, 10:30] = (r > 0.75)
# Replace X_old with new transformed data
print X_old
X = np.asarray(X_old)
X = X.astype(bool)
fig = plt.figure(figsize=(X.shape[1] * 1. / dpi, X.shape[0] * 1. / dpi),
dpi=dpi)
ax = fig.add_axes([0,0,1,1], xticks=[], yticks=[], frameon=False)
#im = ax.imshow(X)
im = ax.imshow(X, cmap=plt.cm.binary, interpolation='nearest')
im.set_clim(-0.05, 1)
def animate(i):
im.set_data(animate.X)
# Replace with self.update()
animate.X = self.conway_step_test(animate.X)
return (im,)
animate.X = X
anim = animation.FuncAnimation(fig, animate,
frames=frames, interval=interval)
return display_animation(anim, default_mode=mode)
class ConwayGOLCell():
"""
Represents a cell in the Conway's Game of Life problem where
a cell can either be alive or dead and the next state of the
cell is based on the states of the immediate (8) neighbors.
"""
def __init__(self, x, y, alive=False):
"""
Create information for the given cell including the x and
y coordinates of the cell, whether it is currently alive
or dead, its neighbors, and its current color.
Parameters
----------
x, y: give the coordinate of the cell in grid
alive: gives current state of the cell
Returns
-------
None
"""
self.x, self.y = x, y
self.alive = alive
self.neighbors = [(x-1,y-1), (x, y-1), (x+1, y-1),
(x-1,y ), (x+1, y ),
(x-1,y+1), (x, y+1), (x+1, y+1)]
self.color = (255,255,255)
def spawn(self):
"""
Changes the state of a cell from dead to alive. Assumes
that the cell is dead to be changed to alive (no need to
modify if already alive).
Parameters
----------
None
Returns
-------
None
"""
assert self.alive==False
self.alive = True
def die(self):
"""
Changes the state of a cell from alive to dead. Assumes
that the cell is alive to be changed to dead (no need to
modify if already dead).
Parameters
----------
None
Returns
-------
None
"""
assert self.alive==True
self.alive = False
def is_alive(self):
"""
Returns status of a cell.
Parameters
----------
None
Returns
-------
True if cell is alive.
"""
return self.alive
def num_neighbors(self, grid):
"""
Returns the number of neighbors of a cell.
Parameters
----------
grid: the ConwayGOLGrid object containing all cells
Returns
-------
number of alive neighbors
"""
num_neighbors = 0
for cell in self.neighbors:
x,y = cell
if ( x >= 0 and x < grid.width and
y >= 0 and y < grid.height and
grid.cells[x][y].is_alive()):
num_neighbors += 1
return num_neighbors
test_game = ConwayGOLGrid(20,20, optimized=False, variant="B2/S")
test_game.print_text_grid()
count = 0
while count < 20 and test_game.update():
count += 1
test_game.print_text_grid()
'''
while test_game.update():
if count % 10 == 0:
print "Iteration ", count
test_game.print_text_grid()
if count > 100:
break
count += 1
'''
print "Finished after ", count, "iterations"
test_game2 = ConwayGOLGrid(20,20, optimized=True,variant="B2/S")
test_game.conway_animate(dpi=5, frames=20, mode='loop')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import necessary libraries
Step8: Conway Game of Life Grid Class
Step15: Conway Game of Life Cell Class
Step16: Test Text Grid
Step17: Test Animation Grid
|
14,644 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import LightSource
from sympy import *
from sympy import init_printing
%matplotlib notebook
x, y, z, t = symbols('x y z t')
u, v, a, b, R = symbols('u v a b R')
k, m, n = symbols('k m n', integer=True)
init_printing()
ellipse = Matrix([[0, a*cos(u), b*sin(u)]]).T
Qx = Matrix([[1, 0, 0],
[0, cos(n*v), -sin(n*v)],
[0, sin(n*v), cos(n*v)]])
Qx
Qz = Matrix([[cos(v), -sin(v), 0],
[sin(v), cos(v), 0],
[0, 0, 1]])
Qz
trans = Matrix([[0, R, 0]]).T
trans
torobius = Qz*(Qx*ellipse + trans)
torobius
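# Putting the pieces together: the cross-section ellipse is first twisted about
# the x-axis by the angle n*v (Qx), pushed out to radius R (trans), and then
# swept around the z-axis by v (Qz). The integer n controls how many times the
# cross-section turns per revolution, giving the twisted-torus ("torobius")
# surface plotted below.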
x_num = lambdify((u, v, n, a, b, R), torobius[0], "numpy")
y_num = lambdify((u, v, n, a, b, R), torobius[1], "numpy")
z_num = lambdify((u, v, n, a, b, R), torobius[2], "numpy")
u_par, v_par = np.mgrid[0:2*np.pi:50j, 0:2*np.pi:50j]
X = x_num(u_par, v_par, 2, 0.5, 1., 5.)
Y = y_num(u_par, v_par, 2, 0.5, 1., 5.)
Z = z_num(u_par, v_par, 2, 0.5, 1., 5.)
fig = plt.figure(figsize=(6,5))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap="YlGnBu_r")
ax.set_zlim(-2, 2);
plt.show()
E = (torobius.diff(u).T * torobius.diff(u))[0]
F = (torobius.diff(u).T * torobius.diff(v))[0]
G = (torobius.diff(v).T * torobius.diff(v))[0]
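# E, F and G are the coefficients of the first fundamental form
# (dot products of the tangent vectors with respect to u and v).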
def cross(A, B):
return Matrix([[A[1]*B[2] - A[2]*B[1]],
[A[2]*B[0] - A[0]*B[2]],
[A[0]*B[1] - A[1]*B[0]]])
n_vec = cross(torobius.diff(u).T, torobius.diff(v))
n_vec = simplify(n_vec/sqrt((n_vec.T * n_vec)[0]))
L = (torobius.diff(u, 2).T * n_vec)[0]
M = (torobius.diff(u, 1, v, 1).T * n_vec)[0]
N = (torobius.diff(v, 2).T * n_vec)[0]
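# L, M and N are the coefficients of the second fundamental form: second
# derivatives of the parametrisation projected onto the unit normal n_vec.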
gauss_curvature = (L*N - M**2)/(E*G - F**2)
mean_curvature = (E*N - 2*F*M + G*L) / (2*(E*G - F**2))
gauss_num = lambdify((u, v, n, a, b, R), gauss_curvature, "numpy")
mean_num = lambdify((u, v, n, a, b, R), mean_curvature, "numpy")
fig = plt.figure(figsize=(6,5))
ax = fig.add_subplot(111, projection='3d')
gauss_K = gauss_num(u_par, v_par, 2, 0.5, 1., 5.)
vmax = gauss_K.max()
vmin = gauss_K.min()
FC = (gauss_K - vmin) / (vmax - vmin)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=plt.cm.YlGnBu(FC))
surf.set_edgecolors("k")
ax.set_title("Gaussian curvature", fontsize=18)
ax.set_zlim(-2, 2);
fig = plt.figure(figsize=(6,5))
ax = fig.add_subplot(111, projection='3d')
mean_H = mean_num(u_par, v_par, 2, 0.5, 1., 5.)
vmax = mean_H.max()
vmin = mean_H.min()
FC = (mean_H - vmin) / (vmax - vmin)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=plt.cm.YlGnBu(FC))
surf.set_edgecolors("k")
ax.set_title("Mean curvature", fontsize=18)
ax.set_zlim(-2, 2);
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Torobius
Step2: We can generate our surface as a composition of two rotations, one around the $z$-axis, and the other one with respect to an axis that is perpendicular to the ellipse.
Step3: And the rotation matrix around $z$ looks like
Step4: We need to translate the ellipse in the $y$ direction
Step5: The shape is then defined in parametric coordinates as
Step6: Curvatures
Step7: The second fundamental form gives the information about the curvatures (their eigenvalues are termed the principal curvatures).
Step8: And the coefficients are defined as
Step9: The most common measures of curvature are the mean and Gaussian curvatures.
|
14,645 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-3', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
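# Illustrative example only (not actual model documentation): a free-text
# property would be filled in like
# DOC.set_value("ExampleAtmosModel v1.0")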
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
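# Illustrative example only: for an enumerated property you would pick one of
# the listed choices, e.g.
# DOC.set_value("AGCM")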
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
14,646 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import thinkstats2
import thinkplot
import pandas as pd
import numpy as np
import math, random
mean, var = 163, 52.8
std = math.sqrt(var)
pdf = thinkstats2.NormalPdf(mean, std)
print "Density:",pdf.Density(mean + std)
thinkplot.Pdf(pdf, label='normal')
thinkplot.Show()
# by default, makes a pmf stretching 3*sigma in either direction
pmf = pdf.MakePmf()
thinkplot.Pmf(pmf,label='normal')
thinkplot.Show()
sample = [random.gauss(mean, std) for i in range(500)]
sample_pdf = thinkstats2.EstimatedPdf(sample)
thinkplot.Pdf(sample_pdf, label='sample PDF made by KDE')
##Evaluates PDF at 101 points
pmf = sample_pdf.MakePmf()
thinkplot.Pmf(pmf, label='sample PMF')
thinkplot.Show()
def RawMoment(xs, k):
return sum(x**k for x in xs) / len(xs)
def CentralMoment(xs, k):
mean = RawMoment(xs, 1)
return sum((x - mean)**k for x in xs) / len(xs)
##normalized so there are no units
def StandardizedMoment(xs, k):
var = CentralMoment(xs, 2)
std = math.sqrt(var)
return CentralMoment(xs, k) / std**k
def Skewness(xs):
return StandardizedMoment(xs, 3)
def Median(xs):
cdf = thinkstats2.Cdf(xs)
return cdf.Value(0.5)
def PearsonMedianSkewness(xs):
median = Median(xs)
mean = RawMoment(xs, 1)
var = CentralMoment(xs, 2)
std = math.sqrt(var)
gp = 3 * (mean - median) / std
return gp
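# A small added check of the helpers above (a sketch): for a synthetic Gaussian
# sample, both skewness measures should come out close to zero.
demo_sample = [random.gauss(0, 1) for i in range(1000)]
Skewness(demo_sample), PearsonMedianSkewness(demo_sample)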
import hinc, hinc2
print "starting..."
df = hinc.ReadData()
log_sample = hinc2.InterpolateSample(df)
log_cdf = thinkstats2.Cdf(log_sample)
print "done"
# thinkplot.Cdf(log_cdf)
# thinkplot.Show(xlabel='household income',
# ylabel='CDF')
import density
sample = np.power(10,log_sample)
mean, median = density.Summarize(sample)
log_pdf = thinkstats2.EstimatedPdf(log_sample)
thinkplot.Pdf(log_pdf, label='KDE of income')
thinkplot.Show(xlabel='log10 $',
ylabel='PDF')
thinkplot.PrePlot(2, rows=2)
thinkplot.SubPlot(1)
sample_cdf = thinkstats2.Cdf(sample, label='SampleCdf')
thinkplot.Cdf(sample_cdf)
thinkplot.SubPlot(2)
sample_pdf = thinkstats2.EstimatedPdf(sample)
thinkplot.Pdf(sample_pdf)
pctBelowMean = sample_cdf.Prob(mean) * 100
print "%d%% of households report taxable incomes below the mean" % pctBelowMean
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kernel density estimation - an algorithm that takes a sample and finds an approximately smooth PDF that fits the data.
Step2: Advantages of KDE
Step3: ...note that when k = 2, the second central moment is variance.
Step4: Pearson's median skewness coefficient is a measure of the skewness based on the difference between the sample mean and median
Step5: To summarize the Moments
Step6: Compute the mean, median, skewness, and Pearson's skewness. What fraction of households report a taxable income below the mean?
|
14,647 | <ASSISTANT_TASK:>
Python Code:
import math
def FindKthChar(Str, K, X):
    # Each digit d in Str conceptually occupies d**X positions of an expanded
    # string; return the digit that covers the K-th position.
    ans = ''
    Sum = 0
    for i in range(len(Str)):
        digit = ord(Str[i]) - 48          # character digit -> integer value
        Range = int(math.pow(digit, X))   # positions covered by this digit
        Sum += Range
        if K <= Sum:
            ans = Str[i]
            break
    return ans
Str = "123"
K = 9
X = 3
ans = FindKthChar(Str , K , X )
print(ans )
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,648 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
# Import the financial library.
# The import only needs to be run once.
import cashflows as cf
costs = cf.cashflow(const_value=0, # default value of 0
                    periods=6,     # purchase + useful life
start=2000,
freq='A')
costs[0] = 200
costs
life = cf.cashflow(const_value=0,  # useful life defaults to 0
                   periods=6,      # purchase + useful life
start=2000,
freq='A')
life[0] = 5  # useful life of the asset
life
cf.depreciation_sl(costs=costs,  # investment costs
                   life=life)    # useful life of each investment
costs = cf.cashflow(const_value=0,  # default value
                    periods=20,     # number of periods
start=2000,
freq='A')
costs['2001'] = 200
costs['2006'] = 300
costs
life = cf.cashflow(const_value=0,  # default value
                   periods=20,     # number of periods
start=2000,
freq='A')
life['2001'] = 5
life['2006'] = 10
life
cf.depreciation_sl(costs=costs,  # investments
                   life=life)    # useful life
costs = cf.cashflow(const_value=[200]+[0]*5,
start=2000,
freq='A')
life = cf.cashflow(const_value=[5]+[0]*5,
start='2000',
freq='A')
cf.depreciation_soyd(costs=costs, life=life)
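# A plain-Python sanity check of the sum-of-years'-digits schedule used above
# (a sketch, independent of the cashflows API; the cost of 200 and the 5-year
# life mirror the example): year j is charged cost * (n - j + 1) / (n*(n + 1)/2).
soyd_check = [200 * (5 - j + 1) / 15.0 for j in range(1, 6)]
soyd_check, sum(soyd_check)  # the yearly charges should add up to the full cost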
def compute_cf(ingreso_operativo,
tax_rate,
depreciacion):
utilidad_AI = ingreso_operativo - depreciacion
impuestos = cf.after_tax_cashflow(utilidad_AI, tax_rate=tax_rate)
utilidad_DI = utilidad_AI - impuestos
cashf = utilidad_DI + depreciacion
return impuestos, cashf
## income tax rate of 35%
tax_rate = cf.interest_rate(const_value=[35]*6, start=2018, freq='A')
## build the cash flow
ingreso_operativo = cf.cashflow(const_value=[0]+[500]*5, start=2018, freq='A')
## depreciable asset
costs = cf.cashflow(const_value=[200]+[0]*5, start=2018, freq='A')
life = cf.cashflow(const_value=[5]+[0]*5, start=2018, freq='A')
depreciacion_1 = cf.cashflow(const_value=[0]*6, start=2018, freq='A')
impuesto_1, cf_1 = compute_cf(ingreso_operativo, tax_rate, depreciacion_1)
impuesto_1
cf_1
## now consider a depreciable asset
depreciacion_2 = cf.depreciation_sl(costs=costs, life=life)['Depr']
depreciacion_2
impuesto_2, cf_2 = compute_cf(ingreso_operativo, tax_rate, depreciacion_2)
impuesto_2
cf_2
depreciacion_3 = cf.depreciation_soyd(costs=costs, life=life)['Depr']
depreciacion_3
impuesto_3, cf_3 = compute_cf(ingreso_operativo, tax_rate, depreciacion_3)
impuesto_3
cf_3
pd.DataFrame({'impuesto_1':impuesto_1, 'impuesto_2':impuesto_2, 'impuesto_3':impuesto_3}).round(2)
pd.DataFrame({'cf_1':cf_1, 'cf_2':cf_2, 'cf_3':cf_3}).round(2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Straight-line depreciation
Step2: Example: In 2001 an asset is purchased for $200, and in 2006 another asset for $300. The assets have useful lives of 5 and 10 years respectively. Compute the total annual depreciation for each year, starting the calculation in the year 2000.
Step3: Depreciation by sum of the years' digits (accelerated depreciation)
Step4: Effect of depreciation on income tax and cash flow
Step5: Case 1
Step6: Case 2
Step7: Note that in the previous result the after-tax income is higher than in case 1.
Step8: Comparison
|
14,649 | <ASSISTANT_TASK:>
Python Code:
import platform
platform.python_version()
r = 5
a = (r**2) * 3.141596
print a
color_list_1 = set(["White", "Black", "Red"])
color_list_2 = set(["Red", "Green"])
print color_list_1
print color_list_1 - color_list_2
# Resultado = []
# for i in color_list_1:
# if not color_list_1[i] in color_list_2:
# Resultado += color_list_1[i]
# else:
# pass
# print Resultado
import os
wkd = os.getcwd()
wkd.split("/")
my_list = [5,7,8,9,17]
print my_list
suma = 0
for i in my_list:
suma += i
print suma
elemento_a_insertar = 'E'
my_list = [1, 2, 3, 4]
print my_list
print elemento_a_insertar
my_list.insert(0, elemento_a_insertar)
my_list.insert(2, elemento_a_insertar)
my_list.insert(4, elemento_a_insertar)
my_list.insert(6, elemento_a_insertar)
print my_list
N = 3
my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
new_list = [[] for _ in range(N)]
for i, item in enumerate(my_list):
new_list[i % N].append(item)
print new_list
list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]
print max(list_of_lists)
N = 5
Dict = {}
Dict[1] = 1**2
Dict[2] = 2**2
Dict[3] = 3**2
Dict[4] = 4**2
Dict[5] = 5**2
print Dict
dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]
new_dic = {}
new_dic.update(dictionary_list[0])
new_dic.update(dictionary_list[1])
new_dic.update(dictionary_list[2])
print new_dic
dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]
for i in range(0,len(dictionary_list)):
n = dictionary_list[i]['numero']
sqr = n**2
dictionary_list[i]['cuadrado'] = sqr
print dictionary_list
def loca(list1,list2):
print list1 - list2
loca(color_list_1, color_list_2)
def marx(lista):
return max(lista)
print marx(list_of_lists)
def dic(N):
Dict ={}
for i in range(1,N):
Dict[i] = i**2
return Dict
print dic(4)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Compute the area of a circle of radius 5
Step2: 3. Write code that prints every color that is in color_list_1 and not present in color_list_2
Step3: 4. Print one line for each folder that makes up the path where Python is running
Step4: Working with lists
Step5: 6. Insert elemento_a_insertar before each element of my_list
Step6: The expected output is a list like this
Step7: 7. Split my_list into a list of lists, every N elements
Step8: Expected output
Step9: 8. Find the list inside list_of_lists whose elements add up to the largest sum
Step10: Expected output
Step11: Working with dictionaries
Step12: Expected output
Step13: 10. Concatenate the dictionaries in dictionary_list to create a new one
Step14: Expected output
Step15: 11. Add a new key "cuadrado" holding the value of "numero" of each dictionary squared
Step16: Expected output
Step17: Working with functions
Step18: 13. Define and call a function that takes a list of lists as a parameter and solves problem 8
Step19: 14. Define and call a function that takes a parameter N and solves problem 9
|
14,650 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import time
import itertools
import h5py
import numpy as np
from scipy.stats import norm
from scipy.stats import expon
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=1.5)
sns.set_style({"xtick.direction": "in", "ytick.direction": "in"})
h5file = "data/cossio_kl0_Dx1_Dq1.h5"
f = h5py.File(h5file, 'r')
data = np.array(f['data'])
f.close()
fig, ax = plt.subplots(figsize=(12,3))
ax.plot(data[:,0],data[:,1],'.', markersize=1)
ax.set_ylim(-8,8)
ax.set_xlim(0,250000)
ax.set_ylabel('x')
ax.set_xlabel('time')
plt.tight_layout()
fig, ax = plt.subplots(figsize=(6,4))
hist, bin_edges = np.histogram(data[:,1], bins=np.linspace(-6.5,6.5,25),\
density=True)
bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \
for i in range(len(bin_edges)-1)]
ax.plot(bin_centers, -np.log(hist), lw=4)
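# This is a Boltzmann inversion: up to an additive constant, F(x)/k_BT = -ln p(x),
# so the histogram of the sampled coordinate directly gives the free-energy profile.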
ax.set_xlim(-7,7)
ax.set_ylim(0,8)
ax.set_xlabel('x')
_ = ax.set_ylabel('F ($k_BT$)')
plt.tight_layout()
assigned_trj = list(np.digitize(data[:,1],bins=bin_edges))
fig,ax=plt.subplots(2,1, sharex=True)
plt.subplots_adjust(wspace=0, hspace=0)
ax[0].plot(range(100,len(data[:,1][:300])),data[:,1][100:300], lw=2)
ax[1].step(range(100,len(assigned_trj[:300])),assigned_trj[100:300], color="g", lw=2)
ax[0].set_xlim(100,300)
ax[0].set_ylabel('x')
ax[1].set_ylabel("state")
ax[1].set_xlabel("time")
plt.tight_layout()
from mastermsm.trajectory import traj
distraj = traj.TimeSeries(distraj=assigned_trj, dt=1)
distraj.find_keys()
distraj.keys.sort()
from mastermsm.msm import msm
msm_1D=msm.SuperMSM([distraj], sym=True)
for lt in [1, 2, 5, 10, 20, 50, 100]:
msm_1D.do_msm(lt)
msm_1D.msms[lt].do_trans(evecs=True)
msm_1D.msms[lt].boots()
tau_vs_lagt = np.array([[x,msm_1D.msms[x].tauT[0], msm_1D.msms[x].tau_std[0]] \
for x in sorted(msm_1D.msms.keys())])
fig, ax = plt.subplots()
ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10)
#ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray')
ax.fill_between(tau_vs_lagt[:,0],tau_vs_lagt[:,1]+tau_vs_lagt[:,2], \
tau_vs_lagt[:,1]-tau_vs_lagt[:,2], alpha=0.1)
ax.set_xlabel(r'$\Delta$t', fontsize=16)
ax.set_ylabel(r'$\tau$', fontsize=16)
ax.set_xlim(0.8,120)
ax.set_ylim(2e2,500)
ax.set_yscale('log')
ax.set_xscale('log')
plt.tight_layout()
lt=1 # lag time
msm_1D.do_msm(lt)
msm_1D.msms[lt].do_trans(evecs=True)
msm_1D.msms[lt].boots()
plt.figure()
plt.imshow(np.log10(msm_1D.msms[lt].count), interpolation='none', \
cmap='viridis_r', origin='lower')
plt.ylabel('$\it{j}$')
plt.xlabel('$\it{i}$')
plt.title('Count matrix (log), $\mathbf{N}$')
plt.colorbar()
#plt.savefig("../../paper/figures/1d_count.png", dpi=300, transparent=True)
plt.figure()
plt.imshow(np.log10(msm_1D.msms[lt].trans), interpolation='none', \
cmap='viridis_r', vmin=-3, vmax=0, origin='lower')
plt.ylabel('$\it{j}$')
plt.xlabel('$\it{i}$')
plt.title('Transition matrix (log), $\mathbf{T}$')
_ = plt.colorbar()
msm_1D.do_lbrate()
plt.figure()
plt.imshow(msm_1D.lbrate, interpolation='none', \
cmap='viridis_r', origin='lower', vmin=-0.5, vmax=0.1)
plt.ylabel('$\it{j}$')
plt.xlabel('$\it{i}$')
plt.title('Rate matrix, $\mathbf{K}$')
plt.colorbar()
fig, ax = plt.subplots()
ax.errorbar(range(1,len(msm_1D.msms[lt].tauT)+1),msm_1D.msms[lt].tauT, fmt='o-', \
yerr= msm_1D.msms[lt].tau_std, ms=10)
ax.fill_between(range(1,len(msm_1D.msms[1].tauT)+1), \
np.array(msm_1D.msms[lt].tauT)+np.array(msm_1D.msms[lt].tau_std), \
np.array(msm_1D.msms[lt].tauT)-np.array(msm_1D.msms[lt].tau_std))
ax.set_xlabel('Eigenvalue')
ax.set_ylabel(r'$\tau_i$')
ax.set_yscale('log')
plt.tight_layout()
fig, ax = plt.subplots(2,1, sharex=True)
ax[0].plot(-msm_1D.msms[1].rvecsT[:,0])
ax[0].fill_between(range(len(msm_1D.msms[1].rvecsT[:,0])), \
-msm_1D.msms[1].rvecsT[:,0], 0, alpha=0.5)
#ax[0].set_ylim(0,0.43)
ax[1].plot(msm_1D.msms[1].rvecsT[:,1])
ax[1].axhline(0,0,25, c='k', ls='--', lw=1)
ax[1].fill_between(range(len(msm_1D.msms[1].rvecsT[:,1])), \
msm_1D.msms[1].rvecsT[:,1], 0, alpha=0.5)
ax[1].set_xlim(0,25)
ax[1].set_xlabel("state")
ax[0].set_ylabel("$\Psi^R_0$")
ax[1].set_ylabel("$\Psi^R_1$")
plt.tight_layout(h_pad=0)
msm_1D.msms[1].do_rate()
FF = list(range(19,22,1))
UU = list(range(4,7,1))
msm_1D.msms[1].do_pfold(FF=FF, UU=UU)
msm_1D.msms[1].peqT
fig, ax = plt.subplots()
ax.set_xlim(0,25)
axalt = ax.twinx()
axalt.plot(-np.log(msm_1D.msms[1].peqT), alpha=0.3, c='b')
axalt.fill_between(range(len(msm_1D.msms[1].rvecsT[:,1])), \
-np.log(msm_1D.msms[1].peqT), 0, alpha=0.25, color='b')
axalt.fill_between([FF[0], FF[-1]], \
10, 0, alpha=0.15, color='green')
axalt.set_ylim(0,10)
axalt.fill_between([UU[0], UU[-1]], \
10, 0, alpha=0.15, color='red')
ax.plot(msm_1D.msms[1].pfold, c='k', lw=3)
ax.set_ylabel('$\phi$')
axalt.set_ylabel(r'${\beta}G$', color='b')
ax.set_xlabel('state')
msm_1D.msms[1].do_pfold(FF=FF, UU=UU)
print (msm_1D.msms[1].kf)
msm_1D.msms[1].sensitivity(FF=FF, UU=UU)
plt.plot(msm_1D.msms[1].d_lnkf, 'k', lw=3)
plt.fill_between([FF[0], FF[-1]], \
0.2, -0.1, alpha=0.15, color='green')
plt.fill_between([UU[0], UU[-1]], \
0.2, -0.1, alpha=0.15, color='red')
plt.xlabel('state')
plt.ylabel(r'$\alpha$')
plt.xlim(0,25)
plt.ylim(-0.1,0.2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Discretization
Step2: Clearly the system interconverts between two states. We can obtain a potential of mean force from a Boltzmann inversion of the probability distribution.
Step3: Instead of defining two states using an arbitrary cutoff in our single dimension, we discretize the trajectory by assigning frames to microstates. In this case we use as microstates the indexes of a grid on x.
Step4: In this way, the continuous coordinate x is mapped onto a discrete microstate space.
Step5: We then pass the discrete trajectory to the traj module to generate an instance of the TimeSeries class. Using some of its methods, we are able to generate and sort the names of the microstates in the trajectory, which will be useful later.
Step6: Master Equation Model
Step7: First of all, we will create an instance of the SuperMSM class, which will be useful to produce and validate dynamical models.
Step8: For the simplest type of dynamical model validation, we carry out a convergence test to check that the relaxation times $\tau$ do not show a dependency on the lag time. We build the MSM at different lag times $\Delta$t.
Step9: We then check the dependence of the relaxation times of the system, $\tau$ with respect to the choice of lag time $\Delta t$. We find that they are very well converged even from the shortest value of $\Delta t$.
Step10: While this is not the most rigorous test we can do, it already gives some confidence on the dynamical model derived. We can inspect the count and transition matrices at even the shortest lag time
Step11: Analysis of the results
Step12: From the eigenvectors we can also retrieve valuable information. The zeroth eigenvector, $\Psi^R_0$, corresponds to the equilibrium distribution. The slowest mode in our model, captured by the first eigenvector $\Psi^R_1$, corresponds to the transition between the folded and unfolded states of the protein.
Step13: Calculation of committors
Step14: Sensitivity analysis
|
14,651 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
from matplotlib import pylab as plt
from mpl_toolkits import mplot3d
from canonical_gaussian import CanonicalGaussian as CG
from gaussian_mixture import GaussianMixtureModel as GMM
from calc_traj import calc_traj
from range_doppler import *
from util import *
np.set_printoptions(precision=2)
names, p, v, w = load_clubs('clubs.csv')
cpi = 40e-3
T = 12
t_sim = np.arange(0, T, cpi)
t1, p1, v1 = calc_traj(p[0, :], v[0, :], w[0, :], t_sim)
t2, p2, v2 = calc_traj(p[-1, :], v[-1, :], w[-1, :], t_sim)
sensor_locations = np.array([[-10, 28.5, 1], [-15, 30.3, 3],
[200, 30, 1.5], [220, -31, 2],
[-30, 0, 0.5], [150, 10, 0.6]])
rd_1 = range_doppler(sensor_locations, p1, v1)
pm_1 = multilateration(sensor_locations, rd_1[:, :, 1])
vm_1 = determine_velocity(t1, pm_1, rd_1[:, :, 0])
rd_2 = range_doppler(sensor_locations, p2, v2)
pm_2 = multilateration(sensor_locations, rd_2[:, :, 1])
vm_2 = determine_velocity(t2, pm_2, rd_2[:, :, 0])
N = 6
if pm_1.shape < pm_2.shape:
M, _ = pm_1.shape
pm_2 = pm_2[:M]
    vm_2 = vm_2[:M]
else:
M, _ = pm_2.shape
pm_1 = pm_1[:M]
    vm_1 = vm_1[:M]
print(M)
dt = cpi
g = 9.81
sigma_r = 2.5
sigma_q = 0.5
prior_var = 1
A = np.identity(N)
A[0, 3] = A[1, 4] = A[2, 5] = dt
B = np.zeros((N, N))
B[2, 2] = B[5, 5] = 1
R = np.identity(N)*sigma_r
C = np.identity(N)
Q = np.identity(N)*sigma_q
u = np.zeros((6, 1))
u[2] = -0.5*g*(dt**2)
u[5] = -g*dt
#Object 1
mu0_1 = np.zeros((N, 1))
mu0_1[:3, :] = p1[0, :].reshape(3, 1)
mu0_1[3:, :] = v[0, :].reshape(3, 1)
prec0_1 = np.linalg.inv(prior_var*np.identity(N))
h0_1 = (prec0_1)@(mu0_1)
g0_1 = -0.5*(mu0_1.T)@(prec0_1)@(mu0_1) -3*np.log(2*np.pi)
#Object 2
mu0_2 = np.zeros((N, 1))
mu0_2[:3, :] = p2[0, :].reshape(3, 1)
mu0_2[3:, :] = v2[0, :].reshape(3, 1)
prec0_2 = np.linalg.inv(prior_var*np.identity(N))
h0_2 = (prec0_2)@(mu0_2)
g0_2 = -0.5*(mu0_2.T)@(prec0_2)@(mu0_2) -3*np.log(2*np.pi)
print(h0_1)
z_t = np.empty((M, N))
z_t[:, :3] = pm_1
z_t[:, 3:] = vm_1
R_in = np.linalg.inv(R)
P_pred = np.bmat([[R_in, -(R_in)@(A)], [-(A.T)@(R_in), (A.T)@(R_in)@(A)]])
M_pred = np.zeros((2*N, 1))
M_pred[:N, :] = (B)@(u)
h_pred = (P_pred)@(M_pred)
g_pred = -0.5*(M_pred.T)@(P_pred)@(M_pred).flatten() -0.5*np.log( np.linalg.det(2*np.pi*R))
Q_in = np.linalg.inv(Q)
P_meas = np.bmat([[(C.T)@(Q_in)@(C), -(C.T)@(Q_in)], [-(Q_in)@(C), Q_in]])
h_meas = np.zeros((2*N, 1))
g_meas = -0.5*np.log( np.linalg.det(2*np.pi*Q))
L, _ = z_t.shape
X = np.arange(0, L)
Z = np.arange(L-1, 2*L-1)
C_X = [CG([X[0]], [N], h0_1, prec0_1, g0_1)]
C_Z = [CG([X[0]], [N], h0_1, prec0_1, g0_1)]
for i in np.arange(1, L):
C_X.append(CG([X[i], X[i-1]], [N, N], h_pred, P_pred, g_pred))
C_Z.append(CG([X[i], Z[i]], [N, N], h_meas, P_meas, g_meas))
message_out = [C_X[0]]
prediction = [C_X[0]]
mean = np.zeros((N, L))
for i in np.arange(1, L):
#Kalman Filter Algorithm
C_Z[i].introduce_evidence([Z[i]], z_t[i, :])
marg = (message_out[i-1]*C_X[i]).marginalize([X[i-1]])
message_out.append(marg*C_Z[i])
mean[:, i] = (np.linalg.inv(message_out[i]._prec)@(message_out[i]._info)).reshape((N, ))
#For plotting only
prediction.append(marg)
p_e = mean[:3, :]
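# For reference, the message-passing loop above is algebraically the standard
# moment-form Kalman filter, with R playing the role of the transition noise and
# Q the measurement noise in this record's notation. A sketch (not used below):
def kalman_step(m, P, z):
    # predict
    m_pred = A @ m + B @ u
    P_prior = A @ P @ A.T + R
    # update with a measurement z of shape (N,)
    S = C @ P_prior @ C.T + Q
    K = P_prior @ C.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (z.reshape(-1, 1) - C @ m_pred)
    P_new = (np.identity(N) - K @ C) @ P_prior
    return m_new, P_new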
fig = plt.figure(figsize=(25, 25))
ax = plt.axes(projection='3d')
ax.plot(p1[:, 0], p1[:, 1], p1[:, 2])
ax.plot(p_e[0, :], p_e[1, :], p_e[2, :], 'or')
ax.set_xlabel('x (m)', fontsize = '20')
ax.set_ylabel('y (m)', fontsize = '20')
ax.set_zlabel('z (m)', fontsize = '20')
ax.set_title('Kalman Filtering', fontsize = '20')
ax.set_ylim([-1, 1])
ax.legend(['Actual Trajectory', 'Estimated trajectory'])
plt.show()
D = 100
t = np.linspace(0, 2*np.pi, D)
xz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D))
gaussians = message_out + prediction + C_Z
ellipses = []
for g in gaussians:
g._vars = [1, 2, 3, 4]
g._dims = [1, 1, 1, 3]
c = g.marginalize([2, 4])
cov = np.linalg.inv(c._prec)
mu = (cov)@(c._info)
U, S, _ = np.linalg.svd(cov)
L = np.diag(np.sqrt(S))
ellipses.append(np.dot((U)@(L), xz) + mu)
for i in np.arange(0, M):
plt.figure(figsize= (15, 15))
message_out = ellipses[i]
prediction = ellipses[i+M]
measurement = ellipses[i+2*M]
plt.plot(p1[:, 0], p1[:, 2], 'k--', label='Trajectory')
plt.plot(message_out[0, :], message_out[1, :], 'r', label='After measurement update')
plt.plot(prediction[0, :], prediction[1, :], 'b', label = 'Recursive prediction')
plt.plot(measurement[0, :], measurement[1, :], 'g', label='Measurement')
plt.xlim([-3.5, 250])
plt.ylim([-3.5, 35])
plt.grid(True)
plt.xlabel('x (m)')
plt.ylabel('z (m)')
plt.legend(loc='upper left')
plt.title('x-z position for t = %d'%i)
plt.savefig('images/kalman/%d.png'%i, format = 'png')
plt.close()
fig = plt.figure(figsize=(25, 25))
ax = plt.axes(projection='3d')
ax.plot(p1[:, 0], p1[:, 1], p1[:, 2])
ax.plot(p2[:, 0], p2[:, 1], p2[:, 2], 'or')
ax.set_xlabel('x (m)', fontsize = '20')
ax.set_ylabel('y (m)', fontsize = '20')
ax.set_zlabel('z (m)', fontsize = '20')
ax.set_title('', fontsize = '20')
ax.set_ylim([-20, 20])
ax.legend(['Target 1', 'Target 2'])
plt.show()
L = 10
X_1 = np.arange(0, L).tolist()
X_2 = np.arange(L, 2*L).tolist()
Z_1 = np.arange(2*L, 3*L).tolist()
Z_2 = np.arange(3*L, 4*L).tolist()
z_1 = np.empty((M, N))
z_1[:, :3] = pm_1
z_1[:, 3:] = vm_1
z_2 = np.empty((M, N))
z_2[:, :3] = pm_2
z_2[:, 3:] = vm_2
C_X = [CG([X_1[0]], [N], h0_1, prec0_1, g0_1)*CG([X_2[0]], [N], h0_2, prec0_2, g0_2)]
for i in np.arange(1, L):
C_X.append(CG([X_1[i], X_1[i-1]], [N, N], h_pred, P_pred, g_pred)
*CG([X_2[i], X_2[i-1]], [N, N], h_pred, P_pred, g_pred))
C_Z = [None]
Z_11 = CG([X_1[1], Z_1[1]], [N, N], h_meas, P_meas, g_meas)
Z_11.introduce_evidence([Z_1[1]], z_1[1, :])
Z_22 = CG([X_2[1], Z_2[1]], [N, N], h_meas, P_meas, g_meas)
Z_22.introduce_evidence([Z_2[1]], z_2[1, :])
C_Z.append(Z_11*Z_22)
for i in np.arange(2, L):
Z_11 = CG([X_1[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas)
Z_11.introduce_evidence([Z_1[i]], z_1[i, :])
Z_22 = CG([X_2[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas)
Z_22.introduce_evidence([Z_2[i]], z_2[i, :])
Z_12 = CG([X_1[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas)
Z_12.introduce_evidence([Z_2[i]] ,z_2[i, :])
Z_21 = CG([X_2[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas)
Z_21.introduce_evidence([Z_1[i]], z_1[i, :])
C_Z.append(GMM([0.5*(Z_11*Z_22), 0.5*(Z_12*Z_21)]))
predict = [C_X[0]]
for i in np.arange(1, L):
marg = (C_X[i]*predict[i-1]).marginalize([X_1[i-1], X_2[i-1]])
predict.append(C_Z[i]*marg)
D = 100
t = np.linspace(0, 2*np.pi, D)
xz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D))
ellipses = []
norms = []
i = 0
for p in predict:
if isinstance(p, GMM):
mix = p._mix
else:
mix = [p]
time_step = []
for m in mix:
m._vars = [1, 2, 3, 4]
m._dims = [1, 1, 1, 9]
c = m.marginalize([2, 4])
cov = np.linalg.inv(c._prec)
mu = (cov)@(c._info)
if i == 0:
print(cov)
i = 1
U, S, _ = np.linalg.svd(cov)
lambda_ = np.diag(np.sqrt(S))
norms.append(c._norm)
time_step.append(np.dot((U)@(lambda_), xz) + mu)
ellipses.append(time_step)
for i in np.arange(0, L):
plt.figure(figsize= (15, 15))
plt.plot(p1[1:, 0], p1[1:, 2], 'or', label='Trajectory 1')
plt.plot(p2[1:, 0], p2[1:, 2], 'og', label='Trajectory 2')
for e in ellipses[i]:
plt.plot(e[0, :], e[1, :], 'b')
plt.xlim([-3.5, 25])
plt.ylim([-3.5, 15])
plt.grid(True)
plt.legend(loc='upper left')
plt.xlabel('x (m)')
plt.ylabel('z (m)')
plt.title('x-z position for t = %d'%(i))
plt.savefig('images/two_objects/%d.png'%i, format = 'png')
plt.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Target information
Step2: The Kalman Filter Model
Step3: Motion and measurement models
Step4: Priors
Step5: Linear Kalman Filtering
Step6: The Kalman Filter algorithm
Step7: <img src="images/kalman/kalman_hl.gif">
|
14,652 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import arrow # way better than datetime
import numpy as np
import random
import re
%run helper_functions.py
import string
new_df = unpickle_object("new_df.pkl") # this loads up the dataframe from our previous notebook
new_df.head() #sorted first on date and then time!
new_df.iloc[0, 3]
#we need to remove all links in a tweet!
regex = r"http\S+"
subset = ""
removed_links = list(map(lambda x: re.sub(regex, subset, x), list(new_df['tweet'])))
removed_links = list(map(str.strip, removed_links))
new_df['tweet'] = removed_links
new_df.iloc[0, 3] # we can see here that the link has been removed!
new_df.iloc[1047748, [1, 3]] #example of duplicate entry - different handles, same tweets
new_df.iloc[1047749, [1, 3]]
#this illustrates only one example of duplicates in the data!
duplicate_indicies = []
for index, value in enumerate(new_df.index):
if "Multiplayer #Poker" in new_df.iloc[value, 3]:
duplicate_indicies.append(index)
new_df.iloc[duplicate_indicies, [1,3]]
tweet_list = list(new_df['tweet']) #lets first make a list of the tweets we need to remove duplicates from
string.punctuation
remove_punctuaton = '!"$%&\'()*+,-./:;<=>?@[\\]โโ^_`{|}~' # same as string.punctuation, but without # - I want hashtags!
set_list = []
clean_tweet_list = []
translator = str.maketrans('', '', remove_punctuaton) #very fast punctuation remover!
for word in tweet_list:
list_form = word.split() #turns the word into a list
to_process = [x for x in list_form if not x.startswith("@")] #removes handles
to_process_2 = [x for x in to_process if not x.startswith("RT")] #removed retweet indicator
string_form = " ".join(to_process_2) #back into a string
set_form = set(string_form.translate(translator).strip().lower().split()) #this is the magic!
clean_tweet_list.append(string_form.translate(translator).strip().lower())
set_list.append(tuple(set_form)) #need to make it a tuple so it's hashable!
new_df['tuple_version_tweet'] = set_list
new_df['clean_tweet_V1'] = clean_tweet_list
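# A tiny illustration of the signature idea with toy strings (not from the data):
# tweets whose words only differ in order reduce to the same word set, and the
# tuple form above exists only so that pandas can hash it in drop_duplicates.
toy_a = "Multiplayer #Poker tonight".translate(translator).strip().lower().split()
toy_b = "tonight multiplayer #poker".translate(translator).strip().lower().split()
set(toy_a) == set(toy_b)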
new_df.head()
new_df.iloc[1047748, 4] # we have extracted the core text from the tweets! YAY!
new_df.iloc[1047749, 4]
new_df.iloc[1047748, 4] == new_df.iloc[1047748, 4] #this is perfect!
new_df.shape #dimensions before duplicate removal!
test_df = new_df.drop_duplicates(subset='tuple_version_tweet', keep="first") #keep the first occurence
#otherwise drop rows that have matching tuples!
#lets use the example from before! - it only occurs once now!
for index, value in enumerate(test_df.iloc[:, 3]):
if "Multiplayer #Poker" in value:
print(test_df.iloc[index, [1,3]])
new_df.shape
test_df.shape
((612644-1049878)/1049878)*100 #41% reduction!
pickle_object(test_df, "no_duplicates_df")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We see from the above code that I have removed duplicates by creating a tuple of the set of words in each tweet, after having removed the URLs, punctuation, etc.
Step2: Note
|
14,653 | <ASSISTANT_TASK:>
Python Code:
raw_data = {'dt': ['2017-01-15 00:06:08',
'2017-01-15 01:09:08',
'2017-01-16 02:07:08',
'2017-01-16 02:07:09',
'2017-01-16 03:04:08',
'2017-01-16 03:04:09',
'2017-01-15 01:06:08'],
'type': ['VOLT',
'VOLT',
'PUMP',
'PUMP',
'PUMP',
'PUMP',
'VOLT'],
'value': [22.4,
34.3,
0.,
1.,
1.,
0.,
34.3]}
df = pd.DataFrame(raw_data, index=raw_data['dt'], columns = ['type', 'value'])
df
df.type = df.type.astype('category')
plt.figure()
df[df.type=='VOLT'].plot(rot=90,title='NoiseReading',style='o')
plt.show()
plt.savefig('DataFramePlotting01.png')
plt.figure()
df[df.type=='PUMP'].plot(rot=90,title='Pump State',style='-')
plt.show()
plt.savefig('DataFramePlotting02.png')
group = df.groupby(['type'])
group.plot()
plt.show()
plt.savefig('DataFramePlotting03.png')
fig, axs = plt.subplots(1,2,sharex=False)
group.get_group("PUMP").plot(ax=axs[0], y='value', rot=90,title='Pump State',style='-')
group.get_group("VOLT").plot(ax=axs[1], y='value', rot=90,title='Volt Noise',style='.')
plt.show()
plt.savefig('DataFramePlotting04.png')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert the type column to a category (similar to factor in R)
Step2: Plot the noise readings as a point plot
Step3: Plot the pump state changes as a line plot.
Step4: from notes found here
|
14,654 | <ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import pyNastran
from pyNastran.op2.op2 import read_op2
pkg_path = pyNastran.__path__[0]
model_path = os.path.join(pkg_path, '..', 'models')
solid_bending_op2 = os.path.join(model_path, 'solid_bending', 'solid_bending.op2')
solid_bending = read_op2(solid_bending_op2, combine=False, debug=False)
print(solid_bending.displacements.keys())
solid_bending_op2 = os.path.join(model_path, 'solid_bending', 'solid_bending.op2')
solid_bending2 = read_op2(solid_bending_op2, combine=True, debug=False)
print(solid_bending2.displacements.keys())
op2_filename = os.path.join(model_path, 'sol_101_elements', 'buckling_solid_shell_bar.op2')
model = read_op2(op2_filename, combine=True, debug=False, build_dataframe=True)
stress_keys = model.cquad4_stress.keys()
print (stress_keys)
# isubcase, analysis_code, sort_method, count, subtitle
key0 = (1, 1, 1, 0, 'DEFAULT1')
key1 = (1, 8, 1, 0, 'DEFAULT1')
stress_static = model.cquad4_stress[key0].data_frame
stress_transient = model.cquad4_stress[key1].data_frame
# The final calculated factor:
# Is it a None or not?
# This defines if it's static or transient
print('stress_static.nonlinear_factor = %s' % model.cquad4_stress[key0].nonlinear_factor)
print('stress_transient.nonlinear_factor = %s' % model.cquad4_stress[key1].nonlinear_factor)
print('data_names = %s' % model.cquad4_stress[key1].data_names)
print('loadsteps = %s' % model.cquad4_stress[key1].lsdvmns)
print('eigenvalues = %s' % model.cquad4_stress[key1].eigrs)
# Sets default precision of real numbers for pandas output\n"
pd.set_option('precision', 2)
stress_static.head(20)
# Sets default precision of real numbers for pandas output\n"
pd.set_option('precision', 3)
#import numpy as np
#np.set_printoptions(formatter={'all':lambda x: '%g'})
stress_transient.head(20)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Solid Bending
Step2: Single Subcase Buckling Example
Step3: Keys
Step4: Static Table
Step5: Transient Table
|
14,655 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 Franรงois Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
!pip install pyyaml h5py  # required to save models in the HDF5 format
import os
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
# ๊ฐ๋จํ Sequential ๋ชจ๋ธ์ ์ ์ํฉ๋๋ค
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
# ๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ๋ง๋ญ๋๋ค
model = create_model()
# ๋ชจ๋ธ ๊ตฌ์กฐ๋ฅผ ์ถ๋ ฅํฉ๋๋ค
model.summary()
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# ๋ชจ๋ธ์ ๊ฐ์ค์น๋ฅผ ์ ์ฅํ๋ ์ฝ๋ฐฑ ๋ง๋ค๊ธฐ
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
verbose=1)
# ์๋ก์ด ์ฝ๋ฐฑ์ผ๋ก ๋ชจ๋ธ ํ๋ จํ๊ธฐ
model.fit(train_images,
train_labels,
epochs=10,
validation_data=(test_images,test_labels),
          callbacks=[cp_callback])  # pass the callback to training
# ์ตํฐ๋ง์ด์ ์ ์ํ๋ฅผ ์ ์ฅํ๋ ๊ฒ๊ณผ ๊ด๋ จ๋์ด ๊ฒฝ๊ณ ๊ฐ ๋ฐ์ํ ์ ์์ต๋๋ค.
# ์ด ๊ฒฝ๊ณ ๋ (๊ทธ๋ฆฌ๊ณ ์ด ๋
ธํธ๋ถ์ ๋ค๋ฅธ ๋น์ทํ ๊ฒฝ๊ณ ๋) ์ด์ ์ฌ์ฉ ๋ฐฉ์์ ๊ถ์ฅํ์ง ์๊ธฐ ์ํจ์ด๋ฉฐ ๋ฌด์ํด๋ ์ข์ต๋๋ค.
os.listdir(checkpoint_dir)
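# Note: the checkpoint is stored as an index file plus data shard(s)
# (typically cp.ckpt.index and cp.ckpt.data-00000-of-00001, next to a small
# file named `checkpoint`), rather than as a single cp.ckpt file.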
# ๊ธฐ๋ณธ ๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ๋ง๋ญ๋๋ค
model = create_model()
# ๋ชจ๋ธ์ ํ๊ฐํฉ๋๋ค
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("ํ๋ จ๋์ง ์์ ๋ชจ๋ธ์ ์ ํ๋: {:5.2f}%".format(100*acc))
# ๊ฐ์ค์น ๋ก๋
model.load_weights(checkpoint_path)
# ๋ชจ๋ธ ์ฌํ๊ฐ
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("๋ณต์๋ ๋ชจ๋ธ์ ์ ํ๋: {:5.2f}%".format(100*acc))
# ํ์ผ ์ด๋ฆ์ ์ํฌํฌ ๋ฒํธ๋ฅผ ํฌํจ์ํต๋๋ค(`str.format` ํฌ๋งท)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# ๋ค์ฏ ๋ฒ์งธ ์ํฌํฌ๋ง๋ค ๊ฐ์ค์น๋ฅผ ์ ์ฅํ๊ธฐ ์ํ ์ฝ๋ฐฑ์ ๋ง๋ญ๋๋ค
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
period=5)
# ์๋ก์ด ๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ๋ง๋ญ๋๋ค
model = create_model()
# Save the weights using the `checkpoint_path` format
model.save_weights(checkpoint_path.format(epoch=0))
# ์๋ก์ด ์ฝ๋ฐฑ์ ์ฌ์ฉํ์ฌ ๋ชจ๋ธ์ ํ๋ จํฉ๋๋ค
model.fit(train_images,
train_labels,
epochs=50,
callbacks=[cp_callback],
validation_data=(test_images,test_labels),
verbose=0)
os.listdir(checkpoint_dir)
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
# ์๋ก์ด ๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ๋ง๋ญ๋๋ค
model = create_model()
# ์ด์ ์ ์ ์ฅํ ๊ฐ์ค์น๋ฅผ ๋ก๋ํฉ๋๋ค
model.load_weights(latest)
# ๋ชจ๋ธ์ ์ฌํ๊ฐํฉ๋๋ค
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("๋ณต์๋ ๋ชจ๋ธ์ ์ ํ๋: {:5.2f}%".format(100*acc))
# ๊ฐ์ค์น๋ฅผ ์ ์ฅํฉ๋๋ค
model.save_weights('./checkpoints/my_checkpoint')
# ์๋ก์ด ๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ๋ง๋ญ๋๋ค
model = create_model()
# ๊ฐ์ค์น๋ฅผ ๋ณต์ํฉ๋๋ค
model.load_weights('./checkpoints/my_checkpoint')
# ๋ชจ๋ธ์ ํ๊ฐํฉ๋๋ค
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("๋ณต์๋ ๋ชจ๋ธ์ ์ ํ๋: {:5.2f}%".format(100*acc))
# ์๋ก์ด ๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ๋ง๋ค๊ณ ํ๋ จํฉ๋๋ค
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# SavedModel๋ก ์ ์ฒด ๋ชจ๋ธ์ ์ ์ฅํฉ๋๋ค
!mkdir -p saved_model
model.save('saved_model/my_model')
# the my_model directory
!ls saved_model
# contains an assets folder, saved_model.pb, and a variables folder
!ls saved_model/my_model
new_model = tf.keras.models.load_model('saved_model/my_model')
# ๋ชจ๋ธ ๊ตฌ์กฐ๋ฅผ ํ์ธํฉ๋๋ค
new_model.summary()
# ๋ณต์๋ ๋ชจ๋ธ์ ํ๊ฐํฉ๋๋ค
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('๋ณต์๋ ๋ชจ๋ธ์ ์ ํ๋: {:5.2f}%'.format(100*acc))
print(new_model.predict(test_images).shape)
# ์๋ก์ด ๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ๋ง๋ค๊ณ ํ๋ จํฉ๋๋ค
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# ์ ์ฒด ๋ชจ๋ธ์ HDF5 ํ์ผ๋ก ์ ์ฅํฉ๋๋ค
# The '.h5' extension indicates that the model is saved in the HDF5 format
model.save('my_model.h5')
# ๊ฐ์ค์น์ ์ตํฐ๋ง์ด์ ๋ฅผ ํฌํจํ์ฌ ์ ํํ ๋์ผํ ๋ชจ๋ธ์ ๋ค์ ์์ฑํฉ๋๋ค
new_model = tf.keras.models.load_model('my_model.h5')
# ๋ชจ๋ธ ๊ตฌ์กฐ๋ฅผ ์ถ๋ ฅํฉ๋๋ค
new_model.summary()
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('๋ณต์๋ ๋ชจ๋ธ์ ์ ํ๋: {:5.2f}%'.format(100*acc))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ๋ชจ๋ธ ์ ์ฅ๊ณผ ๋ณต์
Step2: ์์ ๋ฐ์ดํฐ์
๋ฐ๊ธฐ
Step3: ๋ชจ๋ธ ์ ์
Step4: ํ๋ จํ๋ ๋์ ์ฒดํฌํฌ์ธํธ ์ ์ฅํ๊ธฐ
Step5: ์ด ์ฝ๋๋ ํ
์ํ๋ก ์ฒดํฌํฌ์ธํธ ํ์ผ์ ๋ง๋ค๊ณ ์ํฌํฌ๊ฐ ์ข
๋ฃ๋ ๋๋ง๋ค ์
๋ฐ์ดํธํฉ๋๋ค
Step6: ๋ ๋ชจ๋ธ์ด ๋์ผํ ์ํคํ
์ฒ๋ฅผ ๊ณต์ ํ๊ธฐ๋ง ํ๋ค๋ฉด ๋ ๋ชจ๋ธ ๊ฐ์ ๊ฐ์ค์น๋ฅผ ๊ณต์ ํ ์ ์์ต๋๋ค. ๋ฐ๋ผ์ ๊ฐ์ค์น ์ ์ฉ์์ ๋ชจ๋ธ์ ๋ณต์ํ ๋ ์๋ ๋ชจ๋ธ๊ณผ ๋์ผํ ์ํคํ
์ฒ๋ก ๋ชจ๋ธ์ ๋ง๋ ๋ค์ ๊ฐ์ค์น๋ฅผ ์ค์ ํฉ๋๋ค.
Step7: ์ฒดํฌํฌ์ธํธ์์ ๊ฐ์ค์น๋ฅผ ๋ก๋ํ๊ณ ๋ค์ ํ๊ฐํด ๋ณด์ฃ
Step8: ์ฒดํฌํฌ์ธํธ ์ฝ๋ฐฑ ๋งค๊ฐ๋ณ์
Step9: ๋ง๋ค์ด์ง ์ฒดํฌํฌ์ธํธ๋ฅผ ํ์ธํด ๋ณด๊ณ ๋ง์ง๋ง ์ฒดํฌํฌ์ธํธ๋ฅผ ์ ํํด ๋ณด๊ฒ ์ต๋๋ค
Step10: ์ฐธ๊ณ
Step11: ์ด ํ์ผ๋ค์ ๋ฌด์์ธ๊ฐ์?
Step12: ์ ์ฒด ๋ชจ๋ธ ์ ์ฅํ๊ธฐ
Step13: SavedModel ํ์์ protobuf ๋ฐ์ด๋๋ฆฌ์ TensorFlow ์ฒดํฌํฌ์ธํธ๋ฅผ ํฌํจํ๋ ๋๋ ํ ๋ฆฌ์
๋๋ค. ์ ์ฅ๋ ๋ชจ๋ธ ๋๋ ํ ๋ฆฌ๋ฅผ ๊ฒ์ฌํฉ๋๋ค.
Step14: ์ ์ฅ๋ ๋ชจ๋ธ๋ก๋ถํฐ ์๋ก์ด ์ผ๋ผ์ค ๋ชจ๋ธ์ ๋ก๋ํฉ๋๋ค
Step15: ๋ณต์๋ ๋ชจ๋ธ์ ์๋ณธ ๋ชจ๋ธ๊ณผ ๋์ผํ ๋งค๊ฐ๋ณ์๋ก ์ปดํ์ผ๋์ด ์์ต๋๋ค. ์ด ๋ชจ๋ธ์ ํ๊ฐํ๊ณ ์์ธก์ ์ฌ์ฉํด ๋ณด์ฃ
Step16: HDF5 ํ์ผ๋ก ์ ์ฅํ๊ธฐ
Step17: ์ด์ ์ด ํ์ผ๋ก๋ถํฐ ๋ชจ๋ธ์ ๋ค์ ๋ง๋ค์ด ๋ณด์ฃ
Step18: ์ ํ๋๋ฅผ ํ์ธํด ๋ณด๊ฒ ์ต๋๋ค
|
14,656 | <ASSISTANT_TASK:>
Python Code:
import torch
import numpy as np
from IPython import embed
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
np.random.seed(1)
def f(x):
"A function to predict"
return x*np.sin(x)
#X = [1., 3., 5., 6., 7.] # create 2D array shape: [1,n]
X = np.atleast_2d([1., 3., 5., 6., 7., 8.]).T # shape: 2D
Y = f(X).ravel() # shape 1D
# data
n = 1000
# Compute random values of x
# why would choosing x as random numbers mess things up?
# When predicting the GP, the order of points should be irrelevant
x = 10*np.random.uniform(size=n)
# Sort x, or else the plotting will not work
#print("x unsorted: ", x[0:10], len(x))
x.sort()
x_test = x.reshape(-1,1) # shape 2D
#print("x sorted: ", x[0:10], len(x))
# Instantiate a Gaussian Process
kernel = C(1., (1.e-3,1.e3)) * RBF(10, (1.e-2, 1.e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9)
# Fit Gaussian Process (careful with array shape)
gp.fit(X, Y)
# Compute GP at n points. Y_pred: shape 1D
y_pred, sigma = gp.predict(x_test, return_std=True)
# Plot the Gaussian Process
# Use subplots() so that figure does not disappear when rerunning the cell
# See https://github.com/matplotlib/jupyter-matplotlib/issues/60
fig, ax = plt.subplots()
ax.plot(x, f(x), 'r:', label=r'$f(x)=x\,\sin(x)$')
ax.plot(X, Y, 'r.', markersize=10, label='Observations')
ax.plot(x, y_pred, 'r.', markersize=1)
xpoly = np.concatenate([x, x[::-1]])
ypoly = np.concatenate([y_pred-1.96*sigma, (y_pred+1.96*sigma)[::-1]])
ax.fill(xpoly, ypoly, alpha=.5, fc='b',
ec='None', label='95% Confidence Level')
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax.set_ylim(-10,20)
ax.legend(loc='upper left')
plt.show()
# From Nathan Crock
class P():
def __init__(self, m, s):
self.m = m # Slope of line
self.s = s # Standard deviation of injected noise
def sample(self, size):
x = np.random.uniform(size=size)
y = []
for xi in x:
y.append(np.random.normal(self.m*xi, self.s))
return(x, y)
m = 2.7
s = 0.2
p = P(m,s)
x,y = p.sample(20)
# Make sure I can execute every cell multiple times in a row. Sometimes,
# if one reuses a variable defined in an earlier cell, variables can be overwritten
# which can lead to problems.
from sklearn.model_selection import train_test_split
# The test+train dataset was created using a Gaussian Process
xdata = torch.tensor(x_test) # (n, 1)
#print(xdata[0:20]) ;
ydata = torch.tensor(y_pred.copy()).reshape(-1,1) # (n,)
#print(xdata)
train_size = .01
# Data is shuffled by default
xtrain, xtest, ytrain, ytest = train_test_split(xdata.numpy(), ydata.numpy(), train_size=train_size)
#x_train = torch.tensor(x_train, dtype=torch.float)
xtrain = torch.from_numpy(xtrain).float()
xtest = torch.from_numpy(xtest).float()
ytrain = torch.from_numpy(ytrain).float()
ytest = torch.from_numpy(ytest).float()
#print(type(xtrain))
xtrain = xtrain.reshape(-1,1)
ytrain = ytrain.reshape(-1,1)
#print(xtrain.shape, ytrain.shape);
from torch.autograd import Variable
import torch.nn as nn
class linearRegression(torch.nn.Module):
def __init__(self, inputSize, hiddenSize, outputSize):
super(linearRegression, self).__init__()
self.input_linear1 = torch.nn.Linear(inputSize, hiddenSize)
self.input_linear2 = torch.nn.Linear(hiddenSize, outputSize)
self.sigmoid = nn.Tanh()
def forward(self, x):
x = self.input_linear1(x)
x = self.sigmoid(x) # I should be able to drive loss to zero
x = self.input_linear2(x)
return x
inputDim = 1
hiddenDim = 10
outputDim = 1
learningRate = .1
epochs = 500
model = linearRegression(inputDim, hiddenDim, outputDim)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learningRate)
#print(model.state_dict())
losses = []
for epoch in range(epochs):
# Converting inputs and labels to Variable
if torch.cuda.is_available():
inputs = Variable(xtrain)
labels = Variable(ytrain)
else:
inputs = Variable(xtrain)
labels = Variable(ytrain)
    # Clear gradient buffers because we don't want gradients from the previous epoch to carry forward or accumulate
optimizer.zero_grad()
# get output from the model, given the inputs
model.train()
outputs = model(inputs)
# get loss for the predicted output
loss = criterion(outputs, labels)
losses.append(loss.item())
#print(loss.item(), epoch)
# get gradients w.r.t to parameters
loss.backward()
# update parameters
optimizer.step()
#print('epoch {}, loss {}'.format(epoch, loss.item()))
#print(model.state_dict())
#print("--------------------")
# test the model
with torch.no_grad():
predicted = model(Variable(xtest))
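    # A quick held-out sanity check (a sketch; ytest keeps the (n, 1) shape it
    # was given further up, so the shapes line up with the model output):
    test_mse = criterion(predicted, ytest).item()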
plt.clf()
fig, axes = plt.subplots(2,1, figsize=(10,8))
ax = axes[0]
ax.plot(xtrain, ytrain, 'go', markersize=5, label='Train data', alpha=0.5)
ax.plot(xtest, predicted, '.', markersize=1, label='Predictions', alpha=0.5)
# plot base curve
ax.plot(xdata, ydata, 'b--', lw=1, label='GP mean curve')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend(loc='best')
ax.set_ylim(-10,20)
ax = axes[1]
xl = np.linspace(0, len(losses)-1, len(losses))
ax.plot(xl, losses)
ax.set_ylabel('Loss')
plt.show()
#print(losses)
print("train_data len: ", len(xtrain))
print("test_data len: ", len(xtest))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a Gaussian process with a small amount of training points.
Step2: Construct a Neural network to do regression using Pytorch
|
14,657 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
learning_rate = 0.1
training_epochs = 2000
x1_label1 = np.random.normal(3, 1, 1000)
x2_label1 = np.random.normal(2, 1, 1000)
x1_label2 = np.random.normal(7, 1, 1000)
x2_label2 = np.random.normal(6, 1, 1000)
x1s = np.append(x1_label1, x1_label2)
x2s = np.append(x2_label1, x2_label2)
ys = np.asarray([0.] * len(x1_label1) + [1.] * len(x1_label2))
X1 = tf.placeholder(tf.float32, shape=(None,), name="x1")
X2 = tf.placeholder(tf.float32, shape=(None,), name="x2")
Y = tf.placeholder(tf.float32, shape=(None,), name="y")
w = tf.Variable([0., 0., 0.], name="w", trainable=True)
y_model = tf.sigmoid(-(w[2] * X2 + w[1] * X1 + w[0]))
cost = tf.reduce_mean(-tf.log(y_model * Y + (1 - y_model) * (1 - Y)))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
prev_err = 0
for epoch in range(training_epochs):
err, _ = sess.run([cost, train_op], {X1: x1s, X2: x2s, Y: ys})
if epoch % 100 == 0:
print(epoch, err)
if abs(prev_err - err) < 0.0001:
break
prev_err = err
w_val = sess.run(w, {X1: x1s, X2: x2s, Y: ys})
x1_boundary, x2_boundary = [], []
with tf.Session() as sess:
for x1_test in np.linspace(0, 10, 20):
for x2_test in np.linspace(0, 10, 20):
z = sess.run(tf.sigmoid(-x2_test*w_val[2] - x1_test*w_val[1] - w_val[0]))
if abs(z - 0.5) < 0.05:
x1_boundary.append(x1_test)
x2_boundary.append(x2_test)
plt.scatter(x1_boundary, x2_boundary, c='b', marker='o', s=20)
plt.scatter(x1_label1, x2_label1, c='r', marker='x', s=20)
plt.scatter(x1_label2, x2_label2, c='g', marker='1', s=20)
plt.show()
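# For reference: y_model is sigmoid(-(w2*x2 + w1*x1 + w0)), so the 0.5 decision
# boundary is the line w0 + w1*x1 + w2*x2 = 0. A closed-form sketch using the
# fitted w_val from the session above:
x1_line = np.linspace(0, 10, 20)
x2_line = -(w_val[0] + w_val[1] * x1_line) / w_val[2]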
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define positive and negative to classify 2D data points
Step2: Define placeholders, variables, model, and the training op
Step3: Train the model on the data in a session
Step4: Here's one hacky, but simple, way to figure out the decision boundary of the classifier
Step5: Ok, enough code. Let's see a pretty plot
|
14,658 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
tf.executing_eagerly()
import numpy as np
from matplotlib.pyplot import *
%config InlineBackend.figure_format = 'retina'
matplotlib.pyplot.style.use("dark_background")
import jax
from jax import random
from jax import numpy as jnp
from colabtools import adhoc_import
# import tensforflow_datasets
from inference_gym import using_jax as gym
# import tensorflow as tf
from tensorflow_probability.python.internal import prefer_static as ps
from tensorflow_probability.python.internal import unnest
import tensorflow_probability as _tfp
tfp = _tfp.substrates.jax
tfd = tfp.distributions
tfb = tfp.bijectors
tfp_np = _tfp.substrates.numpy
tfd_np = tfp_np.distributions
from jax.experimental.ode import odeint
from jax import vmap
import arviz as az
from tensorflow_probability.python.internal.unnest import get_innermost
# Define nested Rhat for one parameter.
# Assume for now the indexed parameter is a scalar.
def nested_rhat(result_state, num_super_chains, index_param, num_samples,
warmup_length = 0):
state_param = result_state[index_param][
warmup_length:(warmup_length + num_samples), :, :]
num_samples = state_param.shape[0]
num_chains = state_param.shape[1]
num_sub_chains = num_chains // num_super_chains
state_param = state_param.reshape(num_samples, -1, num_sub_chains, 1)
mean_chain = np.mean(state_param, axis = (0, 3))
between_chain_var = np.var(mean_chain, axis = 1, ddof = 1)
within_chain_var = np.var(state_param, axis = (0, 3), ddof = 1)
total_chain_var = between_chain_var + np.mean(within_chain_var, axis = 1)
mean_super_chain = np.mean(state_param, axis = (0, 1, 3))
between_super_chain_var = np.var(mean_super_chain, ddof = 1)
return np.sqrt(1 + between_super_chain_var / np.mean(total_chain_var))
# WARNING: this is a very poor estimate for ESS, and we should note
# W / B isn't typically used to estimate ESS.
def ess_per_super_chain(nRhat):
return 1 / (np.square(nRhat) - 1)
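# Hypothetical usage of the two helpers above, once a sampled `result_state`
# (a list of [num_draws, num_chains, 1]-shaped arrays) is available:
# nrhat_k1 = nested_rhat(result_state, num_super_chains=2, index_param=0,
#                        num_samples=1000, warmup_length=0)
# ess_per_super_chain(nrhat_k1)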
# NOTE: need to pass the initial time as the first element of t.
t = np.array([0., 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 5, 6])
y0 = np.array([100.0, 0.0])
theta = np.array([1.5, 0.25])
def system(state, time, theta):
k1 = theta[0]
k2 = theta[1]
return jnp.array([
- k1 * state[0] ,
k1 * state[0] - k2 * state[1]
])
use_analytical_sln = True
if (use_analytical_sln):
def ode_map(k1, k2):
sln = jnp.exp(- k2 * t) / (k1 - k2) * (y0[0] * k1 * (1 - jnp.exp((k2 - k1) * t)) + (k1 - k2) * y0[1])
return sln[1:]
else:
def ode_map(k1, k2):
theta = jnp.array([k1, k2])
return odeint(system, y0, t, theta, mxstep = 1e6)[1:, 1]
states = ode_map(k1 = theta[0], k2 = theta[1])
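# A quick consistency check between the closed-form map and the numerical
# integrator (a sketch with a loose, eyeball tolerance; both are defined above):
states_numeric = odeint(system, y0, t, theta)[1:, 1]
jnp.max(jnp.abs(states - states_numeric))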
random.normal(random.PRNGKey(37272710), (states.shape[0],))
jnp.log(states)
# Simulate data
states = ode_map(k1 = theta[0], k2 = theta[1])
sigma = 0.1
log_y = sigma * random.normal(random.PRNGKey(37272710), (states.shape[0],)) \
+ jnp.log(states)
y = jnp.exp(log_y)
# print(y)
figure(figsize = [6, 6])
plot(t[1:], states)
plot(t[1:], y, 'o')
show()
model = tfd.JointDistributionSequentialAutoBatched([
# Priors
tfd.LogNormal(loc = jnp.log(1.), scale = 0.5, name = "k1"),
tfd.LogNormal(loc = jnp.log(1.), scale = 0.5, name = "k2"),
tfd.HalfNormal(scale = 1., name = "sigma"),
lambda sigma, k2, k1: (
tfd.LogNormal(loc = jnp.log(ode_map(k1, k2)),
scale = sigma[..., jnp.newaxis], name = "y"))
])
def target_log_prob_fn(k1, k2, sigma):
return model.log_prob((k1, k2, sigma, y))
num_dimensions = 3
def initialize (shape, key = random.PRNGKey(37272709)):
prior_location = jnp.log(jnp.array([1., 1., 1.]))
prior_scale = jnp.array([0.5, 0.5, 0.5])
return jnp.exp(prior_scale * random.normal(key, shape + (num_dimensions,)) + prior_location)
# initial_state = initialize((4, ), key = random.PRNGKey(1954))
initial_state = model.sample(sample_shape = (4, 1), seed = random.PRNGKey(1954))[:3]
x = jnp.array(initial_state).reshape(3, 4)
print(x[0, :])
# TODO: find a way to do this when the init is a list!!
# Check call to target_log_prob_fn works
# target = target_log_prob_fn(initial_state)
# print(target)
# Prior predictive checks
num_prior_samples = 1000
*prior_samples, prior_predictive = model.sample(1000, seed = random.PRNGKey(37272709))
figure(figsize = [6, 6])
plot(t[1:], y, 'o')
plot(t[1:], np.median(prior_predictive, axis = 0), color = 'yellow')
plot(t[1:], np.quantile(prior_predictive, q = 0.95, axis = 0), linestyle = ':', color = 'yellow')
plot(t[1:], np.quantile(prior_predictive, q = 0.05, axis = 0), linestyle = ':', color = 'yellow')
show()
# Implement ChEES transition kernel.
init_step_size = 1
warmup_length = 1000
kernel = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel, warmup_length)
kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel, warmup_length, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
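# The stack above reads inside-out: plain HMC proposals, wrapped by ChEES-style
# trajectory-length adaptation, wrapped by dual-averaging step-size adaptation;
# both adaptations only act during the first `warmup_length` iterations.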
def trace_fn(current_state, pkr):
return (
# proxy for divergent transitions
get_innermost(pkr, 'log_accept_ratio') < -1000
)
num_chains = 4
mcmc_states, diverged = tfp.mcmc.sample_chain(
num_results = 2000,
current_state = initial_state,
kernel = kernel,
trace_fn = trace_fn,
seed = random.PRNGKey(1954))
# remove warmup samples
for i in range(0, len(mcmc_states)):
mcmc_states[i] = mcmc_states[i][1000:]
# get draws for posterior predictive checks
*_, posterior_predictive = model.sample(value = mcmc_states,
seed = random.PRNGKey(37272709))
print("Divergent transition(s):", np.sum(diverged[1000:]))
parameter_names = model._flat_resolve_names()
az_states = az.from_dict(
prior = {k: v[tf.newaxis, ...] for k, v in zip(parameter_names, prior_samples)},
posterior={
k: np.swapaxes(v, 0, 1) for k, v in zip(parameter_names, mcmc_states)
},
)
print(az.summary(az_states).filter(items=["mean", "sd", "mcse_sd", "hdi_5%",
"hdi_95%", "ess_bulk", "ess_tail",
"r_hat"]))
axs = az.plot_trace(az_states, combined = False, compact = False)
# TODO: include potential divergent transitions.
az.plot_pair(az_states, figsize = (6, 6), kind = 'hexbin', divergences = True);
ppc_data = posterior_predictive.reshape((4000, 10))
figure(figsize = [6, 6])
plot(t[1:], y, 'o')
plot(t[1:], np.median(ppc_data, axis = 0), color = 'yellow')
plot(t[1:], np.quantile(ppc_data, q = 0.95, axis = 0), linestyle = ':', color = 'yellow')
plot(t[1:], np.quantile(ppc_data, q = 0.05, axis = 0), linestyle = ':', color = 'yellow')
show()
# Construct event times, and identify dosing times (all other times correspond
# to measurement events).
time_after_dose = np.array([0.083, 0.167, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, 8])
t = np.append(
np.append(np.append(np.append(0., time_after_dose),
np.append(12., time_after_dose + 12)),
np.linspace(start = 24, stop = 156, num = 12)),
np.append(jnp.append(168., 168. + time_after_dose),
np.array([180, 192])))
start_event = np.array([], dtype = int)
dosing_time = range(0, 192, 12)
# Use dosing events to determine times of integration between
# exterior interventions on the system.
eps = 1e-4 # hack to deal with some t being slightly offset.
for t_dose in dosing_time:
start_event = np.append(start_event, np.where(abs(t - t_dose) <= eps))
amt = jnp.array([1000., 0.])
n_dose = start_event.shape[0]
start_event = np.append(start_event, t.shape[0] - 1)
def ode_map (theta, dt, current_state):
k1 = theta[0]
k2 = theta[1]
y0_hat = jnp.exp(- k1 * dt) * current_state[0]
y1_hat = jnp.exp(- k2 * dt) / (k1 - k2) * (current_state[0] * k1 *\
(1 - jnp.exp((k2 - k1) * dt)) + (k1 - k2) * current_state[1])
return jnp.array([y0_hat, y1_hat])
ode_map(theta, np.array([1, 2, 3]), y0)[1, :]
def ode_map_event (theta):
'''
Wrapper around the ODE solver, based on the event schedule.
NOTE: if using the ode integrator, need to adjust the shape of mass.
'''
y_hat = jnp.array([])
current_state = amt
for i in range(0, n_dose):
t_integration = jax.lax.dynamic_slice(t, (start_event[i], ),
(start_event[i + 1] - start_event[i] + 1, ))
mass = ode_map(theta, t_integration - t_integration[0], current_state)
# mass = odeint(system, current_state, t_integration,
# theta, rtol = 1e-6, atol = 1e-6, mxstep = 1e3)
y_hat = jnp.append(y_hat, mass[1, 1:])
current_state = mass[:, -1] + amt
return y_hat
y_hat = ode_map_event(theta)
log_y_hat = jnp.log(y_hat[1:])
sigma = 0.5
# NOTE: no observation at time t = 0.
log_y = sigma * random.normal(random.PRNGKey(1954), (y_hat.shape[0],)) \
+ jnp.log(y_hat)
y_obs = jnp.exp(log_y)
figure(figsize = [6, 6])
plot(t[1:], y_hat)
plot(t[1:], y_obs, 'o', markersize = 2)
show()
t_jax = jnp.array(t)
amt_vec = np.repeat(0., t.shape[0])
amt_vec[start_event] = 1000
amt_vec[amt_vec.shape[0] - 1] = 0.
amt_vec_jax = jnp.array(amt_vec)
# Overwrite definition of ode_map_event.
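# jax.lax.scan threads the compartment state (the carry) through the event
# schedule: each step advances the analytic solution by dt, adds any dose given
# at that event time, and collects the central-compartment mass.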
def ode_map_event(theta):
def ode_map_step (current_state, event_index):
dt = t_jax[event_index] - t_jax[event_index - 1]
y_sln = ode_map(theta, dt, current_state)
return (y_sln + jnp.array([amt_vec_jax[event_index], 0.])), y_sln[1,]
(__, yhat) = jax.lax.scan(ode_map_step, amt, np.array(range(1, t.shape[0])))
return yhat
y_hat = ode_map_event(theta)
figure(figsize = [6, 6])
plot(t[1:], y_hat)
plot(t[1:], y_obs, 'o', markersize = 2)
show()
# Remark: using more informative priors helps insure the chains mix
# reasonably well. (Could be interesting to examine with nested-rhat
# the case where they don't).
model = tfd.JointDistributionSequentialAutoBatched([
# Priors
tfd.LogNormal(loc = jnp.log(1.), scale = 0.5, name = "k1"),
tfd.LogNormal(loc = jnp.log(.5), scale = 0.25, name = "k2"),
tfd.HalfNormal(scale = 1., name = "sigma"),
lambda sigma, k2, k1: (
tfd.LogNormal(loc = jnp.log(ode_map_event(jnp.array([k1, k2]))),
scale = sigma[..., jnp.newaxis], name = "y_obs"))
])
def target_log_prob_fn(k1, k2, sigma):
return model.log_prob((k1, k2, sigma, y_obs))
initial_state = model.sample(sample_shape = (4, 1), seed = random.PRNGKey(1954))[:3]
# TODO: find a way to test target_log_prob_fn with init as a list
# print(initial_state)
# target_log_prob_fn(initial_state)
# Implement ChEES transition kernel.
init_step_size = 0.1
warmup_length = 1000
kernel = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel, warmup_length)
kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel, warmup_length, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
def trace_fn(current_state, pkr):
return (
# proxy for divergent transitions
get_innermost(pkr, 'log_accept_ratio') < -1000,
get_innermost(pkr, 'step_size'),
get_innermost(pkr, 'max_trajectory_length')
)
num_chains = 4
mcmc_states, diverged = tfp.mcmc.sample_chain(
num_results = 2000,
current_state = initial_state,
kernel = kernel,
trace_fn = trace_fn,
seed = random.PRNGKey(1954))
semilogy(diverged[1], label = "step size")
semilogy(diverged[2], label = "max_trajectory length")
legend(loc = "best")
show()
# remove warmup samples
for i in range(0, len(mcmc_states)):
mcmc_states[i] = mcmc_states[i][1000:]
print("Divergent transition(s):", np.sum(diverged[1000:]))
parameter_names = model._flat_resolve_names()
az_states = az.from_dict(
prior = {k: v[tf.newaxis, ...] for k, v in zip(parameter_names, prior_samples)},
posterior={
k: np.swapaxes(v, 0, 1) for k, v in zip(parameter_names, mcmc_states)
},
)
print(az.summary(az_states).filter(items=["mean", "sd", "mcse_sd", "hdi_3%",
"hdi_97%", "ess_bulk", "ess_tail",
"r_hat"]))
# get draws for posterior predictive checks
*_, posterior_predictive = model.sample(value = mcmc_states,
seed = random.PRNGKey(37272709))
# ppc_data = posterior_predictive.reshape(1000, 4, 52)
# az_data = az.from_dict(
# posterior = dict(x = ppc_data.transpose((1, 0, 2)))
# )
# print(az.summary(az_data).filter(items=["mean", "hdi_3%",
# "hdi_97%", "ess_bulk", "ess_tail",
# "r_hat"]))
# REMARK: hmmm... the ppc's look odd. Not sure why. Everything else looks fine.
figure(figsize = [6, 6])
semilogy(t[1:], y_obs, 'o')
semilogy(t[1:], np.median(posterior_predictive, axis = (0, 1, 2)), color = 'yellow')
semilogy(t[1:], np.quantile(posterior_predictive, q = 0.95, axis = (0, 1, 2)), linestyle = ':', color = 'yellow')
semilogy(t[1:], np.quantile(posterior_predictive, q = 0.05, axis = (0, 1, 2)), linestyle = ':', color = 'yellow')
show()
# (Code from previous cells, rewritten here to make
# section 1.3 self-contained).
# TODO: replace this with a function.
time_after_dose = np.array([0.083, 0.167, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, 8])
t = np.append(
np.append(np.append(np.append(0., time_after_dose),
np.append(12., time_after_dose + 12)),
np.linspace(start = 24, stop = 156, num = 12)),
np.append(jnp.append(168., 168. + time_after_dose),
np.array([180, 192])))
start_event = np.array([], dtype = int)
dosing_time = range(0, 192, 12)
# Use dosing events to determine times of integration between
# exterior interventions on the system.
eps = 1e-4 # hack to deal with some t being slightly offset.
for t_dose in dosing_time:
start_event = np.append(start_event, np.where(abs(t - t_dose) <= eps))
amt = jnp.array([1000., 0.])
n_dose = start_event.shape[0]
start_event = np.append(start_event, t.shape[0] - 1)
# NOTE: need to run the first cell under Section 1.2
# (Clinical event schedule)
n_patients = 100
pop_location = jnp.log(jnp.array([1.5, 0.25]))
# pop_location = jnp.log(jnp.array([0.5, 1.0]))
pop_scale = jnp.array([0.15, 0.35])
theta_patient = jnp.exp(pop_scale * random.normal(random.PRNGKey(37272709),
(n_patients, ) + (2,)) + pop_location)
amt = np.array([1000., 0.])
amt_patient = np.append(np.repeat(amt[0], n_patients),
np.repeat(amt[1], n_patients))
amt_patient = amt_patient.reshape(2, n_patients)
# redefine variables from previous section (in case we only run the population model)
t_jax = jnp.array(t)
amt_vec = np.repeat(0., t.shape[0])
amt_vec[start_event] = 1000
amt_vec[amt_vec.shape[0] - 1] = 0.
amt_vec_jax = jnp.array(amt_vec)
# Rewrite ode_map_event for population case.
# TODO: remove 'use_second_axis' hack.
def ode_map (theta, dt, current_state, use_second_axis = False):
if (use_second_axis):
k1 = theta[0, :]
k2 = theta[1, :]
else:
k1 = theta[:, 0]
k2 = theta[:, 1]
y0_hat = jnp.exp(- k1 * dt) * current_state[0, :]
y1_hat = jnp.exp(- k2 * dt) / (k1 - k2) * (current_state[0, :] * k1 *\
(1 - jnp.exp((k2 - k1) * dt)) + (k1 - k2) * current_state[1, :])
return jnp.array([y0_hat, y1_hat])
# @jax.jit # Cannot use jit if function has an IF statement.
def ode_map_event(theta, use_second_axis = False):
def ode_map_step (current_state, event_index):
dt = t_jax[event_index] - t_jax[event_index - 1]
y_sln = ode_map(theta, dt, current_state, use_second_axis)
dose = jnp.repeat(amt_vec_jax[event_index], n_patients)
y_after_dose = y_sln + jnp.append(jnp.repeat(amt_vec_jax[event_index], n_patients),
jnp.repeat(0., n_patients)).reshape(2, n_patients)
return (y_after_dose, y_sln[1, ])
(__, yhat) = jax.lax.scan(ode_map_step, amt_patient,
np.array(range(1, t.shape[0])),
unroll = 20)
return yhat
# Simulate some data
y_hat = ode_map_event(theta_patient)
sigma = 0.1
# NOTE: no observation at time t = 0.
log_y = sigma * random.normal(random.PRNGKey(1954), y_hat.shape) \
+ jnp.log(y_hat)
y_obs = jnp.exp(log_y)
figure(figsize = [6, 6])
plot(t[1:], y_hat)
plot(t[1:], y_obs, 'o', markersize = 2)
show()
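# The hierarchical model below uses a non-centered parameterization:
# eta_k ~ Normal(0, 1) and k = exp(log_k_pop + exp(log_scale_k) * eta_k),
# which usually gives better sampling geometry than sampling k directly.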
pop_model = tfd.JointDistributionSequentialAutoBatched([
# tfd.LogNormal(loc = jnp.log(1.), scale = 0.25, name = "k1_pop"),
# tfd.LogNormal(loc = jnp.log(0.3), scale = 0.1, name = "k2_pop"),
# tfd.Normal(loc = jnp.log(1.), scale = 0.25, name = "log_k1_pop"),
tfd.Normal(loc = jnp.log(1.), scale = 0.1, name = "log_k1_pop"),
tfd.Normal(loc = jnp.log(0.3), scale = 0.1, name = "log_k2_pop"),
tfd.Normal(loc = jnp.log(0.15), scale = 0.1, name = "log_scale_k1"),
tfd.Normal(loc = jnp.log(0.35), scale = 0.1, name = "log_scale_k2"),
# tfd.HalfNormal(scale = 1., name = "sigma"),
tfd.Normal(loc = -1., scale = 1., name = "log_sigma"),
# non-centered parameterization for hierarchy
tfd.Independent(tfd.Normal(loc = jnp.zeros(n_patients),
scale = jnp.ones(n_patients),
name = "eta_k1"),
reinterpreted_batch_ndims = 1),
tfd.Independent(tfd.Normal(loc = jnp.zeros(n_patients),
scale = jnp.ones(n_patients),
name = "eta_k2"),
reinterpreted_batch_ndims = 1),
lambda eta_k2, eta_k1, log_sigma, log_scale_k2, log_scale_k1,
log_k2_pop, log_k1_pop: (
tfd.Independent(tfd.LogNormal(
loc = jnp.log(
ode_map_event(theta = jnp.array([
jnp.exp(log_k1_pop[..., jnp.newaxis] + eta_k1 * jnp.exp(log_scale_k1[..., jnp.newaxis])),
jnp.exp(log_k2_pop[..., jnp.newaxis] + eta_k2 * jnp.exp(log_scale_k2[..., jnp.newaxis]))]),
use_second_axis = True)),
scale = jnp.exp(log_sigma[..., jnp.newaxis]), name = "y_obs")))
# lambda eta_k2, eta_k1, sigma, log_scale_k2, log_scale_k1,
# k2_pop, k1_pop: (
# tfd.Independent(tfd.LogNormal(
# loc = jnp.log(
# ode_map_event(theta = jnp.array(
# [jnp.exp(jnp.log(k1_pop[..., jnp.newaxis]) + eta_k1 * jnp.exp(log_scale_k1[..., jnp.newaxis])),
# jnp.exp(jnp.log(k2_pop[..., jnp.newaxis]) + eta_k2 * jnp.exp(log_scale_k2[..., jnp.newaxis]))]),
# use_second_axis = True)),
# scale = sigma[..., jnp.newaxis], name = "y_obs")))
])
def pop_target_log_prob_fn(log_k1_pop, log_k2_pop, log_scale_k1, log_scale_k2,
log_sigma, eta_k1, eta_k2):
return pop_model.log_prob((log_k1_pop, log_k2_pop, log_scale_k1, log_scale_k2,
log_sigma, eta_k1, eta_k2, y_obs))
# CHECK -- do we need to parenthesis?
# def pop_target_log_prob_fn(k1_pop, k2_pop, log_scale_k1, log_scale_k2,
# sigma, eta_k1, eta_k2):
# return pop_model.log_prob((k1_pop, k2_pop, log_scale_k1, log_scale_k2,
# sigma, eta_k1, eta_k2, y_obs))
def pop_target_log_prob_fn_flat(x):
k1_pop = x[:, 0]
k2_pop = x[:, 1]
sigma = x[:, 2]
log_scale_k1 = x[:, 3]
log_scale_k2 = x[:, 4]
eta_k1 = x[:, 5:(5 + n_patients)]
eta_k2 = x[:, (5 + n_patients):(5 + 2 * n_patients)]
return pop_model.log_prob((k1_pop, k2_pop, log_scale_k1, log_scale_k2,
sigma, eta_k1, eta_k2, y_obs))
# Sample initial states from prior
num_chains = 128
num_super_chains = 4 # num_chains # 128
n_parm = 5 + 2 * n_patients
initial_state_raw = pop_model.sample(sample_shape = (num_super_chains, 1),\
seed = random.PRNGKey(37272710))[:7]
# QUESTION: does this assignment create a pointer?
initial_state = initial_state_raw
for i in range(0, len(initial_state_raw)):
initial_state[i] = np.repeat(initial_state_raw[i],
num_chains // num_super_chains, axis = 0)
# Implement ChEES transition kernel. Increase the target acceptance rate
# to avoid divergent transitions.
# NOTE: increasing the target acceptance probability can lead to poor performance.
init_step_size = 0.001 # CHECK -- how to best tune this?
warmup_length = 1000 # 1000
kernel = tfp.mcmc.HamiltonianMonteCarlo(pop_target_log_prob_fn,
step_size = init_step_size,
num_leapfrog_steps = 10)
kernel = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel, warmup_length)
kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel, warmup_length, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
def trace_fn(current_state, pkr):
return (
# proxy for divergent transitions
get_innermost(pkr, 'log_accept_ratio') < -1000,
get_innermost(pkr, 'step_size'),
get_innermost(pkr, 'max_trajectory_length')
)
mcmc_states, diverged = tfp.mcmc.sample_chain(
num_results = warmup_length + 1000,
current_state = initial_state,
kernel = kernel,
trace_fn = trace_fn,
seed = random.PRNGKey(1954))
# Remark: somehow modifying mcmc_states still modifies
# mcmc_states_raw.
mcmc_states_raw = mcmc_states
# remove warmup samples
# NOTE: not a good idea. It's better to store all the states.
if False:
for i in range(0, len(mcmc_states)):
mcmc_states[i] = mcmc_states_raw[i][warmup_length:]
semilogy(diverged[1], label = "step size")
semilogy(diverged[2], label = "max_trajectory length")
legend(loc = "best")
show()
mcmc_states[0].shape
# Use this to search for points where the step size changes
# dramatically and for divergences that might be happening there.
if False:
index_l = 219 # 225
index_u = index_l + 1 # 235
print("Max L:" , diverged[2][index_l:index_u])
print("Divergence:", np.sum(diverged[0][index_l:index_u]),
"at", np.where(diverged[0][index_l:index_u] == 1))
chain = 0
eta1_state = mcmc_states[5][index_l, chain, :] *\
mcmc_states[2][index_l, chain, 0] + mcmc_states[0][index_l, chain, 0]
eta2_state = mcmc_states[6][index_l, chain, :] *\
mcmc_states[3][index_l, chain, 0] + mcmc_states[1][index_l, chain, 0]
k0_state = np.exp(eta1_state)
k1_state = np.exp(eta2_state)
print(k0_state - k1_state)
print("Divergent transition(s):", np.sum(diverged[0][warmup_length:]))
# NOTE: the last parameter is an 'x': not sure where this comes from...
parameter_names = pop_model._flat_resolve_names()[:-1]
az_states = az.from_dict(
#prior = {k: v[tf.newaxis, ...] for k, v in zip(parameter_names, prior_samples)},
posterior={
k: np.swapaxes(v, 0, 1) for k, v in zip(parameter_names, mcmc_states)
},
)
print(az.summary(az_states).filter(items=["mean", "sd", "mcse_sd", "hdi_3%",
"hdi_97%", "ess_bulk", "ess_tail",
"r_hat"]))
# Only plot the population parameters.
axs = az.plot_trace(az_states, combined = False, compact = False,
var_names = parameter_names[:5])
# posterior predictive checks
# NOTE: for 100 patients, running this exhausts memory
*_, posterior_predictive = pop_model.sample(value = mcmc_states,
seed = random.PRNGKey(37272709))
ppc_data = posterior_predictive.reshape(1000 * num_chains, 52, n_patients)
# NOTE: unclear why the confidence interval is so small...
fig, axes = subplots(n_patients, 1, figsize=(8, 4 * n_patients))
for i in range(0, n_patients):
patient_ppc = posterior_predictive[:, :, :, :, i]
axes[i].semilogy(t[1:], y_obs[:, i], 'o')
axes[i].semilogy(t[1:], np.median(patient_ppc, axis = (0, 1, 2)), color = 'yellow')
axes[i].semilogy(t[1:], np.quantile(patient_ppc, q = 0.95, axis = (0, 1, 2)), linestyle = ':', color = 'yellow')
axes[i].semilogy(t[1:], np.quantile(patient_ppc, q = 0.05, axis = (0, 1, 2)), linestyle = ':', color = 'yellow')
show()
# Assumes mcmc_states contains all the samples (including warmup)
parameter_index = 0
num_samples = 500
mc_mean = np.mean(mcmc_states[parameter_index][
warmup_length:(warmup_length + num_samples), :, :])
print("Mean:", mc_mean)
print("Estimated squared error:",
np.square(mc_mean -
np.mean(mcmc_states[parameter_index][warmup_length:, :, :])))
print("Upper bound on expected squared error for one iteration:",
np.var(mcmc_states[0]) / num_chains)
nRhat = nested_rhat(result_state = mcmc_states,
num_super_chains = num_super_chains,
index_param = parameter_index,
num_samples = num_samples,
warmup_length = warmup_length)
print("num_samples:", num_samples)
print("nRhat:", nRhat)
print("Rhat:",
tfp.mcmc.potential_scale_reduction(
mcmc_states[0][warmup_length:(num_samples + warmup_length), :, :]))
t = np.array([0.0, 0.5, 0.75, 1, 1.25, 1.5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
y0 = np.array([100.0, 0.0])
theta = np.array([0.5, 27, 10, 14])
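# theta = [ka, V, Vm, Km]: first-order absorption rate, central volume, and the
# Michaelis-Menten elimination parameters (elimination saturates at high concentration).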
def system(state, time, theta):
ka = theta[0]
V = theta[1]
Vm = theta[2]
Km = theta[3]
C = state[1] / V
return jnp.array([
- ka * state[0],
ka * state[0] - Vm * C / (Km + C)
])
states = odeint(system, y0, t, theta, mxstep = 1000)
sigma = 0.5
log_y = sigma * random.normal(random.PRNGKey(37272709), (states.shape[0] - 1,)) \
+ jnp.log(states[1:, 1])
y = jnp.exp(log_y)
figure(figsize = [6, 6])
plot(t[1:], states[1:, 1])
plot(t[1:], y, 'o');
def ode_map(ka, V, Vm, Km):
theta = jnp.array([ka, V, Vm, Km])
return odeint(system, y0, t, theta, mxstep = 1e3)[1:, 1]
model = tfd.JointDistributionSequentialAutoBatched([
# Priors
tfd.LogNormal(loc = jnp.log(1), scale = 0.5, name = "ka"),
tfd.LogNormal(loc = jnp.log(35), scale = 0.5, name = "V"),
tfd.LogNormal(loc = jnp.log(10), scale = 0.5, name = "Vm"),
tfd.LogNormal(loc = jnp.log(2.5), scale = 1, name = "Km"),
tfd.HalfNormal(scale = 1., name = "sigma"),
# Likelihood (TODO: divide location by volume to get concentration)
lambda sigma, Km, Vm, V, ka: (
tfd.LogNormal(loc = jnp.log(ode_map(ka, V, Vm, Km) / V),
scale = sigma[..., jnp.newaxis], name = "y"))
])
def target_log_prob_fn(x):
ka = x[:, 0]
V = x[:, 1]
Vm = x[:, 2]
Km = x[:, 3]
sigma = x[:, 4]
return model.log_prob((ka, V, Vm, Km, sigma, y))
num_dimensions = 5
def initialize (shape, key = random.PRNGKey(37272709)):
prior_location = jnp.log(jnp.array([1.5, 35, 10, 2.5, 0.5]))
prior_scale = jnp.array([3, 0.5, 0.5, 3, 1.])
return jnp.exp(prior_scale * random.normal(key, shape + (num_dimensions,)) + prior_location)
initial_state = initialize((4, ), key = random.PRNGKey(1954))
# Test target probability density can be computed
target = target_log_prob_fn(initial_state)
print(target)
# Implement ChEES transition kernel.
init_step_size = 1
warmup_length = 250
kernel = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel, warmup_length)
kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel, warmup_length, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
num_chains = 4
# NOTE: It takes 29 seconds to run one iteration. So running 500 iterations
# would take ~4 hours :(
# QUESTION: why does JAX struggle so much to solve this type of problem??
result = tfp.mcmc.sample_chain(
num_results = 1,
current_state = initial_state,
kernel = kernel,
seed = random.PRNGKey(1954))
R = 1.62
1 / (R * R - 1)
a = np.array(range(4, 1024, 4))
d = np.repeat(6., len(a))
# Two optimization solutions, solving quadratic equations (+ / -)
# Remark: + solution gives a negative upper-bound for delta_u
alpha_1 = 2 * a + d / 2 - np.sqrt(np.square(2 * a + d / 2) - 2 * a)
alpha_2 = a - alpha_1
delta_u = (np.square(alpha_1 + d / 2) / (alpha_1 * alpha_2)) / 2
eps = 0.01
delta = np.square(1 + eps) - 1
print(delta)
semilogy(a / d, delta_u)
hlines(delta, (a / d)[0], (a / d)[len(a) - 1], linestyles = '--',
label = "delta for 1.01 threshold")
xlabel("a / d")
semilogy(a / d, alpha_1 / a, label = "alpha_1")
semilogy(a / d, alpha_2 / a, label = "alpha_2")
legend(loc = 'best')
xlabel("a / d")
ylabel("alpha")
index_location = np.where(a / d == 100)
print(index_location)
print(delta_u[index_location])
delta
pop_model = tfd.JointDistributionSequentialAutoBatched([
tfd.LogNormal(loc = jnp.log(1.), scale = 0.5, name = "k1_pop"),
tfd.LogNormal(loc = jnp.log(.5), scale = 0.25, name = "k2_pop"),
tfd.Normal(loc = jnp.log(0.5), scale = 1., name = "log_scale_k1"),
tfd.Normal(loc = jnp.log(0.5), scale = 1., name = "log_scale_k2"),
tfd.HalfNormal(scale = 1., name = "sigma"),
# non-centered parameterization for hierarchy
tfd.Independent(tfd.Normal(loc = jnp.zeros(n_patients),
scale = jnp.ones(n_patients),
name = "eta_k1"),
reinterpreted_batch_ndims = 1),
tfd.Independent(tfd.Normal(loc = jnp.zeros(n_patients),
scale = jnp.ones(n_patients),
name = "eta_k2"),
reinterpreted_batch_ndims = 1),
lambda eta_k2, eta_k1, sigma, log_scale_k2, log_scale_k1,
k2_pop, k1_pop: (
tfd.Independent(tfd.LogNormal(
loc = jnp.log(
ode_map_event(theta = jnp.array(
[jnp.exp(jnp.log(k1_pop[..., jnp.newaxis]) + eta_k1 * jnp.exp(log_scale_k1[..., jnp.newaxis])),
jnp.exp(jnp.log(k2_pop[..., jnp.newaxis]) + eta_k2 * jnp.exp(log_scale_k2[..., jnp.newaxis]))]),
use_second_axis = True)),
scale = sigma[..., jnp.newaxis], name = "y_obs")))
])
num_hyper = 5
num_dimensions = num_hyper + 2 * n_patients
def pop_initialize(shape, key = random.PRNGKey(37272710)) :
# init for k1_pop, k2_pop, and sigma
hyper_prior_location = jnp.array([jnp.log(1.5), jnp.log(0.25), 0.])
hyper_prior_scale = jnp.array([0.5, 0.1, 0.5])
init_hyper_param = jnp.exp(hyper_prior_scale * random.normal(key, shape + \
(3, )) + hyper_prior_location)
# init for log_scale_k1 and log_scale_k2
scale_prior_location = jnp.array([-1., -1.])
scale_prior_scale = jnp.array([0.25, 0.25])
init_scale = scale_prior_scale * random.normal(key, shape + (2, )) +\
scale_prior_location
# inits for the etas
init_eta = random.normal(key, shape + (2 * n_patients, ))
return jnp.append(jnp.append(init_hyper_param, init_scale, axis = 1),
init_eta, axis = 1)
initial_state = pop_initialize((4, ))
initial_list = [initial_state[:, 0], # k1_pop
initial_state[:, 1], # k2_pop
initial_state[:, 2], # log_scale_k1
initial_state[:, 3], # log_scale_k2
initial_state[:, 4], # sigma
initial_state[:, 5:(5 + n_patients)], # eta_k1
initial_state[:, (5 + n_patients):(5 + 2 * n_patients)] # eta_k2
]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook demonstrates how to fit a pharmacokinetic model with TensorFlow Probability. This includes defining the relevant joint distribution and working through the basic steps of a Bayesian workflow, e.g. prior and posterior predictive checks, diagnostics for the inference, etc.
Step2: 1 One compartment model with absorption from the gut
Step3: 1.1 Model for one patient receiving a single dose
Step4: 1.1.1 Run model with TFP
Step5: 1.1.2 Analyze results
Step6: To convert TFP's output to something compatible with Arviz, we'll follow the example in https
Step7: 1.2 Clinical event schedule
Step8: We now wrap our ODE solver (whether it be via an analytical solution or a numerical integrator) inside an event schedule handler. For starters, we'll go through the events using a for loop. This, it turns out, is fairly inefficient, and we'll later revise this code using jax.lax.scan.
Step9: The code above works fine to simulate data but we can do better using jax.lax.scan.
Step10: We'll only look at some essential diagnostics. For more, we can follow the code in the single-dose example.
Step11: 1.3 Population models
Step12: We rewrite the ode_map, so that, rather than returning the mass for one patient, it returns the mass across multiple patients. The function ode_map now takes in the physiological parameters for all patients, as well as the initial states for each patient.
Step13: 1.3.2 Fit the model with TFP
Step14: If we want to run many chains in parallel and use $n\hat R$ (nested $\hat R$), we need to specify the number of chains and the number of super chains.
Step15: Some care is required when setting the tuning parameters for ChEES-HMC, in particular the initial step size. In the ChEES-HMC paper, the following procedure is used
Step16: 1.3.3 Traditional diagnostics
Step17: 1.3.3 Diagnostic using $n \hat R$.
Step18: 2 Michaelis-Menten pharmacokinetics (Incomplete)
Step19: Draft Code
|
14,659 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
number_to_words(554)
def number_to_words(n):
"""Given a number n between 1-1000 inclusive return a list of words for the number."""
N=str(n)
x=list(N)
if len(x)==4:
return'one thousand'
if len(x)==3:
if x[0]=='1':
hundred_digit='one hundred'
elif x[0]=='2':
hundred_digit='two hundred'
elif x[0]=='3':
hundred_digit='three hundred'
elif x[0]=='4':
hundred_digit='four hundred'
elif x[0]=='5':
hundred_digit='five hundred'
elif x[0]=='6':
hundred_digit='six hundred'
elif x[0]=='7':
hundred_digit='seven hundred'
elif x[0]=='8':
hundred_digit='eight hundred'
elif x[0]=='9':
hundred_digit='nine hundred'
if x[1]=='0':
tens_digit=' '
elif x[1]=='1':
if x[2]=='0':
return hundred_digit+' ten'
elif x[2]=='1':
return hundred_digit+' eleven'
elif x[2]=='2':
return hundred_digit+' twelve'
elif x[2]=='3':
return hundred_digit+' thirteen'
elif x[2]=='4':
return hundred_digit+' fourteen'
elif x[2]=='5':
return hundred_digit+' fifteen'
elif x[2]=='6':
return hundred_digit+' sixteen'
elif x[2]=='7':
return hundred_digit+' seventeen'
elif x[2]=='8':
return hundred_digit+' eighteen'
elif x[2]=='9':
return hundred_digit+' nineteen'
elif x[1]=='2':
tens_digit=' twenty-'
elif x[1]=='3':
tens_digit=' thirty-'
elif x[1]=='4':
tens_digit=' fourty-'
elif x[1]=='5':
tens_digit=' fifty-'
elif x[1]=='6':
tens_digit=' sixty-'
elif x[1]=='7':
tens_digit=' seventy-'
elif x[1]=='8':
tens_digit=' eighty-'
elif x[1]=='9':
tens_digit=' ninety-'
if x[2]=='0':
return hundred_digit+tens_digit
elif x[2]=='1':
return hundred_digit+' and'+tens_digit+'one'
elif x[2]=='2':
return hundred_digit+' and'+tens_digit+'two'
elif x[2]=='3':
return hundred_digit+' and'+tens_digit+'three'
elif x[2]=='4':
return hundred_digit+' and'+tens_digit+'four'
elif x[2]=='5':
return hundred_digit+' and'+tens_digit+'five'
elif x[2]=='6':
return hundred_digit+' and'+tens_digit+'six'
elif x[2]=='7':
return hundred_digit+' and'+tens_digit+'seven'
elif x[2]=='8':
return hundred_digit+' and'+tens_digit+'eight'
elif x[2]=='9':
return hundred_digit+' and'+tens_digit+'nine'
if len(x)==2:
if x[0]=='1':
if x[1]=='0':
return 'ten'
elif x[1]=='1':
return 'eleven'
elif x[1]=='2':
return 'twelve'
elif x[1]=='3':
return 'thirteen'
elif x[1]=='4':
return 'fourteen'
elif x[1]=='5':
return 'fifteen'
elif x[1]=='6':
return 'sixteen'
elif x[1]=='7':
return 'seventeen'
elif x[1]=='8':
return 'eighteen'
elif x[1]=='9':
return 'nineteen'
elif x[0]=='2':
tens_digit1='twenty-'
elif x[0]=='3':
tens_digit1='thirty-'
elif x[0]=='4':
tens_digit1='fourty-'
elif x[0]=='5':
tens_digit1='fifty-'
elif x[0]=='6':
tens_digit1='sixty-'
elif x[0]=='7':
tens_digit1='seventy-'
elif x[0]=='8':
tens_digit1='eighty-'
elif x[0]=='9':
tens_digit1='ninety-'
if x[1]=='0':
return tens_digit1
elif x[1]=='1':
return tens_digit1+'one'
elif x[1]=='2':
return tens_digit1+'two'
elif x[1]=='3':
return tens_digit1+'three'
elif x[1]=='4':
return tens_digit1+'four'
elif x[1]=='5':
return tens_digit1+'five'
elif x[1]=='6':
return tens_digit1+'six'
elif x[1]=='7':
return tens_digit1+'seven'
elif x[1]=='8':
return tens_digit1+'eight'
elif x[1]=='9':
return tens_digit1+'nine'
if len(x)==1:
if x[0]=='1':
return 'one'
elif x[0]=='2':
return 'two'
elif x[0]=='3':
return 'three'
elif x[0]=='4':
return 'four'
elif x[0]=='5':
return 'five'
elif x[0]=='6':
return 'six'
elif x[0]=='7':
return 'seven'
elif x[0]=='8':
return 'eight'
elif x[0]=='9':
return 'nine'
assert number_to_words(3)=='three'
assert number_to_words(2)+number_to_words(4)=='twofour'
assert number_to_words(978)=='nine hundred and seventy-eight'
assert True # use this for grading the number_to_words tests.
def filter_fn(x):
if x=='-' or x==' ':
return False
else:
return True
def count_letters(n):
"""Count the number of letters used to write out the words for 1-n inclusive."""
total_letters=0
while n>=1:
x=number_to_words(n)
y=x.replace(' ','')
z=y.replace('-','')
total_letters=total_letters+len(z)
n=n-1
return total_letters
assert count_letters(1)==3
assert count_letters(5)==19
assert count_letters(1000)==20738
assert True # use this for grading the count_letters tests.
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for gradig the answer to the original question.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally, use your count_letters function to solve the original question.
|
14,660 | <ASSISTANT_TASK:>
Python Code:
import sys
print(sys.version)
# python2 has list comprehensions
[x ** 2 for x in range(5)]
# python3 has dict comprehensions!
{str(x): x ** 2 for x in range(5)}
# and set comprehensions
{x ** 2 for x in range(5)}
# magic dictionary concatenation
some_kwargs = {'do': 'this',
'not': 'that'}
other_kwargs = {'use': 'something',
'when': 'sometime'}
{**some_kwargs, **other_kwargs}
# unpacking magic
a, *stuff, b = range(5)
print(a)
print(stuff)
print(b)
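# new-style string formatting (a small added example; f-strings need python 3.6+)
value = 3
print(f"value squared is {value ** 2}")
print("value squared is {}".format(value ** 2))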
# native support for unicode
s = 'Το Ζεν του Πύθωνα'
print(s)
# unicode variable names!
import numpy as np
π = np.pi
np.cos(2 * π)
# infix matrix multiplication
A = np.random.choice(list(range(-9, 10)), size=(3, 3))
B = np.random.choice(list(range(-9, 10)), size=(3, 3))
print("A = \n", A)
print("B = \n", B)
print("A B = \n", A @ B)
print("A B = \n", np.dot(A, B))
s = 'asdf'
b = s.encode('utf-8')
b
b.decode('utf-8')
# this will be problematic if other encodings are used...
s = 'asdf'
b = s.encode('utf-32')
b
b.decode('utf-8')
# shouldn't change anything in python3
from __future__ import print_function, division
print('non-truncated division in a print function: 2/3 =', 2/3)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A (non-exhaustive) list of differences between Python 2 and Python 3
Step2: New string formatting
Step3: Writing code for both Python 2 and Python 3
|
14,661 | <ASSISTANT_TASK:>
Python Code:
import fbu
myfbu = fbu.PyFBU()
myfbu.data = [100,150]
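# response matrix: one row per truth bin (see the inline comments below); each
# entry gives the migration probability from that truth bin into the
# corresponding data bin.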
myfbu.response = [[0.08,0.02], #first truth bin
[0.02,0.08]] #second truth bin
myfbu.lower = [0,0]
myfbu.upper = [3000,3000]
myfbu.run()
trace = myfbu.trace
print( trace )
%matplotlib inline
from matplotlib import pyplot as plt
plt.hist(trace[1],
bins=20,alpha=0.85,
normed=True)
plt.ylabel('probability')
myfbu.background = {'bckg1':[20,30],'bckg2':[10,10]}
myfbu.backgroundsyst = {'bckg1':0.5,'bckg2':0.04} #50% normalization uncertainty for bckg1 and 4% normalization uncertainty for bckg2
myfbu.objsyst = {
'signal':{'syst1':[0.,0.03],'syst2':[0.,0.01]},
'background':{
'syst1':{'bckg1':[0.,0.],'bckg2':[0.1,0.1]},
'syst2':{'bckg1':[0.,0.01],'bckg2':[0.,0.]}
}
}
myfbu.run() #rerun sampling with backgrounds and systematics
unfolded_bin1 = myfbu.trace[1]
bckg1 = myfbu.nuisancestrace['bckg1']
plt.hexbin(bckg1,unfolded_bin1,cmap=plt.cm.YlOrRd)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Supply the input distribution to be unfolded as a 1-dimensional list for N bins, with each entry corresponding to the bin content.
Step2: Supply the response matrix where each row corresponds to a truth level bin.
Step3: Define the boundaries of the hyperbox to be sampled for each bin.
Step4: Run the MCMC sampling (this step might take up to several minutes for a large number of bins).
Step5: Retrieve the N-dimensional posterior distribution in the form of a list of N arrays.
Step6: Each array corresponds to the projection of the posterior distribution for a given bin.
Step7: Background
Step8: The background normalization is sampled from a gaussian with the given uncertainty. To fix the background normalization the uncertainty should be set to 0.
Step9: Each systematic uncertainty is treated as fully correlated across the signal and the various backgrounds.
|
14,662 | <ASSISTANT_TASK:>
Python Code:
# use the %ls magic to list the files in the current directory.
%ls
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sms
%matplotlib inline
three11s = pd.read_csv("data/pgh-311.csv", parse_dates=['CREATED_ON'])
three11s.dtypes
three11s.head()
three11s.loc[0]
# Plot the number of 311 requests per month
month_counts = three11s.groupby(three11s.CREATED_ON.dt.month)
y = month_counts.size()
x = month_counts.CREATED_ON.first()
axes = pd.Series(y.values, index=x).plot(figsize=(15,5))
plt.ylim(0)
plt.xlabel('Month')
plt.ylabel('Complaint')
grouped_by_type = three11s.groupby(three11s.REQUEST_TYPE)
size = grouped_by_type.size()
size
#len(size)
#size[size > 200]
codebook = pd.read_csv('data/codebook.csv')
codebook.head()
merged_data = pd.merge(three11s,
codebook[['Category', 'Issue']],
how='left',
left_on="REQUEST_TYPE",
right_on="Issue")
merged_data.head()
grouped_by_type = merged_data.groupby(merged_data.Category)
size = grouped_by_type.size()
size
size.plot(kind='barh', figsize=(8,6))
merged_data.groupby(merged_data.NEIGHBORHOOD).size().sort_values(inplace=False,
ascending=False)
merged_data.groupby(merged_data.NEIGHBORHOOD).size().sort_values(inplace=False,
ascending=True).plot(kind="barh", figsize=(5,20))
# create a function that generates a chart of requests per neighborhood
def issues_by_neighborhood(neighborhood):
"""Generates a plot of issue categories by neighborhood"""
grouped_by_type = merged_data[merged_data['NEIGHBORHOOD'] == neighborhood].groupby(merged_data.Category)
size = grouped_by_type.size()
size.plot(kind='barh', figsize=(8,6))
issues_by_neighborhood('Greenfield')
issues_by_neighborhood('Brookline')
issues_by_neighborhood('Garfield')
from ipywidgets import interact
@interact(hood=sorted(list(pd.Series(three11s.NEIGHBORHOOD.unique()).dropna())))
def issues_by_neighborhood(hood):
"""Generates a plot of issue categories by neighborhood"""
grouped_by_type = merged_data[merged_data['NEIGHBORHOOD'] == hood].groupby(merged_data.Category)
size = grouped_by_type.size()
size.plot(kind='barh',figsize=(8,6))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Embedded Plots
Step2: Exploring Request types
Step3: There are too many request types (268). We need some higher level categories to make this more comprehensible. Fortunately, there is an Issue and Category codebook that we can use to map between low and higher level categories.
Step4: That is a more manageable list of categories for data visualization. Let's take a look at the distribution of requests per category in the dataset.
Step5: Looking at requests at the neighborhood level
Step6: In GRAPH form
Step9: So we can see from the graph above that Brookline, followed by the South Side Slopes, Carrick, and South Side Flats, make the most 311 requests. It would be interesting to get some neighborhood population data and compute the number of requests per capita.
|
14,663 | <ASSISTANT_TASK:>
Python Code:
# built-in python modules
import os
import inspect
import datetime
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting stuff
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# seaborn makes your plots look better
try:
import seaborn as sns
sns.set(rc={"figure.figsize": (12, 6)})
except ImportError:
print('We suggest you install seaborn using conda or pip and rerun this cell')
# finally, we import the pvlib library
import pvlib
import pvlib
from pvlib import pvsystem
pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))
tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'))
tmy2_data, tmy2_metadata = pvlib.tmy.readtmy2(os.path.join(pvlib_abspath, 'data', '12839.tm2'))
pvlib.pvsystem.systemdef(tmy3_metadata, 0, 0, .1, 5, 5)
pvlib.pvsystem.systemdef(tmy2_metadata, 0, 0, .1, 5, 5)
angles = np.linspace(-180,180,3601)
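# pvsystem.ashraeiam(b, aoi): the first argument (0.05 here) is the ASHRAE incidence-angle modifier coefficient b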
ashraeiam = pd.Series(pvsystem.ashraeiam(.05, angles), index=angles)
ashraeiam.plot()
plt.ylabel('ASHRAE modifier')
plt.xlabel('input angle (deg)')
angles = np.linspace(-180,180,3601)
physicaliam = pd.Series(pvsystem.physicaliam(4, 0.002, 1.526, angles), index=angles)
physicaliam.plot()
plt.ylabel('physical modifier')
plt.xlabel('input index')
plt.figure()
ashraeiam.plot(label='ASHRAE')
physicaliam.plot(label='physical')
plt.ylabel('modifier')
plt.xlabel('input angle (deg)')
plt.legend()
# scalar inputs
pvsystem.sapm_celltemp(900, 5, 20) # irrad, wind, temp
# vector inputs
times = pd.DatetimeIndex(start='2015-01-01', end='2015-01-02', freq='12H')
temps = pd.Series([0, 10, 5], index=times)
irrads = pd.Series([0, 500, 0], index=times)
winds = pd.Series([10, 5, 0], index=times)
pvtemps = pvsystem.sapm_celltemp(irrads, winds, temps)
pvtemps.plot()
wind = np.linspace(0,20,21)
temps = pd.DataFrame(pvsystem.sapm_celltemp(900, wind, 20), index=wind)
temps.plot()
plt.legend()
plt.xlabel('wind speed (m/s)')
plt.ylabel('temperature (deg C)')
atemp = np.linspace(-20,50,71)
temps = pvsystem.sapm_celltemp(900, 2, atemp).set_index(atemp)
temps.plot()
plt.legend()
plt.xlabel('ambient temperature (deg C)')
plt.ylabel('temperature (deg C)')
irrad = np.linspace(0,1000,101)
temps = pvsystem.sapm_celltemp(irrad, 2, 20).set_index(irrad)
temps.plot()
plt.legend()
plt.xlabel('incident irradiance (W/m**2)')
plt.ylabel('temperature (deg C)')
models = ['open_rack_cell_glassback',
'roof_mount_cell_glassback',
'open_rack_cell_polymerback',
'insulated_back_polymerback',
'open_rack_polymer_thinfilm_steel',
'22x_concentrator_tracker']
temps = pd.DataFrame(index=['temp_cell','temp_module'])
for model in models:
temps[model] = pd.Series(pvsystem.sapm_celltemp(1000, 5, 20, model=model).ix[0])
temps.T.plot(kind='bar') # try removing the transpose operation and replotting
plt.legend()
plt.ylabel('temperature (deg C)')
inverters = pvsystem.retrieve_sam('sandiainverter')
inverters
vdcs = pd.Series(np.linspace(0,50,51))
idcs = pd.Series(np.linspace(0,11,110))
pdcs = idcs * vdcs
pacs = pvsystem.snlinverter(inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_'], vdcs, pdcs)
#pacs.plot()
plt.plot(pacs, pdcs)
plt.ylabel('ac power')
plt.xlabel('dc power')
cec_modules = pvsystem.retrieve_sam('cecmod')
cec_modules
cecmodule = cec_modules.Example_Module
cecmodule
sandia_modules = pvsystem.retrieve_sam(name='SandiaMod')
sandia_modules
sandia_module = sandia_modules.Canadian_Solar_CS5P_220M___2009_
sandia_module
from pvlib import clearsky
from pvlib import irradiance
from pvlib import atmosphere
from pvlib.location import Location
tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson')
times = pd.date_range(start=datetime.datetime(2014,4,1), end=datetime.datetime(2014,4,2), freq='30s')
ephem_data = pvlib.solarposition.get_solarposition(times, tus)
irrad_data = clearsky.ineichen(times, tus)
#irrad_data.plot()
aoi = irradiance.aoi(0, 0, ephem_data['apparent_zenith'], ephem_data['azimuth'])
#plt.figure()
#aoi.plot()
am = atmosphere.relativeairmass(ephem_data['apparent_zenith'])
# a hot, sunny spring day in the desert.
temps = pvsystem.sapm_celltemp(irrad_data['ghi'], 0, 30)
sapm_1 = pvsystem.sapm(sandia_module, irrad_data['dni']*np.cos(np.radians(aoi)),
irrad_data['ghi'], temps['temp_cell'], am, aoi)
sapm_1.head()
def plot_sapm(sapm_data):
"""Makes a nice figure with the SAPM data.
Parameters
----------
sapm_data : DataFrame
The output of ``pvsystem.sapm``
"""
fig, axes = plt.subplots(2, 3, figsize=(16,10), sharex=False, sharey=False, squeeze=False)
plt.subplots_adjust(wspace=.2, hspace=.3)
ax = axes[0,0]
sapm_data.filter(like='i_').plot(ax=ax)
ax.set_ylabel('Current (A)')
ax = axes[0,1]
sapm_data.filter(like='v_').plot(ax=ax)
ax.set_ylabel('Voltage (V)')
ax = axes[0,2]
sapm_data.filter(like='p_').plot(ax=ax)
ax.set_ylabel('Power (W)')
ax = axes[1,0]
[ax.plot(sapm_data['effective_irradiance'], current, label=name) for name, current in
sapm_data.filter(like='i_').iteritems()]
ax.set_ylabel('Current (A)')
ax.set_xlabel('Effective Irradiance')
ax.legend(loc=2)
ax = axes[1,1]
[ax.plot(sapm_data['effective_irradiance'], voltage, label=name) for name, voltage in
sapm_data.filter(like='v_').iteritems()]
ax.set_ylabel('Voltage (V)')
ax.set_xlabel('Effective Irradiance')
ax.legend(loc=4)
ax = axes[1,2]
ax.plot(sapm_data['effective_irradiance'], sapm_data['p_mp'], label='p_mp')
ax.set_ylabel('Power (W)')
ax.set_xlabel('Effective Irradiance')
ax.legend(loc=2)
# needed to show the time ticks
for ax in axes.flatten():
for tk in ax.get_xticklabels():
tk.set_visible(True)
plot_sapm(sapm_1)
temps = pvsystem.sapm_celltemp(irrad_data['ghi'], 10, 5)
sapm_2 = pvsystem.sapm(sandia_module, irrad_data['dni']*np.cos(np.radians(aoi)),
irrad_data['dhi'], temps['temp_cell'], am, aoi)
plot_sapm(sapm_2)
sapm_1['p_mp'].plot(label='30 C, 0 m/s')
sapm_2['p_mp'].plot(label=' 5 C, 10 m/s')
plt.legend()
plt.ylabel('Pmp')
plt.title('Comparison of a hot, calm day and a cold, windy day')
import warnings
warnings.simplefilter('ignore', np.RankWarning)
def sapm_to_ivframe(sapm_row):
pnt = sapm_row.T.ix[:,0]
ivframe = {'Isc': (pnt['i_sc'], 0),
'Pmp': (pnt['i_mp'], pnt['v_mp']),
'Ix': (pnt['i_x'], 0.5*pnt['v_oc']),
'Ixx': (pnt['i_xx'], 0.5*(pnt['v_oc']+pnt['v_mp'])),
'Voc': (0, pnt['v_oc'])}
ivframe = pd.DataFrame(ivframe, index=['current', 'voltage']).T
ivframe = ivframe.sort('voltage')
return ivframe
def ivframe_to_ivcurve(ivframe, points=100):
ivfit_coefs = np.polyfit(ivframe['voltage'], ivframe['current'], 30)
fit_voltages = np.linspace(0, ivframe.ix['Voc', 'voltage'], points)
fit_currents = np.polyval(ivfit_coefs, fit_voltages)
return fit_voltages, fit_currents
sapm_to_ivframe(sapm_1['2014-04-01 10:00:00'])
times = ['2014-04-01 07:00:00', '2014-04-01 08:00:00', '2014-04-01 09:00:00',
'2014-04-01 10:00:00', '2014-04-01 11:00:00', '2014-04-01 12:00:00']
times.reverse()
fig, ax = plt.subplots(1, 1, figsize=(12,8))
for time in times:
ivframe = sapm_to_ivframe(sapm_1[time])
fit_voltages, fit_currents = ivframe_to_ivcurve(ivframe)
ax.plot(fit_voltages, fit_currents, label=time)
ax.plot(ivframe['voltage'], ivframe['current'], 'ko')
ax.set_xlabel('Voltage (V)')
ax.set_ylabel('Current (A)')
ax.set_ylim(0, None)
ax.set_title('IV curves at multiple times')
ax.legend()
photocurrent, saturation_current, resistance_series, resistance_shunt, nNsVth = (
pvsystem.calcparams_desoto(irrad_data.ghi,
temp_cell=temps['temp_cell'],
alpha_isc=cecmodule['alpha_sc'],
module_parameters=cecmodule,
EgRef=1.121,
dEgdT=-0.0002677) )
photocurrent.plot()
plt.ylabel('Light current I_L (A)')
saturation_current.plot()
plt.ylabel('Saturation current I_0 (A)')
resistance_series
resistance_shunt.plot()
plt.ylabel('Shunt resistance (ohms)')
plt.ylim(0,100)
nNsVth.plot()
plt.ylabel('nNsVth')
single_diode_out = pvsystem.singlediode(cecmodule, photocurrent, saturation_current,
resistance_series, resistance_shunt, nNsVth)
single_diode_out
single_diode_out['i_sc'].plot()
single_diode_out['v_oc'].plot()
single_diode_out['p_mp'].plot()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: systemdef
Step2: Angle of Incidence Modifiers
Step3: Sandia Cell Temp correction
Step4: Cell and module temperature as a function of wind speed.
Step5: Cell and module temperature as a function of ambient temperature.
Step6: Cell and module temperature as a function of incident irradiance.
Step7: Cell and module temperature for different module and racking types.
Step8: snlinverter
Step9: Need to put more effort into describing this function.
Step10: The Sandia module database.
Step11: Generate some irradiance data for modeling.
Step13: Now we can run the module parameters and the irradiance data through the SAPM function.
Step14: For comparison, here's the SAPM for a sunny, windy, cold version of the same day.
Step15: SAPM IV curves
Step16: desoto
Step17: Single diode model
|
14,664 | <ASSISTANT_TASK:>
Python Code:
x = [0.5,1.3, 2.1, 1.0, 2.1, 1.7, 1.2, 3.9, 3.9, 1.5, 3.5, 3.9, 5.7, 4.7, 5.8, 4.6, 5.1, 5.9, 5.5, 6.4, 6.7, 7.8, 7.4, 6.7, 8.4, 6.9, 10.2, 9.7, 10.0, 9.9]
y = [-1.6,0.5, 3.0, 3.1, 1.5, -1.8, -3.6, 7.0, 8.6, 2.2, 9.3, 3.6, 14.1, 9.5, 14.0, 7.4, 6.4, 17.2, 11.8, 12.2, 18.9, 21.9, 20.6, 15.7, 23.7, 13.6, 26.8, 22.0, 27.5, 23.3]
x = [-5.8,-4.6, -3.9, -3.4, -1.8, -2.1, -3.0, -0.8, 0.4, -0.2, -0.4, -0.0, 2.0, 1.1, 1.4, 1.2, 3.3, 4.3, 4.3, 3.0]
y = [-6.4,-7.7, -9.3, -9.2, -8.9, -7.3, -9.5, -5.0, -3.7, -6.9, -4.0, -3.8, 2.6, -0.6, -0.7, -0.1, 5.0, 4.8, 8.5, 2.5]
x = [1.4,2.3, 3.7, 5.3, 6.6, 8.2, 10.2, 11.8, 12.7, 13.3, 14.6, 17.3, 18.6, 19.5, 21.6, 22.7, 23.6, 24.1]
y = [1.0,0.3, -0.1, -0.1, -0.3, -0.4, -0.4, -0.5, -0.4, -0.5, -0.4, -0.6, -0.8, -0.8, -0.6, -0.9, -0.7, -1.1]
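# A minimal least-squares sketch for the x/y pair above (an added example, not
# necessarily the method the assignment asks for):
import numpy as np
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)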
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 4. Regression in Matlab (30 Points)
Step2: 5. Python Regression (40 Points)
|
14,665 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from IPython.display import Image
from IPython.display import HTML
assert True # leave this to grade the import statements
Image(url='http://www.mohamedmalik.com/wp-content/uploads/2014/11/Physics.jpg',embed=True,width=600,height=600)
assert True # leave this to grade the image display
%%HTML
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge(e)</th>
<th>Mass(MeV/$c^2$)</th>
</tr>
<tr>
<td>up</td>
<td>u</td>
<td>$\bar{u}$</td>
<td>$+\frac{2}{3}$</td>
<td>1.5-3.3</td>
</tr>
<tr>
<td>down</td>
<td>d</td>
<td>$\bar{d}$</td>
<td>$-\frac{1}{3}$</td>
<td>3.5-6.0</td>
</tr>
<tr>
<td>charm</td>
<td>c</td>
<td>$\bar{c}$</td>
<td>$+\frac{2}{3}$</td>
<td>1160-1340</td>
</tr>
<tr>
<td>strange</td>
<td>s</td>
<td>$\bar{s}$</td>
<td>$-\frac1{3}$</td>
<td>70-130</td>
</tr>
<tr>
<td>top</td>
<td>t</td>
<td>$\bar{t}$</td>
<td>$+\frac{2}{3}$</td>
<td>169,100-173,300</td>
</tr>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\bar{b}$</td>
<td>$-\frac1{3}$</td>
<td>4130-4370</td>
</tr>
</table>
raise NotImplementedError()
assert True # leave this here to grade the quark table
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
|
14,666 | <ASSISTANT_TASK:>
Python Code:
# Load the required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Create a data frame from a numpy array
arreglo = np.random.randn(7,4)
columnas = list('ABCD')
df = pd.DataFrame(arreglo, columns=columnas )
df
# Create a data frame from a dictionary
df2 = pd.DataFrame({'Orden':list(range(5)),'Valor': np.random.random_sample(5),
'Categoría':['a', 'a', np.nan, 'b', 'b']})
df2
# Create a data frame from an existing file
df3 = pd.read_csv('titanic.csv', header=0)
df3.head()
# what are the indices?
df.index
# what are the labels?
df.columns
# what values does my data frame contain?
df.values
# Now let's look at a summary of our data
df.describe()
# if you want to know what else this method does, run this command
df.describe?
# Sort by column values
df2.sort_values(by='Valor')
# Sort by column labels
df.sort_index(axis=1, ascending=False)
# First let's look for specific values in one column of our data
df2.Categoría.isin(['A', 'B', 'b'])
# Now search across the whole data frame
df2.isin(['A', 'B', 'b'])
# Let's look for missing values
pd.isnull(df2)
# what do we do with this one? do we drop it?
df2.dropna(how='any')
# or do we fill it in differently?
df2.fillna(value='c')
# get the mean per column
df.mean()
# get the mean per row
df.mean(1)
# variance per column
df.var()
# variance per row
df.var(1)
# Frequency of values in a specific column (series)
df2.Categoría.value_counts()
# Apply a function to the whole data frame
print(df)
df.apply(np.sum) # sum per column
df.apply(np.square) # square every value
# just to check
0.131053**2
# Concatenate the two data frames
pd.concat([df, df2])
# now by columns
pd.concat([df, df2], axis=1)
# Group the third data frame by survival
df3.groupby('Survived').mean()
df3.head()
# now group it by survival and sex
df3.groupby(['Survived', 'Sex']).mean()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have built our data frames, let's look at their general characteristics...
Step2: Exercise 1
Step3: Exercise 2
Step4: Basic statistics
Step5: Exercise 3
Step6: There are many other methods for combining data frames, which can be found on the following page
|
14,667 | <ASSISTANT_TASK:>
Python Code:
import pandas
import numpy
import itertools
gene_matrix_for_network_df = pandas.read_csv("shared/bladder_cancer_genes_tcga.txt", sep="\t")
gene_matrix_for_network = gene_matrix_for_network_df.as_matrix()
print(gene_matrix_for_network.shape)
genes_keep = numpy.where(numpy.median(gene_matrix_for_network, axis=1) > 14)
matrix_filt = gene_matrix_for_network[genes_keep, ][0]
matrix_filt.shape
N = matrix_filt.shape[0]
M = matrix_filt.shape[1]
gene_matrix_binarized = numpy.tile(numpy.mean(matrix_filt, axis=1),(M,1)).transpose() < matrix_filt
print(gene_matrix_binarized.shape)
gene_matrix_binarized[0:4,0:4]
def entropy_multiple_vecs(binary_vecs):
## use shape to get the numbers of rows and columns as [n,M]
[n, M] = binary_vecs.shape
# make a "M x n" dataframe from the transpose of the matrix binary_vecs
binary_df = pandas.DataFrame(binary_vecs.transpose())
# use the groupby method to obtain a data frame of counts of unique occurrences of the 2^n possible logical states
binary_df_counts = binary_df.groupby(binary_df.columns.values.tolist()).size().values
# divide the matrix of counts by M, to get a probability matrix
probvec = binary_df_counts/M
# compute the shannon entropy using the formula
hvec = -probvec*numpy.log2(probvec)
return numpy.sum(hvec)
print(entropy_multiple_vecs(gene_matrix_binarized[0:4,]))
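# REVEAL main loop: for each target gene, search regulator sets of increasing size
# and accept the set X whose mutual information with the gene, H(G) - [H(G,X) - H(X)],
# explains at least ratio_thresh of the gene's own entropy H(G).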
ratio_thresh = 0.1
genes_to_fit = list(range(0,N))
stage = 0
regulators = [None]*N
entropies_for_stages = [None]*N
max_stage = 4
entropies_for_stages[0] = numpy.zeros(N)
for i in range(0,N):
single_row_matrix = gene_matrix_binarized[i,:,None].transpose()
entropies_for_stages[0][i] = entropy_multiple_vecs(single_row_matrix)
genes_to_fit = set(range(0,N))
for stage in range(1,max_stage + 1):
for gene in genes_to_fit.copy():
# we are trying to find regulators for gene "gene"
poss_regs = set(range(0,N)) - set([gene])
poss_regs_combs = [list(x) for x in itertools.combinations(poss_regs, stage)]
HGX = numpy.array([ entropy_multiple_vecs(gene_matrix_binarized[[gene] + poss_regs_comb,:]) for poss_regs_comb in poss_regs_combs ])
HX = numpy.array([ entropy_multiple_vecs(gene_matrix_binarized[poss_regs_comb,:]) for poss_regs_comb in poss_regs_combs ])
HG = entropies_for_stages[0][gene]
min_value = numpy.min(HGX - HX)
if HG - min_value >= ratio_thresh * HG:
regulators[gene]=poss_regs_combs[numpy.argmin(HGX - HX)]
genes_to_fit.remove(gene)
regulators
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data file shared/bladder_cancer_genes_tcga.txt into a pandas.DataFrame, convert it to a numpy.ndarray matrix, and print the matrix dimensions
Step2: Filter the matrix to include only rows for which the column-wise median is > 14; matrix should now be 13 x 414.
Step3: Binarize the gene expression matrix using the mean value as a breakpoint, turning it into a NxM matrix of booleans (True/False). Call it gene_matrix_binarized.
Step4: Test your matrix by printing the first four columns of the first four rows
Step5: The core part of the REVEAL algorithm is a function that can compute the joint entropy of a collection of binary (TRUE/FALSE) vectors X1, X2, ..., Xn (where length(X1) = length(Xi) = M).
Step6: This test case should produce the value 3.938
Step7: Example implementation of the REVEAL algorithm
|
14,668 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
data = tsc.loadExample('fish-series')
examples = data.subset(nsamples=50, thresh=1)
plt.plot(examples.T[0:20,:]);
examples = data.center().subset(nsamples=50, thresh=10)
plt.plot(examples.T[0:20,:]);
examples = data.squelch(150).zscore().subset(nsamples=50, thresh=0.1)
plt.plot(examples.T[0:20,:]);
data.index.shape
data.between(0,8).first()
data.between(0,8).index
data.select(lambda x: x < 5).index
plt.plot(data.toTimeSeries().normalize().max());
plt.plot(data.toTimeSeries().normalize().mean());
plt.plot(data.toTimeSeries().normalize().min());
data.seriesMean().first()
data.seriesStdev().first()
from numpy import random
signal = random.randn(240)
data.correlate(signal).first()
data.dims.max
data.dims.min
data.keys().take(10)
data.subToInd(isOneBased=False).keys().take(10)
data.subToInd(isOneBased=False).indToSub(isOneBased=False).keys().take(10)
keys, values = data.query(inds=[[100,101],[200]], isOneBased=False)
keys
values.shape
out = data.select(0).pack()
out.shape
out = data.between(0,2).pack()
out.shape
ts = data.toTimeSeries()
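# fourier(freq=5) computes, for every record, the coherence and phase at the requested frequency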
fr = ts.fourier(freq=5)
fr.index
fr.select('coherence').first()
plt.plot(ts.mean())
plt.plot(ts.detrend('nonlinear', order=5).mean());
mat = data.toRowMatrix()
from thunder import Colorize
Colorize.image(mat.cov())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading series
Step2: Inspection
Step3: Note the variation in raw intensity levels.
Step4: Related methods include standardize, detrend, and normalize (the latter two are specific to TimeSeries, see below)
Step5: For example, to select a range
Step6: Note that the index changes to reflect the subselected range
Step7: We can also select based on an arbitrary criterion function
Step8: The default index generated for Series objects will be the range of integers starting at zero and ending one before the length of the series data, as shown in these examples. However, other data types can also be used as the index for a series object, such as a sequence of strings, providing text labels for each element in the series array, or a tuple with indices at different levels. See the tutorial on Multi-indexing tutorial for this usage.
Step9: To summarize within records
Step10: We can also correlate each record with a signal of interest. As expected, for a random signal, the correlation should be near 0.
Step11: Keys
Step12: For keys that correspond to subscripts (e.g. indices of the rows and columns of a matrix, coordinates in space), we can convert between subscript and linear indexing. The default for these conversions is currently onebased subscript indexing, so we need to set onebased to False (this will likely change in a future release).
Step13: The query method can be used to average subselected records based on their (linearized) keys. It returns the mean value and key for each of the provided index lists.
Step14: The pack method collects a series into a local array, reshaped based on the keys. If there are multiple values per record, all will be collected into the local array, so typically we select a subset of values before packing to avoid overwhelming the local returning a very large amount of data.
Step15: Conversions
Step16: Or detrend for detrending data over time
Step17: A RowMatrix provides a variety of methods for working with distributed matrices and matrix operations
|
14,669 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
%load_ext memory_profiler
from pomegranate import BayesianNetwork
import seaborn, time
seaborn.set_style('whitegrid')
X = numpy.random.randint(2, size=(2000, 7))
X[:,3] = X[:,1]
X[:,6] = X[:,1]
X[:,0] = X[:,2]
X[:,4] = X[:,5]
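# columns 3 and 6 copy column 1, column 0 copies column 2, and column 4 copies
# column 5, so the learned structure should connect exactly those pairs.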
model = BayesianNetwork.from_samples(X, algorithm='exact')
print(model.structure)
model.plot()
from sklearn.datasets import load_digits
X, y = load_digits(10, True)
X = X > numpy.mean(X)
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.imshow(X[0].reshape(8, 8), interpolation='nearest')
plt.grid(False)
plt.subplot(132)
plt.imshow(X[1].reshape(8, 8), interpolation='nearest')
plt.grid(False)
plt.subplot(133)
plt.imshow(X[2].reshape(8, 8), interpolation='nearest')
plt.grid(False)
X = X[:,:18]
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact-dp') # << BNSL done here!
t1 = time.time() - tic
p1 = model.log_probability(X).sum()
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
t2 = time.time() - tic
p2 = model.log_probability(X).sum()
print("Shortest Path")
print("Time (s): ", t1)
print("P(D|M): ", p1)
%memit BayesianNetwork.from_samples(X, algorithm='exact-dp')
print()
print("A* Search")
print("Time (s): ", t2)
print("P(D|M): ", p2)
%memit BayesianNetwork.from_samples(X, algorithm='exact')
tic = time.time()
model = BayesianNetwork.from_samples(X) # << Default BNSL setting
t = time.time() - tic
p = model.log_probability(X).sum()
print("Greedy")
print("Time (s): ", t)
print("P(D|M): ", p)
%memit BayesianNetwork.from_samples(X)
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='chow-liu') # << Default BNSL setting
t = time.time() - tic
p = model.log_probability(X).sum()
print("Chow-Liu")
print("Time (s): ", t)
print("P(D|M): ", p)
%memit BayesianNetwork.from_samples(X, algorithm='chow-liu')
X, _ = load_digits(10, True)
X = X > numpy.mean(X)
t1, t2, t3, t4 = [], [], [], []
p1, p2, p3, p4 = [], [], [], []
n_vars = range(8, 19)
for i in n_vars:
X_ = X[:,:i]
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='exact-dp') # << BNSL done here!
t1.append(time.time() - tic)
p1.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='exact')
t2.append(time.time() - tic)
p2.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='greedy')
t3.append(time.time() - tic)
p3.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='chow-liu')
t4.append(time.time() - tic)
p4.append(model.log_probability(X_).sum())
plt.figure(figsize=(14, 4))
plt.subplot(121)
plt.title("Time to Learn Structure", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.ylabel("Time (s)", fontsize=14)
plt.xlabel("Variables", fontsize=14)
plt.plot(n_vars, t1, c='c', label="Exact Shortest")
plt.plot(n_vars, t2, c='m', label="Exact A*")
plt.plot(n_vars, t3, c='g', label="Greedy")
plt.plot(n_vars, t4, c='r', label="Chow-Liu")
plt.legend(fontsize=14, loc=2)
plt.subplot(122)
plt.title("$P(D|M)$ with Resulting Model", fontsize=14)
plt.xlabel("Variables", fontsize=14)
plt.ylabel("logp", fontsize=14)
plt.plot(n_vars, p1, c='c', label="Exact Shortest")
plt.plot(n_vars, p2, c='m', label="Exact A*")
plt.plot(n_vars, p3, c='g', label="Greedy")
plt.plot(n_vars, p4, c='r', label="Chow-Liu")
plt.legend(fontsize=14)
from pomegranate import DiscreteDistribution, ConditionalProbabilityTable, Node
BRCA1 = DiscreteDistribution({0: 0.999, 1: 0.001})
BRCA2 = DiscreteDistribution({0: 0.985, 1: 0.015})
LCT = DiscreteDistribution({0: 0.950, 1: 0.050})
OC = ConditionalProbabilityTable([[0, 0, 0, 0.999],
[0, 0, 1, 0.001],
[0, 1, 0, 0.750],
[0, 1, 1, 0.250],
[1, 0, 0, 0.700],
[1, 0, 1, 0.300],
[1, 1, 0, 0.050],
[1, 1, 1, 0.950]], [BRCA1, BRCA2])
LI = ConditionalProbabilityTable([[0, 0, 0.99],
[0, 1, 0.01],
[1, 0, 0.20],
[1, 1, 0.80]], [LCT])
PREG = DiscreteDistribution({0: 0.90, 1: 0.10})
LE = ConditionalProbabilityTable([[0, 0, 0.99],
[0, 1, 0.01],
[1, 0, 0.25],
[1, 1, 0.75]], [OC])
BLOAT = ConditionalProbabilityTable([[0, 0, 0, 0.85],
[0, 0, 1, 0.15],
[0, 1, 0, 0.70],
[0, 1, 1, 0.30],
[1, 0, 0, 0.40],
[1, 0, 1, 0.60],
[1, 1, 0, 0.10],
[1, 1, 1, 0.90]], [OC, LI])
LOA = ConditionalProbabilityTable([[0, 0, 0, 0.99],
[0, 0, 1, 0.01],
[0, 1, 0, 0.30],
[0, 1, 1, 0.70],
[1, 0, 0, 0.95],
[1, 0, 1, 0.05],
[1, 1, 0, 0.95],
[1, 1, 1, 0.05]], [PREG, OC])
VOM = ConditionalProbabilityTable([[0, 0, 0, 0, 0.99],
[0, 0, 0, 1, 0.01],
[0, 0, 1, 0, 0.80],
[0, 0, 1, 1, 0.20],
[0, 1, 0, 0, 0.40],
[0, 1, 0, 1, 0.60],
[0, 1, 1, 0, 0.30],
[0, 1, 1, 1, 0.70],
[1, 0, 0, 0, 0.30],
[1, 0, 0, 1, 0.70],
[1, 0, 1, 0, 0.20],
[1, 0, 1, 1, 0.80],
[1, 1, 0, 0, 0.05],
[1, 1, 0, 1, 0.95],
[1, 1, 1, 0, 0.01],
[1, 1, 1, 1, 0.99]], [PREG, OC, LI])
AC = ConditionalProbabilityTable([[0, 0, 0, 0.95],
[0, 0, 1, 0.05],
[0, 1, 0, 0.01],
[0, 1, 1, 0.99],
[1, 0, 0, 0.40],
[1, 0, 1, 0.60],
[1, 1, 0, 0.20],
[1, 1, 1, 0.80]], [PREG, LI])
s1 = Node(BRCA1, name="BRCA1")
s2 = Node(BRCA2, name="BRCA2")
s3 = Node(LCT, name="LCT")
s4 = Node(OC, name="OC")
s5 = Node(LI, name="LI")
s6 = Node(PREG, name="PREG")
s7 = Node(LE, name="LE")
s8 = Node(BLOAT, name="BLOAT")
s9 = Node(LOA, name="LOA")
s10 = Node(VOM, name="VOM")
s11 = Node(AC, name="AC")
model = BayesianNetwork("Hut")
model.add_nodes(s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11)
model.add_edge(s1, s4)
model.add_edge(s2, s4)
model.add_edge(s3, s5)
model.add_edge(s4, s7)
model.add_edge(s4, s8)
model.add_edge(s4, s9)
model.add_edge(s4, s10)
model.add_edge(s5, s8)
model.add_edge(s5, s10)
model.add_edge(s5, s11)
model.add_edge(s6, s9)
model.add_edge(s6, s10)
model.add_edge(s6, s11)
model.bake()
plt.figure(figsize=(14, 10))
model.plot()
plt.show()
import networkx
from pomegranate.utils import plot_networkx
constraints = networkx.DiGraph()
constraints.add_edge('genetic conditions', 'diseases')
constraints.add_edge('diseases', 'symptoms')
plot_networkx(constraints)
constraints = networkx.DiGraph()
constraints.add_edge(0, 1)
constraints.add_edge(1, 2)
constraints.add_edge(0, 2)
plot_networkx(constraints)
constraints = networkx.DiGraph()
constraints.add_edge(0, 1)
constraints.add_edge(0, 0)
plot_networkx(constraints)
numpy.random.seed(6)
X = numpy.random.randint(2, size=(200, 15))
X[:,1] = X[:,7]
X[:,12] = 1 - X[:,7]
X[:,5] = X[:,3]
X[:,13] = X[:,11]
X[:,14] = X[:,11]
a = networkx.DiGraph()
b = tuple((0, 1, 2, 3, 4))
c = tuple((5, 6, 7, 8, 9))
d = tuple((10, 11, 12, 13, 14))
a.add_edge(b, c)
a.add_edge(c, d)
print("Constraint Graph")
plot_networkx(a)
plt.show()
print("Learned Bayesian Network")
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=a)
plt.figure(figsize=(16, 8))
model.plot()
plt.show()
print("pomegranate time: ", time.time() - tic, model.structure)
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
plt.figure(figsize=(16, 8))
model.plot()
plt.show()
print("pomegranate time: ", time.time() - tic, model.structure)
constraint_times, times = [], []
x = numpy.arange(1, 7)
for i in x:
symptoms = tuple(range(i))
diseases = tuple(range(i, i*2))
genetic = tuple(range(i*2, i*3))
constraints = networkx.DiGraph()
constraints.add_edge(genetic, diseases)
constraints.add_edge(diseases, symptoms)
X = numpy.random.randint(2, size=(2000, i*3))
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=constraints)
constraint_times.append( time.time() - tic )
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
times.append( time.time() - tic )
plt.figure(figsize=(14, 6))
plt.title('Time To Learn Bayesian Network', fontsize=18)
plt.xlabel("Number of Variables", fontsize=14)
plt.ylabel("Time (s)", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.plot( x*3, times, linewidth=3, color='c', label='Exact')
plt.plot( x*3, constraint_times, linewidth=3, color='m', label='Constrained')
plt.legend(loc=2, fontsize=16)
plt.yscale('log')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The structure attribute returns a tuple of tuples, where each inner tuple corresponds to one node in the graph (and to the column of data it was learned from). The numbers in that inner tuple are the parents of that node. This structure says that node 3 has node 1 as a parent, that node 2 has node 0 as a parent, and so forth (a small helper for reading these tuples is sketched after this step list). It seems to faithfully recover the underlying dependencies in the data.
Step2: These results show that the A* algorithm is computationally faster and requires far less memory than the traditional algorithm, making it a better default for the 'exact' setting. The memory used by the BNSL process is reported under 'increment', not 'peak memory': 'peak memory' is the total memory used by everything, while 'increment' is the difference in peak memory before and after the function runs.
Step3: Approximate Learning
Step4: Comparison
Step5: We can see the expected results: the A* algorithm works faster than the shortest-path variant, the greedy one faster than that, and Chow-Liu the fastest. On the right plot the purple and cyan lines superimpose because both exact variants produce graphs with the same score, followed closely by the greedy algorithm, with Chow-Liu performing the worst.
Step6: This network contains three layers, with symptoms on the bottom (low energy, bloating, loss of appetite, vomiting, and abdominal cramps), diseases in the middle (ovarian cancer, lactose intolerance, and pregnancy), and genetic tests on the top for three different genetic mutations. The edges in this graph are constrained such that symptoms are explained by diseases, and diseases can be partially explained by genetic mutations. There are no edges from diseases to genetic conditions, and no edges from genetic conditions to symptoms. If we were going to design a more efficient search algorithm, we would want to exploit this fact to drastically reduce the search space of graphs.
Step7: All variables belonging to these categories would be grouped under the corresponding node of the constraint graph. This would define a scaffold for structure learning.
Step8: In this graph, we're saying that variable 0 can be a parent for 1 or 2, and that variable 1 can be a parent for variable 2. In the same way that putting multiple variables in a node of the constraint graph allowed us to define layers, putting a single variable in the nodes of a constraint graph can allow us to define an ordering.
Step9: In this situation we would have to run the exponential time algorithm on the variables in node 0 to find the optimal parents, and then run the independent parents algorithm on the variables in node 1 drawing only from the variables in node 0. To be specific
Step10: We see that the underlying structure is reconstructed perfectly here. Let's see what would happen if we didn't use the exact algorithm.
Step11: It looks like we got three desirable attributes by using a constraint graph. The first is that there was over an order of magnitude speed improvement in finding the optimal graph. The second is that we were able to remove some edges we didn't want in the final Bayesian network, such as those between 11, 13, and 14. We also removed the edge between 1 and 12 and 1 and 3, which are spurious given the model that we originally defined. The third desired attribute is that we can specify the direction of some of the edges and get a better causal model.
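Editor's addendum to Step1: a small helper can make the structure tuple easier to read. This is a sketch, not part of the original notebook; the example structure below is hand-written for illustration rather than taken from the run above.
def describe_structure(structure):
    # structure is pomegranate's tuple of parent-index tuples, one per node
    return {child: list(parents) for child, parents in enumerate(structure)}
print(describe_structure(((), (), (0,), (1,), (5,), (), (1,))))
# -> {0: [], 1: [], 2: [0], 3: [1], 4: [5], 5: [], 6: [1]}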
|
14,670 | <ASSISTANT_TASK:>
Python Code:
import random
from numba import jit
# Monte Carlo simulation function. This is defined as
# a function so the numba library can be used to speed
# up execution. Otherwise, this would run much slower.
# p1 is the probability of the first area, and s1 is the
# score of the first area, and so on. The probabilities
# are cumulative.
@jit
def MCHist(n_hist, p1, s1, p2, s2, p3, s3, p4, s4):
money = 0
for n in range(1, n_hist):
x = random.random()
if x <= p1:
money += s1
elif x <= (p1 + p2):
money += s2
elif x <= (p1 + p2 + p3):
money += s3
elif x <= (p1 + p2 + p3 + p4):
money += s4
return money
# Run the simulation, iterating over each number of
# histories in the num_hists array. Don't cheat and look
# at these probabilities!! "You" don't know them yet.
num_hist = 1e3 # $500
results = MCHist(num_hist, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2)
payout = round(results / num_hist, 3)
print('Expected payout per spin is ${}'.format(payout))
num_hist2 = 1e8 # $50 million
results2 = MCHist(num_hist2, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2)
payout2 = round(results2 / num_hist2, 3)
print('Expected payout per spin is ${}'.format(payout2))
num_hist3 = 1e3 # $500
results3 = MCHist(num_hist3, 0.25, 0.2, 0.25, 0.36, 0.25, 0.3, 0.25, 0.4)
payout3 = round(results3 / num_hist3, 5)
print('Expected payout per spin is ${}'.format(payout3))
num_hist4 = 1e3 # $500
results4 = MCHist(num_hist4, 0.159, 0.315, 0.286, 0.315, 0.238, 0.315, 0.317, 0.315)
payout4 = round(results4 / num_hist4, 3)
print('Expected payout per spin is ${}'.format(payout4))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After spending a significant amount of time spinning the wheel, you feel a little unsatisfied. Sure, you found the expected payout, but there's a nagging feeling that your answer might have been much more accurate if you just had more money to spend. In fact, you're not even completely sure that the last two digits are correct (see the N^(-1/2) rule from the previous blog post; a standalone sketch of that rule follows this step list).
Step2: Much more satisfying. You're now fairly confident of the answer within about a hundredths of a cent. Unfortunately, you don't actually have $50 million to spend (and you certainly don't have the time for 100 million spins). Is there a way to increase the accuracy (i.e. reduce the variance) of the simulation without spending more money (i.e. requiring more histories)?
Step3: Based on the simulation above, we got an answer that was closer to the "true" value of 0.315, even though we used the same number of spins (1000). (Since Monte Carlo simulation is stochastic, it's possible that if you re-run this notebook, you might get an answer that's farther away from the "true" value, but the weight correction method will more consistently give an answer that's closer to the true value)
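Editor's sketch of the N^(-1/2) rule referenced in Step1, using the first wheel's payout table from the code above (p=0.05 pays $1, 0.30 pays $0.30, 0.15 pays $0.50, 0.50 pays $0.20); the standard error of the estimated payout shrinks roughly as 1/sqrt(n):
import random
def spin():
    x = random.random()
    if x < 0.05: return 1.0
    if x < 0.35: return 0.3
    if x < 0.50: return 0.5
    return 0.2
for n in (10**3, 10**5):
    samples = [spin() for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    print('n=%d: payout ~ %.3f +/- %.3f' % (n, mean, (var / n) ** 0.5))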
|
14,671 | <ASSISTANT_TASK:>
Python Code:
class SentenceIterator:
def __init__(self, words):
self.words = words
self.index = 0
def __next__(self):
try:
word = self.words[self.index]
except IndexError:
raise StopIteration()
self.index += 1
return word
def __iter__(self):
return self
class Sentence: # An iterable
def __init__(self, text):
self.text = text
self.words = text.split()
def __iter__(self):
return SentenceIterator(self.words)
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
a = Sentence("Dogs will save the world and cats will eat it.")
for item in a:
print(item)
print("\n")
it = iter(a) # it is an iterator
while True:
try:
nextval = next(it)
print(nextval)
except StopIteration:
del it
break
def gen123():
print("A")
yield 1
print("B")
yield 2
print("C")
yield 3
g = gen123()
print(gen123, " ", type(gen123), " ", type(g))
print("A generator is an iterator.")
print("It has {} and {}".format(g.__iter__, g.__next__))
print(next(g))
print(next(g))
print(next(g))
print(next(g))
for i in gen123():
print(i, "\n")
class Sentence:
def __init__(self,sentence):
self.sentence=sentence
self.words=sentence.split()
def __iter__(self):
for i in range(len(self.words)):
yield self.words[i]
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.words)
a = Sentence("Dogs will save the world and cats will eat it.")
for item in a:
print(item)
print("\n")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example Usage
Step2: Every collection in Python is iterable.
Step3: Some notes on generators
Step4: More notes on generators
Step5: Lecture Exercise
|
14,672 | <ASSISTANT_TASK:>
Python Code:
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')
db = client.phonebook
print db.collection_names()
data = {'name': 'Alessandro', 'phone': '+39123456789'}
db.people.insert(data)
print db.collection_names()
db.people.insert({'name': 'Puria', 'phone': '+39123456788', 'other_phone': '+3933332323'}, w=0)
try:
db.people.insert({'name': 'Puria', 'phone': '+39123456789'}, w=2)
except Exception as e:
print e
db.people.find_one({'name': 'Alessandro'})
from bson import ObjectId
db.people.find_one({'_id': {'$gt': ObjectId('55893a1d7ab71c669f4c149e')}})
doc = db.people.find_one({'name': 'Alessandro'})
print '\nBefore Updated:', doc
db.people.update({'name': 'Alessandro'}, {'name': 'John Doe'})
doc = db.people.find_one({'name': 'John Doe'})
print '\nAfter Update:', doc
# Go back to previous state
db.people.update({'name': 'John Doe'}, {'$set': {'phone': '+39123456789'}})
print '\nAfter $set phone:', db.people.find_one({'name': 'John Doe'})
db.people.update({'name': 'John Doe'}, {'$set': {'name': 'Alessandro'}})
print '\nAfter $set name:', db.people.find_one({'name': 'Alessandro'})
db.blog.insert({'title': 'MongoDB is great!',
'author': {'name': 'Alessandro',
'surname': 'Molina',
'avatar': 'http://www.gravatar.com/avatar/7a952cebb086d2114080b4b39ed83cad.png'},
'tags': ['mongodb', 'web', 'scaling']})
db.blog.find_one({'title': 'MongoDB is great!'})
db.blog.find_one({'tags': 'mongodb'})
db.blog.find_one({'author.name': 'Alessandro'})
TAGS = ['mongodb', 'web', 'scaling', 'cooking']
import random
for postnum in range(1, 5):
db.blog.insert({'title': 'Post %s' % postnum,
'author': {'name': 'Alessandro',
'surname': 'Molina',
'avatar': 'http://www.gravatar.com/avatar/7a952cebb086d2114080b4b39ed83cad.png'},
'tags': random.sample(TAGS, 2)})
for post in db.blog.find({'tags': {'$in': ['scaling', 'cooking']}}):
print post['title'], '->', ', '.join(post['tags'])
db.blog.ensure_index([('tags', 1)])
db.blog.find({'tags': 'mongodb'}).explain()['queryPlanner']['winningPlan']
db.blog.find({'tags': 'mongodb'}).hint([('_id', 1)]).explain()['queryPlanner']['winningPlan']
db.blog.find({'title': 'Post 1'}).explain()['queryPlanner']['winningPlan']
db.blog.ensure_index([('author.name', 1), ('title', 1)])
db.blog.find({'author.name': 'Alessandro'}, {'title': True, '_id': False}).explain()['queryPlanner']['winningPlan']
db = client.twitter
# How many professors wrote a tweet?
print len(list(db.tweets.aggregate([
{'$match': {'user.description': {'$regex': 'Professor'}}}
])))
# Count them using only the pipeline
print db.tweets.aggregate([
{'$match': {'user.description': {'$regex': 'Professor'}}},
{'$group': {'_id': 'count', 'count': {'$sum': 1}}}
]).next()['count']
# Hashtags frequency
print list(db.tweets.aggregate([
{'$project': {'tags': '$entities.hashtags.text', '_id': 0}},
{'$unwind': '$tags'},
{'$group': {'_id': '$tags', 'count': {'$sum': 1}}},
{'$match': {'count': {'$gt': 20}}}
]))
db.tweets.map_reduce(
map='''function() {
var tags = this.entities.hashtags;
for(var i=0; i<tags.length; i++)
emit(tags[i].text, 1);
}''',
reduce='''function(key, values) {
return Array.sum(values);
}''',
out='tagsfrequency'
)
print(list(
db.tagsfrequency.find({'value': {'$gt': 10}})
))
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')
db = client.twitter
db.tweets.map_reduce(
map='''function() {
var tags = this.entities.hashtags;
for(var i=0; i<tags.length; i++)
emit(tags[i].text, 1);
}''',
reduce='''function(key, values) {
return Array.sum(values);
}''',
out='tagsfrequency'
)
print(list(
db.tagsfrequency.find({'value': {'$gt': 10}})
))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once the database is retrieved, collections can be accessed as attributes of the database itself.
Step2: Each inserted document will receive an ObjectId which is a uniquue identifier of the document, the ObjectId is based on some data like the current timestamp, server identifier process id and other data that guarantees it to be unique across multiple servers.
Step3: Fetching back inserted document can be done using find and find_one methods of collections. Both methods accept a query expression that filters the returned documents. Omitting it means retrieving all the documents (or in case of find_one the first document).
Step4: Filters in mongodb are described by Documents themselves, so in case of PyMongo they are dictionaries too.
Step5: Updating Documents
Step6: SubDocuments
Step7: Indexing
Step8: Checking which index MongoDB uses for a query can be done with the explain method, while forcing a specific index onto a query can be done with the hint method.
Step9: Aggregation Pipeline
Step10: MapReduce
Step11: Exporting from MongoDB
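Editor's sketch of the Step9 aggregation idea as a single pipeline (it assumes a local mongod and the same twitter.tweets collection used above; $unwind, $group, $sort, and $limit are standard MongoDB stages):
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')
db = client.twitter
pipeline = [
    {'$unwind': '$entities.hashtags'},
    {'$group': {'_id': '$entities.hashtags.text', 'count': {'$sum': 1}}},
    {'$sort': {'count': -1}},
    {'$limit': 5},
]
for doc in db.tweets.aggregate(pipeline):
    print(doc['_id'], doc['count'])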
|
14,673 | <ASSISTANT_TASK:>
Python Code:
# create a collection matrix (using the count vectorizer)
countVectorizer = CountVectorizer()
# The CountVectorizer will return a document-term sparse matrix
# the rows represent the documents, and the columns represent terms
# since we have only 2 documents, I use 2 variables to represent the 2 vectors returned
d1_count, d2_count, q_count = countVectorizer.fit_transform(col)
print(d1_count.shape)
print(d2_count.shape)
print(q_count.shape)
def content(name):
'''
print out the cosine similarity result
params:
-----------------
name: the variable name for the cosine similarity function output (a 1X1 matrix in our case)
'''
return 'The cosine similarity between the two docs is {name:.4}.'.format(name=name[0][0])
# let's see the cosine similarity of the two documents first
cs_docs = cosine_similarity(d1_count, d2_count)
print(content(cs_docs))
# create a query matrix
d1_q = cosine_similarity(d1_count, q_count)
print(content(d1_q))
d2_q = cosine_similarity(d2_count, q_count)
print(content(d2_q))
# so the query will probably return the first file for us
tfidfVectorizer = TfidfVectorizer()
d1_tf, d2_tf, q_tf = tfidfVectorizer.fit_transform(col)
for i in (d1_tf, d2_tf, q_tf):
print(i.shape)
# again, cosine similarities
cs_docs_tf = cosine_similarity(d1_tf, d2_tf)
print(content(cs_docs_tf))
cs_d1_q_tf = cosine_similarity(d1_tf, q_tf)
print(content(cs_d1_q_tf))
cs_d2_q_tf = cosine_similarity(d2_tf, q_tf)
print(content(cs_d2_q_tf))
d3 = 'meow squeak'
cols = (d1, d2, d3)
d1_tf, d2_tf, d3_tf = tfidfVectorizer.fit_transform(cols)  # re-fit on the new collection so the vectors compared below share one vocabulary
cs_d1_d2 = cosine_similarity(d1_tf, d2_tf)
print(content(cs_d1_d2))
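# Editor's sketch (not in the original notebook): after re-fitting on the
# three-document collection, all pairwise similarities can be read off a
# single matrix instead of comparing vectors one pair at a time.
all_tf = tfidfVectorizer.fit_transform(cols)
print(cosine_similarity(all_tf))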
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's try the second vectorization method
Step2: if we add a new document 'meow squeak' to the collection, let's see the difference.
|
14,674 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import sklearn
from sklearn import datasets
from sklearn import svm
from sklearn.feature_extraction.text import CountVectorizer
import nltk
import numpy as np
import scipy
import re
import os, sys
print(os.getcwd())
os.listdir( os.getcwd()+"/ex6/" )
# Import BeautifulSoup into your workspace
from bs4 import BeautifulSoup
# Load saved matrices from file
ex6data1_mat_data = scipy.io.loadmat( os.getcwd()+"/ex6/ex6data1.mat")
print(type(ex6data1_mat_data))
print(ex6data1_mat_data.keys())
print(type(ex6data1_mat_data["y"]))
print(ex6data1_mat_data['y'].shape)
print(type(ex6data1_mat_data["X"]))
print(ex6data1_mat_data['X'].shape)
plt.scatter( ex6data1_mat_data['X'][:,0] ,ex6data1_mat_data['X'][:,1] )
y=ex6data1_mat_data['y']
np.where(y==1)
# these are the x-coordinates of the X input data such that y=1
# ex6data1_mat_data['X'][np.where(y==1),0]
# and so
plt.scatter( ex6data1_mat_data['X'][np.where(y==0)[0],0], ex6data1_mat_data['X'][np.where(y==0)[0],1] ,
s=35,c='y',marker='o' , label='y=0' )
plt.scatter( ex6data1_mat_data['X'][np.where(y==1)[0],0], ex6data1_mat_data['X'][np.where(y==1)[0],1] ,
s=75,c='b',marker='+' , label='y=1' )
plt.legend(loc=6)
plt.show()
ex6data1_mat_data = scipy.io.loadmat( os.getcwd()+"/ex6/ex6data1.mat")
print(type(ex6data1_mat_data))
print(ex6data1_mat_data.keys())
print("\nTraining Linear SVM ...\n")
C=1
clf=svm.SVC() # C=1. default, kernel='rbl' default, gamma : float, (default='auto')
# if gamma is 'auto' then 1/n_features will be used instead
print( ex6data1_mat_data['X'].shape )
print( ex6data1_mat_data['y'].shape )
clf.fit( ex6data1_mat_data['X'], ex6data1_mat_data['y'].flatten())
# get support vectors
clf.support_vectors_
# get indices of support vectors
clf.support_
# get number of support vectors for each class
clf.n_support_
h=.02 # step size in the mesh
# create a mesh to plot in
X = ex6data1_mat_data['X']
x_min, x_max = X[:,0].min()-1, X[:,0].max()+1
y_min, y_max = X[:,1].min()-1, X[:,1].max()+1
xx,yy = np.meshgrid(np.arange(x_min,x_max,h), np.arange(y_min,y_max,h))
Z = clf.predict(np.c_[xx.ravel(),yy.ravel()]) # translates slice objects to concatenation along the second axis
print(Z.shape)
Z = Z.reshape(xx.shape)
plt.contourf(xx,yy,Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(X[:,0],X[:,1],c=ex6data1_mat_data['y'], cmap=plt.cm.coolwarm)
print(xx.shape); print(Z.shape)
clf_lin=svm.SVC(kernel='linear',C=C).fit(ex6data1_mat_data['X'], ex6data1_mat_data['y'].flatten())
h=.02 # step size in the mesh
# create a mesh to plot in
X = ex6data1_mat_data['X']
x_min, x_max = X[:,0].min()-1, X[:,0].max()+1
y_min, y_max = X[:,1].min()-1, X[:,1].max()+1
xx,yy = np.meshgrid(np.arange(x_min,x_max,h), np.arange(y_min,y_max,h))
Z = clf_lin.predict(np.c_[xx.ravel(),yy.ravel()]) # translates slice objects to concatenation along the second axis
print(Z.shape)
Z = Z.reshape(xx.shape)
plt.contourf(xx,yy,Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(X[:,0],X[:,1],c=ex6data1_mat_data['y'], cmap=plt.cm.coolwarm)
C_lst = [0.01,0.1,1.,100.]
clf_lst = [svm.SVC(kernel='linear',C=C).fit(ex6data1_mat_data['X'], ex6data1_mat_data['y'].flatten()) for C in C_lst]
h=.02 # step size in the mesh
# create a mesh to plot in
X = ex6data1_mat_data['X']
x_min, x_max = X[:,0].min()-1, X[:,0].max()+1
y_min, y_max = X[:,1].min()-1, X[:,1].max()+1
xx,yy = np.meshgrid(np.arange(x_min,x_max,h), np.arange(y_min,y_max,h))
# title for the plots
titles = ['SVC linear kernel C='+str(C) for C in C_lst]
for i, clf in enumerate(clf_lst):
# Plot the decision boundary. For that, we'll assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max]
plt.subplot(2,2,i+1)
Z=clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx,yy,Z,cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(X[:,0], X[:,1],c=ex6data1_mat_data['y'], cmap=plt.cm.coolwarm)
plt.title(titles[i])
plt.show()
titles
x1=np.array([1,2,1])
x2=np.array([0,4,-1])
sigma=2
sum( -(x1-x2)**2 )/(2.*2**2)
np.exp( sum( -(x1-x2)**2 )/(2.*2**2) ) #
def gaussianKernel(x1,x2,sigma):
gaussianKernel : returns a gaussian kernel between x1 and x2 and returns the value in sim
# You need to return the following variables correctly.
sim = 0
sim = np.exp( -np.sum((x1-x2)**2/(2.*sigma**2)))
return sim
gaussianKernel(x1,x2,sigma)
# Load saved matrices from file
ex6data2_mat_data = scipy.io.loadmat( os.getcwd()+"/ex6/ex6data2.mat")
print(type(ex6data2_mat_data))
print(ex6data2_mat_data.keys())
ex6data3_mat_data = scipy.io.loadmat( os.getcwd()+"/ex6/ex6data3.mat")
print(type(ex6data3_mat_data))
print(ex6data3_mat_data.keys()) # Xval, yval are CROSS VALIDATION set data
X = ex6data2_mat_data['X']
print(X.shape)
plt.scatter(X[:,0],X[:,1],c=ex6data2_mat_data['y'], cmap=plt.cm.coolwarm)
C=1.
sigma=0.1
gamma_gaussiankernel = 1./(2.*sigma**2) # I'm supposing that sci-kit learn SVC's gamma = 1/(2*sigma^2)
clf=svm.SVC(kernel='rbf',C=C,gamma=gamma_gaussiankernel)
clf.fit( ex6data2_mat_data['X'], ex6data2_mat_data['y'].flatten())
h=.02 # step size in the mesh
# create a mesh to plot in
X = ex6data2_mat_data['X']
x_min, x_max = X[:,0].min()-.1, X[:,0].max()+.1
y_min, y_max = X[:,1].min()-.1, X[:,1].max()+.1
xx,yy = np.meshgrid(np.arange(x_min,x_max,h), np.arange(y_min,y_max,h))
Z = clf.predict(np.c_[xx.ravel(),yy.ravel()]) # translates slice objects to concatenation along the second axis
print(Z.shape)
Z = Z.reshape(xx.shape)
plt.contourf(xx,yy,Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(X[:,0],X[:,1],c=ex6data2_mat_data['y'], cmap=plt.cm.coolwarm)
X = ex6data3_mat_data['X']
y = ex6data3_mat_data['y']
Xval = ex6data3_mat_data['Xval']
yval = ex6data3_mat_data['yval']
C_lst = [0.0001,0.001,0.003,0.01,0.03,0.1,0.3,1.,10.]
sigma_lst = [0.0001,0.001,0.003,0.01,0.03,0.1,0.3,1.,10.]
models = [ [svm.SVC(kernel='rbf',
C=C,
gamma=1./(2.*sigma**2)).fit(X,
y.flatten()) for C in C_lst] for sigma in sigma_lst]
ex6data3_mat_data.keys()
(models[0][0].predict(Xval) != yval).astype('int').mean()
predict_errs = np.array( [[(model.predict(Xval)!=yval).astype('int').mean() for model in rowmodel] for rowmodel in models] )
predict_errs
predict_errs[predict_errs.argmin() // 9 , predict_errs.argmin() % 9]
print( sigma_lst[predict_errs.argmin()//9] )
print( C_lst[predict_errs.argmin() % 9] )
C=1.0
sigma=0.03
clf = svm.SVC(kernel='rbf',C=C,gamma=1./(2.*sigma**2)).fit(X,y.flatten())
(clf.predict(Xval) != yval).astype('int').mean()
# Extract Features
f = open(os.getcwd()+"/ex6/emailSample1.txt",'r')
file_contents = f.read()
f.close()
file_contents
# Lower case
file_contents.lower()
# Strip all HTML
# Looks for any expression that starts with < and ends with > and replace
# and does not have any < or > in the tag it with a space
BeautifulSoup( file_contents.lower() )
BeautifulSoup( file_contents.lower() ).get_text()
# Calling get_text() gives you the text of the review, without tags or markup.
# cf. https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words
import re
# Use regular expressions to do a find-and-replace
# Handle Numbers
# look for 1 or more characters between 0-9
email_contents = re.sub("[0-9]+", # The pattern to search for
"number", # The pattern to replace it with
BeautifulSoup( file_contents.lower() ).get_text() ) # The text to search
# Handle URLS
# Look for strings starting with http:// or https://
re.sub( '(http|https)://[^\s]*','httpaddr',email_contents)
# Handle Email Addresses
# Look for strings with @ in the middle
re.sub( '[^\s]+@[^\s]+','emailaddr', email_contents)
# Handle $ sign
re.sub('[$]+','dollar', email_contents)
def processEmail_regex(email_contents):
processEmail_regex - process email with regular expressions, 1st.
# Lower case
email_contents = email_contents.lower()
# Strip all HTML
# Looks for any expression that starts with < and ends with > and replace
# and does not have any < or > in the tag it with a space
email_contents = BeautifulSoup( email_contents,"lxml" ).get_text()
# Use regular expressions to do a find-and-replace
# Handle Numbers
# look for 1 or more characters between 0-9
email_contents = re.sub("[0-9]+", # The pattern to search for
"number", # The pattern to replace it with
email_contents ) # The text to search
# Handle URLS
# Look for strings starting with http:// or https://
email_contents = re.sub( '(http|https)://[^\s]*','httpaddr',email_contents)
# Handle Email Addresses
# Look for strings with @ in the middle
email_contents = re.sub( '[^\s]+@[^\s]+','emailaddr', email_contents)
# Handle $ sign
email_contents = re.sub('[$]+','dollar', email_contents)
# Remove any non alphanumeric characters
email_contents = re.sub('[^a-zA-Z0-9]',' ', email_contents)
return email_contents
f = open(os.getcwd()+"/ex6/emailSample1.txt",'r')
file_contents = f.read()
f.close()
email_contents = processEmail_regex(file_contents)
email_contents
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
test_email_vec = count_vect.fit_transform( [email_contents,])
print( type(test_email_vec) );
print( test_email_vec.shape )
test_email_vec[0][0]
import nltk
nltk.download()
tokens_email_contents = nltk.word_tokenize(email_contents)
email_contents
tokens_email_contents
tagged_email_contents = nltk.pos_tag( tokens_email_contents )
tagged_email_contents
entities_email_contents = nltk.chunk.ne_chunk( tagged_email_contents )
type(entities_email_contents)
entities_email_contents
X = np.random.randn(300,2)
y = np.logical_xor(X[:,0] > 0, X[:,1] > 0)
print(X.shape)
print(y.shape)
print(X.max())
print(X.min())
print(y.max()); print(y.min())
plt.scatter(X[:,0],X[:,1],s=30,c=y,cmap=plt.cm.Paired)
y = y.astype("int")
print(y.max());print(y.min())
plt.scatter(X[:,0],X[:,1],s=30,c=y,cmap=plt.cm.Paired)
# we create 40 separable points
np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
y = [0] * 20 + [1] * 20
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Following ex6.pdf of Programming Exercise 6
Step2: Part 2
Step3: You should try to change the $C$ value below and see how the decision boundary varies (e.g., try $C=1000$)
Step4: cf. sklearn.svm.SVC
Step5: Plotting
Step7: Part 3
Step8: 1.2.2 Example Dataset 2, 1.2.3 Example Dataset 3
Step9: Part 4
Step10: Part 5
Step11: Indeed
Step13: Spam Classification (2. Spam Classification of ex6)
Step14: Dataset examples from sci-kit learn
Step15: cf. SVM
|
14,675 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!sudo apt -y install libportaudio2
!pip install -q --use-deprecated=legacy-resolver tflite-model-maker
!pip install -q pycocotools
!pip install -q opencv-python-headless==4.1.2.30
import numpy as np
import os
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)
spec = model_spec.get('efficientdet_lite0')
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')
model = object_detector.create(train_data, model_spec=spec, batch_size=8, train_whole_model=True, validation_data=validation_data)
model.evaluate(test_data)
model.export(export_dir='.')
model.evaluate_tflite('model.tflite', test_data)
#@title Load the trained TFLite model and define some visualization functions
import cv2
from PIL import Image
model_path = 'model.tflite'
# Load the labels into a list
classes = ['???'] * model.model_spec.config.num_classes
label_map = model.model_spec.config.label_map
for label_id, label_name in label_map.as_dict().items():
classes[label_id-1] = label_name
# Define a list of colors for visualization
COLORS = np.random.randint(0, 255, size=(len(classes), 3), dtype=np.uint8)
def preprocess_image(image_path, input_size):
Preprocess the input image to feed to the TFLite model
img = tf.io.read_file(image_path)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.uint8)
original_image = img
resized_img = tf.image.resize(img, input_size)
resized_img = resized_img[tf.newaxis, :]
resized_img = tf.cast(resized_img, dtype=tf.uint8)
return resized_img, original_image
def detect_objects(interpreter, image, threshold):
Returns a list of detection results, each a dictionary of object info.
signature_fn = interpreter.get_signature_runner()
# Feed the input image to the model
output = signature_fn(images=image)
# Get all outputs from the model
count = int(np.squeeze(output['output_0']))
scores = np.squeeze(output['output_1'])
classes = np.squeeze(output['output_2'])
boxes = np.squeeze(output['output_3'])
results = []
for i in range(count):
if scores[i] >= threshold:
result = {
'bounding_box': boxes[i],
'class_id': classes[i],
'score': scores[i]
}
results.append(result)
return results
def run_odt_and_draw_results(image_path, interpreter, threshold=0.5):
Run object detection on the input image and draw the detection results
# Load the input shape required by the model
_, input_height, input_width, _ = interpreter.get_input_details()[0]['shape']
# Load the input image and preprocess it
preprocessed_image, original_image = preprocess_image(
image_path,
(input_height, input_width)
)
# Run object detection on the input image
results = detect_objects(interpreter, preprocessed_image, threshold=threshold)
# Plot the detection results on the input image
original_image_np = original_image.numpy().astype(np.uint8)
for obj in results:
# Convert the object bounding box from relative coordinates to absolute
# coordinates based on the original image resolution
ymin, xmin, ymax, xmax = obj['bounding_box']
xmin = int(xmin * original_image_np.shape[1])
xmax = int(xmax * original_image_np.shape[1])
ymin = int(ymin * original_image_np.shape[0])
ymax = int(ymax * original_image_np.shape[0])
# Find the class index of the current object
class_id = int(obj['class_id'])
# Draw the bounding box and label on the image
color = [int(c) for c in COLORS[class_id]]
cv2.rectangle(original_image_np, (xmin, ymin), (xmax, ymax), color, 2)
# Make adjustments to make the label visible for all objects
y = ymin - 15 if ymin - 15 > 15 else ymin + 15
label = "{}: {:.0f}%".format(classes[class_id], obj['score'] * 100)
cv2.putText(original_image_np, label, (xmin, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
# Return the final image
original_uint8 = original_image_np.astype(np.uint8)
return original_uint8
#@title Run object detection and show the detection results
INPUT_IMAGE_URL = "https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg" #@param {type:"string"}
DETECTION_THRESHOLD = 0.3 #@param {type:"number"}
TEMP_FILE = '/tmp/image.png'
!wget -q -O $TEMP_FILE $INPUT_IMAGE_URL
im = Image.open(TEMP_FILE)
im.thumbnail((512, 512), Image.ANTIALIAS)
im.save(TEMP_FILE, 'PNG')
# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
# Run inference and draw detection result on the local copy of the original file
detection_result_image = run_odt_and_draw_results(
TEMP_FILE,
interpreter,
threshold=DETECTION_THRESHOLD
)
# Show the detection result
Image.fromarray(detection_result_image)
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
NUMBER_OF_TPUS = 1#@param {type:"number"}
!edgetpu_compiler model.tflite --num_segments=$NUMBER_OF_TPUS
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Object Detection with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: Prepare the dataset
Step4: Step 2. Load the dataset.
Step5: Step 3. Train the TensorFlow model with the training data.
Step6: Step 4. Evaluate the model with the test data.
Step7: Step 5. Export as a TensorFlow Lite model.
Step8: Step 6. Evaluate the TensorFlow Lite model.
Step12: You can download the TensorFlow Lite model file using the left sidebar of Colab. Right-click on the model.tflite file and choose Download to download it to your local computer.
Step13: (Optional) Compile For the Edge TPU
Step 1. Install the EdgeTPU Compiler
Step14: Step 2. Select number of Edge TPUs, Compile
|
14,676 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import time
import pylab as pl
from IPython import display
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
return ([[source_vocab_to_int[word] for word in text.split()] for text in source_text.split('\n')],\
[[target_vocab_to_int[word] for word in text.split()] + \
[target_vocab_to_int['<EOS>']] for text in target_text.split('\n')])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
return tf.placeholder(tf.int32, [None, None], name='input'),\
tf.placeholder(tf.int32, [None, None], name='targets'),\
tf.placeholder(tf.float32, name='learning_rate'),\
tf.placeholder(tf.float32, name='keep_prob'),\
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for dencoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell] * num_layers)
rnn_cell = tf.contrib.rnn.DropoutWrapper(rnn_cell, output_keep_prob = keep_prob)
rnn_output, rnn_state = tf.nn.dynamic_rnn(cell = rnn_cell, inputs = rnn_inputs, dtype=tf.float32)
return rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
rnn_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
decoder, state, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell = rnn_cell, \
decoder_fn = decoder_fn, \
inputs = dec_embed_input, \
sequence_length = sequence_length, \
scope=decoding_scope)
return output_fn(decoder)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn = output_fn, \
encoder_state = encoder_state, \
embeddings = dec_embeddings, \
start_of_sequence_id = start_of_sequence_id, \
end_of_sequence_id = end_of_sequence_id, \
maximum_length = maximum_length, \
num_decoder_symbols = vocab_size)
inference_logits, _, a_ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell = dec_cell, \
decoder_fn = infer_decoder_fn, \
scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell] * num_layers)
rnn_cell = tf.contrib.rnn.DropoutWrapper(rnn_cell, output_keep_prob = keep_prob)
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope('decoding_scope') as decoding_scope:
training_logits = decoding_layer_train(encoder_state, rnn_cell, dec_embed_input, \
sequence_length, decoding_scope, output_fn, \
keep_prob)
with tf.variable_scope('decoding_scope', reuse=True) as decoding_scope:
decoding_scope.reuse_variables()
inference_logits = decoding_layer_infer(encoder_state, rnn_cell, dec_embeddings, \
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], \
sequence_length, vocab_size, decoding_scope, \
output_fn, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
enc_embeddings = tf.Variable(tf.random_uniform((source_vocab_size, enc_embedding_size)))
enc_embed_input = tf.nn.embedding_lookup(enc_embeddings, input_data)
enc_input = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform((target_vocab_size, dec_embedding_size)))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
return decoding_layer(dec_embed_input, \
dec_embeddings, \
enc_input, \
target_vocab_size, \
sequence_length, \
rnn_size, \
num_layers, \
target_vocab_to_int, \
keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 1
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.8
# Show stats for every n number of batches
show_every_n_batches = 10
# For graph
times, train_accs, valid_accs, losses = [], [], [], []
counter = 0
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
def plot_graph(epoch, train_acc, valid_acc, loss):
pl.ylim(min(min(loss), min(train_acc), min(valid_acc)), 1.0)
pl.xlim(min(epoch), max(epoch))
pl.plot(epoch, train_acc, label = 'Training Accuracy', color = 'green')
pl.plot(epoch, valid_acc, label = 'Validation Accuracy', color = 'purple')
pl.plot(epoch, loss, label = 'Training Loss', color = 'red')
pl.legend(loc='upper right')
pl.xlabel('Time')
display.clear_output(wait=True)
display.display(pl.gcf())
pl.gcf().clear()
time.sleep(0.1)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target_batch,
[(0,0),(0,max_seq - target_batch.shape[1]), (0,0)],
'constant')
if max_seq - batch_train_logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
len_batches = len(source_int_text) / batch_size
if (epoch_i * len_batches + batch_i) % show_every_n_batches == 0:
counter += 1
times = range(counter)
train_accs.append(train_acc)
valid_accs.append(valid_acc)
losses.append(loss)
plot_graph(times, train_accs, valid_accs, losses)
print('Epoch {}, Batch {:>2}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i + 1, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save trained model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [vocab_to_int[word] if word in vocab_to_int.keys() \
else vocab_to_int['<UNK>'] \
for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentences = ['he saw a old yellow truck .', \
'I like grapes during march .', \
'The united states favorite fruit is apple during september']
DON'T MODIFY ANYTHING IN THIS CELL
for sentence in translate_sentences:
translate_sentence = sentence_to_seq(sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
# print('Input')
# print(' Word Ids: {}'.format([i for i in translate_sentence]))
# print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
# print('\nPrediction')
# print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
# print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
translated = [target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]
translated.remove('<EOS>')
print('Translated: "{}" to: "{}"'.format(' '.join([source_int_to_vocab[i] for i in translate_sentence]), \
' '.join(translated)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
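Editor's sketch of the Step46 idea in isolation: map words to ids with an <UNK> fallback for out-of-vocabulary words (toy vocabulary, not the project's):
vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
def sentence_to_ids(sentence, vocab_to_int):
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]
print(sentence_to_ids('He saw a yellow truck', vocab_to_int))  # [1, 2, 3, 0, 4]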
|
14,677 | <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.8
b['ecc'] = 0.1
b['irrad_method'] = 'none'
b.add_dataset('orb', compute_times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
b.set_value('incl@orbit', 90)
b.run_compute(model='run_with_incl_90')
b.set_value('incl@orbit', 85)
b.run_compute(model='run_with_incl_85')
b.set_value('incl@orbit', 80)
b.run_compute(model='run_with_incl_80')
afig, mplfig = b.plot(show=True)
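# A minimal sketch (not part of the original tutorial): since mplfig is a regular
# matplotlib Figure, it can be customized or saved manually if needed, e.g.
# mplfig.savefig('all_datasets.png', dpi=150)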
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight_marker='s', highlight_color='g', highlight_ms=20, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight=False, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0.5, uncover=True, show=True)
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(show=True)
afig, mplfig = b.plot(component='primary', kind='orb', model='run_with_incl_80', show=True)
afig, mplfig = b.plot('primary@orb@run_with_incl_80', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', y='vus', show=True)
b['orb01@primary@run_with_incl_80'].qualifiers
afig, mplfig = b.plot(dataset='lc01', x='phases', z=0, show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xunit='AU', yunit='AU', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlabel='X POS', ylabel='Z POS', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlim=(-2,2), show=True)
afig, mplfig = b['lc01@dataset'].plot(yerror='sigmas', show=True)
afig, mplfig = b['lc01@dataset'].plot(yerror=None, show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(c='r', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', cmap='spring', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', draw_sidebars=True, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True)
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(label='primary')
afig, mplfig = b['secondary@orb@run_with_incl_80'].plot(label='secondary', legend=True, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True, legend_kwargs={'loc': 'center', 'facecolor': 'r'})
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(linestyle=':', s=0.1, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0, projection='3d', show=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This first line is only necessary for IPython notebooks - it allows the plots to be shown on this page instead of in interactive mode
Step2: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step3: And we'll attach some dummy datasets. See Datasets for more details.
Step4: And run the forward models. See Computing Observables for more details.
Step5: Showing and Saving
Step6: Any call to plot returns 2 objects - the autofig and matplotlib figure instances. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit either before drawing/saving the image.
Step7: Time (highlight and uncover)
Step8: To change the style of the "highlighted" points, you can pass matplotlib recognized markers, colors, and markersizes to the highlight_marker, highlight_color, and highlight_ms keywords, respectively.
Step9: To disable highlighting, simply send highlight=False
Step10: Uncover
Step11: Selecting Datasets
Step12: Selecting Arrays
Step13: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
Step14: For more information on each of the available arrays, see the relevant tutorial on that dataset method
Step15: Note that when plotting in phase, PHOEBE will automatically sort and connect points in phase-order when plotting against phase if the system is not time-dependent (see b.hierarchy.is_time_dependent). Otherwise, the points will be sorted and connected in time-order - with breaks automatically applied to handle intelligent phase-wrapping. In the vast majority of cases, this default behavior should make sense, but can always be overridden by passing 'times' or 'phases' to i (see the plot API docs for more details).
Step16: WARNING
Step17: Axes Limits
Step18: Errorbars
Step19: To disable the errorbars, simply set yerror=None.
Step20: Colors
Step21: In addition, you can point to an array in the dataset to use as color.
Step22: Choosing colors works slightly differently for meshes (i.e. you can set fc for facecolor and ec for edgecolor). For more details, see the tutorial on the MESH dataset.
Step23: Adding a Colorbar
Step24: Labels and Legends
Step25: The legend labels are generated automatically, but can be overridden by passing a string to the label keyword.
Step26: To override the position or styling of the legend, you can pass valid options to legend_kwargs which will be passed on to plt.legend
Step27: Other Plotting Options
Step28: 3D Axes
|
14,678 | <ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
# Calculate number of students
n_students = len(student_data.index)
# Calculate number of features (minus 1, because "passed" is not a feature, but a target column)
n_features = len(student_data.columns) - 1
# Calculate passing students
n_passed = len(student_data[student_data['passed'] == 'yes'])
# Calculate failing students
n_failed = len(student_data[student_data['passed'] == 'no'])
# Calculate graduation rate
grad_rate = (n_passed / float(n_students)) * 100
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
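# For example (based on the columns in this dataset): a yes/no column such as
# 'internet' becomes a single 0/1 column, while a categorical column such as
# 'school' (values 'GP'/'MS') expands into dummy columns 'school_GP' and 'school_MS'.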
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
# Import Cross Validation functionality to perform splitting the data
from sklearn import cross_validation
# Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# Calculate split percentage
splitPercentage = float(num_test) / (num_train + num_test)
print "Split percentage: {0:.2f}% ".format(splitPercentage * 100)
# Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
X_all, y_all, test_size = splitPercentage, stratify = y_all, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
# Import the three supervised learning models from sklearn
# Logistic Regression
from sklearn.linear_model import LogisticRegression
# Support Vector Machine
from sklearn.svm import SVC
# KNN classifier
from sklearn.neighbors import KNeighborsClassifier
# Random State for each model consistent with splitting Random State = 0
randState = 0
# Initialize the three models
clf_A = LogisticRegression(random_state = randState)
clf_B = SVC(random_state = randState)
clf_C = KNeighborsClassifier()
classifiers = [clf_A, clf_B, clf_C]
# Set up the training set sizes
X_train_100 = X_train[:100]
y_train_100 = y_train[:100]
X_train_200 = X_train[:200]
y_train_200 = y_train[:200]
X_train_300 = X_train
y_train_300 = y_train
trainingData = [(X_train_100, y_train_100), (X_train_200, y_train_200), (X_train_300, y_train_300)]
# Execute the 'train_predict' function for each classifier and each training set size
for each in range(len(classifiers)):
print classifiers[each]
print "-------------------"
for data in trainingData:
train_predict(classifiers[each], data[0], data[1], X_test, y_test)
print
# Import 'GridSearchCV' and 'make_scorer'
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def gridSearch(clf, parameters):
# Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label="yes")
# Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf, parameters, scoring=f1_scorer)
# Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
return clf
# Create the parameters list you wish to tune
svcParameters = [
{'C': [1, 10, 100], 'kernel': ['rbf'], 'gamma': ['auto']},
{'C': [1, 10, 100, 1000], 'kernel': ['linear', 'rbf', 'poly']}
]
knnParams = {'n_neighbors': [2, 3, 4, 5],
'weights': ['uniform', 'distance']}
regresParams = {'C': [0.5, 1.0, 10.0, 100.0],
'max_iter': [100, 1000],
'solver': ['sag', 'liblinear']}
randState = 0
classifiers = [gridSearch(SVC(random_state = randState), svcParameters),
gridSearch(KNeighborsClassifier(), knnParams),
gridSearch(LogisticRegression(random_state = randState), regresParams)]
# Report the final F1 score for training and testing after parameter tuning
# I've tested all three of them just out of curiosity, I've chosen Logistic Regression initially.
for clf in classifiers:
print clf
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
print "-----------------"
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Preparing the Data
Step3: Preprocess Feature Columns
Step4: Implementation
Step5: Training and Evaluating Models
Step6: Implementation
Step7: Tabular Results
|
14,679 | <ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('kQmHaI5Jw1c', width=800, height=450)
from IPython.display import YouTubeVideo
YouTubeVideo('YbNE3zhtsoo', width=800, height=450)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, Dropout
img_rows, img_cols = 28, 28
num_classes = 10
def data_prep(raw):
out_y = keras.utils.to_categorical(raw.label, num_classes)
num_images = raw.shape[0]
x_as_array = raw.values[:, 1:]
x_shaped_array = x_as_array.reshape(num_images, img_rows, img_cols, 1)
out_x = x_shaped_array / 255
return out_x, out_y
train_file = 'inputs/digit_recognizer/train.csv'
raw_data = pd.read_csv(train_file)
x, y = data_prep(raw_data)
print(x[0], y[0])
model = Sequential()
model.add(Conv2D(20, kernel_size=(3, 3),
activation='relu',
input_shape=(img_rows, img_cols, 1)))
model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
model.fit(x, y,
batch_size=128,
epochs=2,
validation_split=0.2)
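# A possible follow-up (not part of the original exercise): once fitted, the model
# can score new images with predict(); labels here are one-hot, so argmax recovers
# the digit for a quick sanity check on a few training images.
probs = model.predict(x[:5])
print(probs.argmax(axis=1), y[:5].argmax(axis=1))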
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.python import keras
img_rows, img_cols = 28, 28
num_classes = 10
def prep_data(raw, train_size, val_size):
y = raw[:, 0]
out_y = keras.utils.to_categorical(y, num_classes)
x = raw[:, 1:]
num_images = raw.shape[0]
out_x = x.reshape(num_images, img_rows, img_cols, 1)
out_x = out_x / 255
return out_x, out_y
fashion_file = 'inputs/fashionmnist/train.csv'
fashion_data = np.loadtxt(fashion_file, skiprows=1, delimiter=',')
x, y = prep_data(fashion_data, train_size=50000, val_size=5000)
from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D
fashion_model = Sequential()
fashion_model.add(Conv2D(12, kernel_size = (3, 3),
activation='relu',
input_shape=(img_rows, img_cols, 1)))
fashion_model.add(Conv2D(12, kernel_size=(3,3), activation='relu'))
fashion_model.add(Conv2D(12, kernel_size=(3,3), activation='relu'))
fashion_model.add(Flatten())
fashion_model.add(Dense(100, activation='relu'))
fashion_model.add(Dense(num_classes, activation='softmax'))
fashion_model
fashion_model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
fashion_model.fit(x, y,
batch_size=100,
epochs=4,
validation_split=0.2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is the ReLU activation function link that Dan mentioned.
Step2: Let's build our model
Step3: Compile and fit
Step4: You know the drill, practice makes perfect!
Step5: Specify Model
Step6: Compile Model
Step7: Fit Model
|
14,680 | <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.time_frequency import single_trial_power
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = raw.info['ch_names'][picks[0]]
# Load condition 1
reject = dict(grad=4000e-13, eog=150e-6)
event_id = 1
epochs_condition_1 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject)
data_condition_1 = epochs_condition_1.get_data() # as 3D matrix
data_condition_1 *= 1e13 # change unit to fT / cm
# Load condition 2
event_id = 2
epochs_condition_2 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject)
data_condition_2 = epochs_condition_2.get_data() # as 3D matrix
data_condition_2 *= 1e13 # change unit to fT / cm
# Take only one channel
data_condition_1 = data_condition_1[:, 97:98, :]
data_condition_2 = data_condition_2[:, 97:98, :]
# Time vector
times = 1e3 * epochs_condition_1.times # change unit to ms
# Factor to downsample the temporal dimension of the PSD computed by
# single_trial_power. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
n_cycles = 1.5
epochs_power_1 = single_trial_power(data_condition_1, sfreq=sfreq,
frequencies=frequencies,
n_cycles=n_cycles, decim=decim)
epochs_power_2 = single_trial_power(data_condition_2, sfreq=sfreq,
frequencies=frequencies,
n_cycles=n_cycles, decim=decim)
epochs_power_1 = epochs_power_1[:, 0, :, :] # only 1 channel to get 3D matrix
epochs_power_2 = epochs_power_2[:, 0, :, :] # only 1 channel to get 3D matrix
# Compute ratio with baseline power (be sure to correct time vector with
# decimation factor)
baseline_mask = times[::decim] < 0
epochs_baseline_1 = np.mean(epochs_power_1[:, :, baseline_mask], axis=2)
epochs_power_1 /= epochs_baseline_1[..., np.newaxis]
epochs_baseline_2 = np.mean(epochs_power_2[:, :, baseline_mask], axis=2)
epochs_power_2 /= epochs_baseline_2[..., np.newaxis]
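# At this point epochs_power_1 and epochs_power_2 hold per-trial power relative to
# the pre-stimulus baseline, with shape (n_trials, n_frequencies, n_decimated_times),
# which is the layout expected by the cluster permutation test below.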
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([epochs_power_1, epochs_power_2],
n_permutations=100, threshold=threshold, tail=0)
plt.clf()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
plt.subplot(2, 1, 1)
evoked_contrast = np.mean(data_condition_1, 0) - np.mean(data_condition_2, 0)
plt.plot(times, evoked_contrast.T)
plt.title('Contrast of evoked response (%s)' % ch_name)
plt.xlabel('time (ms)')
plt.ylabel('Magnetic Field (fT/cm)')
plt.xlim(times[0], times[-1])
plt.ylim(-100, 200)
plt.subplot(2, 1, 2)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
plt.imshow(T_obs,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', cmap='RdBu_r')
plt.imshow(T_obs_plot,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', cmap='RdBu_r')
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Induced power (%s)' % ch_name)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute statistic
Step3: View time-frequency plots
|
14,681 | <ASSISTANT_TASK:>
Python Code:
from copy import copy
import datetime
import os
from pathlib import Path
from pprint import pprint
import shutil
import time
from zipfile import ZipFile
import numpy as np
from planet import api
from planet.api import downloader, filters
# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')
client = api.ClientV1(api_key=API_KEY)
# these functions were developed in the best practices tutorial (part 1)
# create an api request from the search specifications
def build_request(aoi_geom, start_date, stop_date):
'''build a data api search request for clear PSScene imagery'''
query = filters.and_filter(
filters.geom_filter(aoi_geom),
filters.range_filter('clear_percent', gte=90),
filters.date_range('acquired', gt=start_date),
filters.date_range('acquired', lt=stop_date)
)
return filters.build_search_request(query, ['PSScene'])
def search_data_api(request, client, limit=500):
result = client.quick_search(request)
# this returns a generator
return result.items_iter(limit=limit)
# define test data for the filter
test_start_date = datetime.datetime(year=2019,month=4,day=1)
test_stop_date = datetime.datetime(year=2019,month=5,day=1)
# iowa crops aoi
test_aoi_geom = {
"type": "Polygon",
"coordinates": [
[
[-93.299129, 42.699599],
[-93.299674, 42.812757],
[-93.288436, 42.861921],
[-93.265332, 42.924817],
[-92.993873, 42.925124],
[-92.993888, 42.773637],
[-92.998396, 42.754529],
[-93.019154, 42.699988],
[-93.299129, 42.699599]
]
]
}
request = build_request(test_aoi_geom, test_start_date, test_stop_date)
print(request)
items = list(search_data_api(request, client))
print(len(items))
# check out an item just for fun
pprint(items[0])
# item = items[0]
# acquired_date = item['properties']['acquired'].split('T')[0]
# acquired_date
def get_acquired_date(item):
return item['properties']['acquired'].split('T')[0]
acquired_dates = [get_acquired_date(item) for item in items]
unique_acquired_dates = set(acquired_dates)
unique_acquired_dates
def get_date_item_ids(date, all_items):
return [i['id'] for i in all_items if get_acquired_date(i) == date]
def get_ids_by_date(items):
acquired_dates = [get_acquired_date(item) for item in items]
unique_acquired_dates = set(acquired_dates)
ids_by_date = dict((d, get_date_item_ids(d, items))
for d in unique_acquired_dates)
return ids_by_date
ids_by_date = get_ids_by_date(items)
pprint(ids_by_date)
def build_order(ids, name, aoi_geom):
# specify the PSScene 4-Band surface reflectance product
# make sure to get the *_udm2 bundle so you get the udm2 product
# note: capitalization really matters in item_type when using planet client orders api
item_type = 'PSScene'
bundle = 'analytic_sr_udm2'
orders_request = {
'name': name,
'products': [{
'item_ids': ids,
'item_type': item_type,
'product_bundle': bundle
}],
'tools': get_tools(aoi_geom),
'delivery': {
'single_archive': True,
'archive_filename':'{{name}}_{{order_id}}.zip',
'archive_type':'zip'
},
'notifications': {
'email': False
},
}
return orders_request
def get_tools(aoi_geom):
# clip to AOI
clip_tool = {'clip': {'aoi': aoi_geom}}
# convert to NDVI
bandmath_tool = {'bandmath': {
"pixel_type": "32R",
"b1": "(b4 - b3) / (b4+b3)"
}}
# composite into one image
composite_tool = {
"composite":{}
}
tools = [clip_tool, bandmath_tool, composite_tool]
return tools
# uncomment to see what an order request would look like
# pprint(build_order(['id'], 'demo', test_aoi_geom), indent=4)
def get_orders_requests(ids_by_date, aoi_geom):
order_requests = [build_order(ids, date, aoi_geom)
for date, ids in ids_by_date.items()]
return order_requests
order_requests = get_orders_requests(ids_by_date, test_aoi_geom)
print(len(order_requests))
pprint(order_requests[0])
def create_orders(order_requests, client):
orders_info = [client.create_order(r).get()
for r in order_requests]
order_ids = [i['id'] for i in orders_info]
return order_ids
# testing: lets just create two orders
order_limit = 2
order_ids = create_orders(order_requests[:order_limit], client)
order_ids
def poll_for_success(order_ids, client, num_loops=50):
count = 0
polling = copy(order_ids)
completed = []
while(count < num_loops):
count += 1
states = []
for oid in copy(polling):
order_info = client.get_individual_order(oid).get()
state = order_info['state']
states.append(state)
print('{}:{}'.format(oid, state))
success_states = ['success', 'partial']
if state == 'failed':
raise Exception(order_info)
elif state in success_states:
polling.remove(oid)
completed.append(oid)
if not len(polling):
print('done')
break
print('--')
time.sleep(30)
poll_for_success(order_ids, client)
data_dir = os.path.join('data', 'use_case_1')
# make the download directory if it doesn't exist
Path(data_dir).mkdir(parents=True, exist_ok=True)
def poll_for_download(dest, endswith, num_loops=50):
count = 0
while(count < num_loops):
count += 1
matched_files = (f for f in os.listdir(dest)
if os.path.isfile(os.path.join(dest, f))
and f.endswith(endswith))
match = next(matched_files, None)
if match:
match = os.path.join(dest, match)
print('downloaded')
break
else:
print('waiting...')
time.sleep(10)
return match
def download_orders(order_ids, client, dest='.', limit=None):
files = []
for order_id in order_ids:
print('downloading {}'.format(order_id))
filename = download_order(order_id, dest, client, limit=limit)
if filename:
files.append(filename)
return files
def download_order(order_id, dest, client, limit=None):
'''Download an order by given order ID'''
# this returns download stats but they aren't accurate or informative
# so we will look for the downloaded file on our own.
dl = downloader.create(client, order=True)
urls = client.get_individual_order(order_id).items_iter(limit=limit)
dl.download(urls, [], dest)
endswith = '{}.zip'.format(order_id)
filename = poll_for_download(dest, endswith)
return filename
downloaded_files = download_orders(order_ids, client, data_dir)
downloaded_files
def unzip(filename, overwrite=False):
location = Path(filename)
zipdir = location.parent / location.stem
if os.path.isdir(zipdir):
if overwrite:
print('{} exists. overwriting.'.format(zipdir))
shutil.rmtree(zipdir)
else:
raise Exception('{} already exists'.format(zipdir))
with ZipFile(location) as myzip:
myzip.extractall(zipdir)
return zipdir
zipdirs = [unzip(f, overwrite=True) for f in downloaded_files]
pprint(zipdirs)
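# A small sanity check (an assumption about the next step, not part of the original
# workflow): list how many files were extracted from each order archive.
for zipdir in zipdirs:
    print(zipdir, len(list(Path(zipdir).rglob('*'))))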
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 4.2
Step6: Step 5
Step7: Step 5.2
Step8: Step 6
|
14,682 | <ASSISTANT_TASK:>
Python Code:
import pyspark
sc = pyspark.SparkContext(appName="my_spark_app")
lines = sc.textFile("../data/people.csv")
lines.count()
lines.first()
lines = sc.textFile("../data/people.csv")
filtered_lines = lines.filter(lambda line: "individuum" in line)
filtered_lines.first()
# loading an external dataset
lines = sc.textFile("../data/people.csv")
print(type(lines))
# applying a transformation to an existing RDD
filtered_lines = lines.filter(lambda line: "individuum" in line)
print(type(filtered_lines))
# if we print lines we get only this
print(lines)
# when we perform an action, then we get the result
action_result = lines.first()
print(type(action_result))
action_result
# filtered_lines is not computed until the next action is applied over it
# it makes sense when working with big data sets, as it is not necessary to
# transform the whole RDD to get an action over a subset
# Spark doesn't even read the complete file!
filtered_lines.first()
import time
lines = sc.textFile("../data/REFERENCE/*")
lines_nonempty = lines.filter( lambda x: len(x) > 0 )
words = lines_nonempty.flatMap(lambda x: x.split())
words_persisted = lines_nonempty.flatMap(lambda x: x.split())
t1 = time.time()
words.count()
print("Word count 1:",time.time() - t1)
t1 = time.time()
words.count()
print("Word count 2:",time.time() - t1)
t1 = time.time()
words_persisted.persist()
words_persisted.count()
print("Word count persisted 1:",time.time() - t1)
t1 = time.time()
words_persisted.count()
print("Word count persisted 2:", time.time() - t1)
# load a file
lines = sc.textFile("../data/REFERENCE/*")
# make a transformation filtering positive length lines
lines_nonempty = lines.filter( lambda x: len(x) > 0 )
print("-> lines_nonepmty is: {} and if we print it we get\n {}".format(type(lines_nonempty), lines_nonempty))
# we transform again
words = lines_nonempty.flatMap(lambda x: x.split())
print("-> words is: {} and if we print it we get\n {}".format(type(words), words))
words_persisted = lines_nonempty.flatMap(lambda x: x.split())
print("-> words_persisted is: {} and if we print it we get\n {}".format(type(words_persisted), words_persisted))
final_result = words.take(10)
print("-> final_result is: {} and if we print it we get\n {}".format(type(final_result), final_result))
import time
# we checkpoint the initial time
t1 = time.time()
words.count()
# and count the time spent on the computation
print("Word count 1:",time.time() - t1)
t1 = time.time()
words.count()
print("Word count 2:",time.time() - t1)
t1 = time.time()
words_persisted.persist()
words_persisted.count()
print("Word count persisted 1:",time.time() - t1)
t1 = time.time()
words_persisted.count()
print("Word count persisted 2:", time.time() - t1)
lines = sc.textFile("../data/people.csv")
print("-> Three elements:\n", lines.take(3))
print("-> The whole RDD:\n", lines.collect())
lines = sc.textFile("../data/people.csv")
# we create a lambda function to apply to all lines of the dataset
# WARNING, see that after splitting we get only the first element
first_cells = lines.map(lambda x: x.split(",")[0])
print(first_cells.collect())
# we can define a function as well
def get_cell(x):
return x.split(",")[0]
first_cells = lines.map(get_cell)
print(first_cells.collect())
import urllib3
def download_file(csv_line):
link = csv_line[0]
http = urllib3.PoolManager()
r = http.request('GET', link, preload_content=False)
response = r.read()
return response
books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(","))
print(books_info.take(10))
books_content = books_info.map(download_file)
print(books_content.take(10)[1][:100])
import re
def is_dickens(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/DICKENS/",link)
return t != None
dickens_books_info = books_info.filter(is_dickens)
print(dickens_books_info.take(4))
dickens_books_content = dickens_books_info.map(download_file)
# take into consideration that each time an action is performed over dickens_books_content, the files are downloaded again
# this has a big impact on computation time
print(dickens_books_content.take(2)[1][:100])
flat_content = dickens_books_info.map(lambda x: x)
print(flat_content.take(4))
flat_content = dickens_books_info.flatMap(lambda x: x)
print(flat_content.take(4))
def get_author(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link)
if t:
return t.group(1)
return u'UNKNOWN'
authors = books_info.map(get_author)
authors.distinct().collect()
import re
def get_author_and_link(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link)
if t:
return (t.group(1), link)
return (u'UNKNOWN',link)
authors_links = books_info.map(get_author_and_link)
# not very efficient
dickens_books = authors_links.filter(lambda x: x[0]=="DICKENS")
poes_books = authors_links.filter(lambda x: x[0]=="POE")
poes_dickens_books = poes_books.union(dickens_books)
# sample is a transformation that returns an RDD sampled over the original RDD
# https://spark.apache.org/docs/1.1.1/api/python/pyspark.rdd.RDD-class.html
poes_dickens_books.sample(True,0.05).collect()
# takeSample is an action, returning a sampled subset of the RDD
poes_dickens_books.takeSample(True,10)
authors_links.subtract(poes_dickens_books).map(lambda x: x[0]).distinct().collect()
authors_links.map(lambda x: 1).reduce(lambda x,y: x+y) == authors_links.count()
# let's see this approach more in detail
# this transformation generates an rdd of 1, one per element in the RDD
authors_map = authors_links.map(lambda x: 1)
authors_map.takeSample(True,10)
# with reduce, we pass a function with two parameters which is applied by pairs
# inside the function we specify which operation we perform with the two parameters
# the result is then returned and the reduction is applied again with that result until a single value remains
# this is a very efficient way to do a summation in parallel
# using a functional approach
# we could define any operation inside the function
authors_map.reduce(lambda x,y: x*y)
sacramento_estate_csv = sc.textFile("../data/Sacramentorealestatetransactions.csv")
header = sacramento_estate_csv.first()
# first load the data
# we know that the price is in column 9
sacramento_estate = sacramento_estate_csv.filter(lambda x: x != header)\
.map(lambda x: x.split(","))\
.map(lambda x: int(x[9]))
sacramento_estate.takeSample(True, 10)
seqOp = (lambda x,y: (x[0] + y, x[1] + 1))
combOp = (lambda x,y: (x[0] + y[0], x[1] + y[1]))
total_sum, number = sacramento_estate.aggregate((0,0),seqOp,combOp)
mean = float(total_sum)/number
mean
print(sacramento_estate.top(5))
print(sacramento_estate.top(5, key=lambda x: -x))
import re
def get_author_data(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link)
if t:
return (t.group(1), csv_line)
return (u'UNKNOWN', csv_line)
books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(","))
authors_info = books_info.map(get_author_data)
print(authors_info.take(5))
authors_info.filter(lambda x: x[0] != "UNKNOWN").take(3)
authors_info.mapValues(lambda x: x[2]).take(5)
# first get each book size, keyed by author
authors_data = authors_info.mapValues(lambda x: int(x[2]))
authors_data.take(5)
# then reduce by summing
authors_data.reduceByKey(lambda y,x: y+x).collect()
authors_data.reduceByKey(lambda y,x: y+x).top(5,key=lambda x: x[1])
import numpy as np
# generate the data
for pair in list(zip(np.arange(5).tolist()*5, np.random.normal(0,1,5*5))):
print(pair)
rdd = sc.parallelize(zip(np.arange(5).tolist()*5, np.random.normal(0,1,5*5)))
createCombiner = lambda value: (value,1)
# you can check what createCombiner does
# rdd.mapValues(createCombiner).collect()
# here x is the combiner (sum,count) and value is value in the
# initial RDD (the random variable)
mergeValue = lambda x, value: (x[0] + value, x[1] + 1)
# here, all combiners are summed (sum,count)
mergeCombiner = lambda x, y: (x[0] + y[0], x[1] + y[1])
sumCount = rdd.combineByKey(createCombiner,
mergeValue,
mergeCombiner)
print(sumCount.collect())
sumCount.mapValues(lambda x: x[0]/x[1]).collect()
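# The same per-key mean can also be written with aggregateByKey (shown only for
# comparison; it follows the same (sum, count) pattern as the combiners above):
sumCount_alt = rdd.aggregateByKey((0.0, 0), lambda acc, v: (acc[0] + v, acc[1] + 1), lambda a, b: (a[0] + b[0], a[1] + b[1]))
print(sumCount_alt.mapValues(lambda x: x[0] / x[1]).collect())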
createCombiner = lambda value: (value,1)
# you can check what createCombiner does
# rdd.mapValues(createCombiner).collect()
# here x is the combiner (sum,count) and value is value in the
# initial RDD (the random variable)
mergeValue = lambda x, value: (x[0] + value, x[1] + 1)
# here, all combiners are summed (sum,count)
mergeCombiner = lambda x, y: (x[0] + y[0], x[1] + y[1])
sumCount = authors_data.combineByKey(createCombiner,
mergeValue,
mergeCombiner)
print(sumCount.mapValues(lambda x: x[0]/x[1]).collect())
# I would choose the author with lowest average book size
print(sumCount.mapValues(lambda x: x[0]/x[1]).top(5,lambda x: -x[1]))
import urllib3
import re
def download_file(csv_line):
link = csv_line[0]
http = urllib3.PoolManager()
r = http.request('GET', link, preload_content=False)
response = r.read()
return str(response)
books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(","))
#books_content = books_info.map(download_file)
# while trying the function use only two samples
books_content = sc.parallelize(books_info.map(download_file).take(2))
words_rdd = books_content.flatMap(lambda x: x.split(" ")).\
flatMap(lambda x: x.split("\r\n")).\
map(lambda x: re.sub('[^0-9a-zA-Z]+', '', x).lower()).\
filter(lambda x: x != '')
words_rdd.map(lambda x: (x,1)).reduceByKey(lambda x,y: x+y).top(5, key=lambda x: x[1])
print(authors_info.groupBy(lambda x: x[0][0]).collect())
authors_info.map(lambda x: x[0]).distinct().\
map(lambda x: (x[0],1)).\
reduceByKey(lambda x,y: x+y).\
filter(lambda x: x[1]>1).\
collect()
import string
sc.parallelize(list(string.ascii_uppercase)).\
map(lambda x: (x,[])).\
cogroup(authors_info.groupBy(lambda x: x[0][0])).\
take(5)
#more info: https://www.worlddata.info/downloads/
rdd_countries = sc.textFile("../data/countries_data_clean.csv").map(lambda x: x.split(","))
#more info: http://data.worldbank.org/data-catalog/GDP-ranking-table
rdd_gdp = sc.textFile("../data/countries_GDP_clean.csv").map(lambda x: x.split(";"))
# check rdds size
hyp_final_rdd_num = rdd_gdp.count() if rdd_countries.count() > rdd_gdp.count() else rdd_countries.count()
print("The final number of elements in the joined rdd should be: ", hyp_final_rdd_num)
p_rdd_gdp = rdd_gdp.map(lambda x: (x[3],x))
p_rdd_countries = rdd_countries.map(lambda x: (x[1],x))
print(p_rdd_countries.take(1))
print(p_rdd_gdp.take(1))
p_rdd_contry_data = p_rdd_countries.join(p_rdd_gdp)
final_join_rdd_size = p_rdd_contry_data.count()
hyp = hyp_final_rdd_num == final_join_rdd_size
print("The initial hypothesis is ", hyp)
if not hyp:
print("The final joined rdd size is ", final_join_rdd_size)
n = 5
rdd_1 = sc.parallelize([(x,1) for x in range(n)])
rdd_2 = sc.parallelize([(x*2,1) for x in range(n)])
print("rdd_1: ",rdd_1.collect())
print("rdd_2: ",rdd_2.collect())
print("leftOuterJoin: ",rdd_1.leftOuterJoin(rdd_2).collect())
print("rightOuterJoin: ",rdd_1.rightOuterJoin(rdd_2).collect())
print("join: ", rdd_1.join(rdd_2).collect())
#explore what happens if a key is present twice or more
rdd_3 = sc.parallelize([(x*2,1) for x in range(n)] + [(4,2),(6,4)])
print("rdd_3: ",rdd_3.collect())
print("join: ", rdd_2.join(rdd_3).collect())
rdd_gdp = sc.textFile("../data/countries_GDP_clean.csv").map(lambda x: x.split(";"))
rdd_gdp.take(2)
#generate a pair rdd with countrycode and GDP
rdd_cc_gdp = rdd_gdp.map(lambda x: (x[1],x[4]))
rdd_cc_gdp.take(2)
rdd_countries = sc.textFile("../data/countries_data_clean.csv").map(lambda x: x.split(","))
print(rdd_countries.take(2))
#generate a pair rdd with countrycode and life expectancy
#(more info in https://www.worlddata.info/downloads/)
#we don't have countrycode in this dataset, but let's try to add it
#we have a dataset with countrynames and countrycodes
#let's take countryname and ISO 3166-1 alpha3 code
rdd_cc = sc.textFile("../data/countrycodes.csv").\
map(lambda x: x.split(";")).\
map(lambda x: (x[0].strip("\""),x[4].strip("\""))).\
filter(lambda x: x[0] != 'Country (en)')
print(rdd_cc.take(2))
rdd_cc_info = rdd_countries.map(lambda x: (x[1],x[16]))
rdd_cc_info.take(2)
#let's count and see if something is missing
print(rdd_cc.count())
print(rdd_cc_info.count())
#take only values, the name is no longer needed
rdd_name_cc_le = rdd_cc_info.leftOuterJoin(rdd_cc)
rdd_cc_le = rdd_name_cc_le.map(lambda x: x[1])
print(rdd_cc_le.take(5))
print(rdd_cc_le.count())
#what is missing?
rdd_name_cc_le.filter(lambda x: x[1][1] == None).collect()
#how can we solve this problem??
print("Is there some data missing?", rdd_cc_gdp.count() != rdd_cc_le.count())
print("GDP dataset: ", rdd_cc_gdp.count())
print("Life expectancy dataset: ", rdd_cc_le.count())
#lets try to see what happens
print(rdd_cc_le.take(10))
print (rdd_cc_gdp.take(10))
rdd_cc_gdp_le = rdd_cc_le.map(lambda x: (x[1],x[0])).leftOuterJoin(rdd_cc_gdp)
#we have some countries for which data is missing
# we have to check whether this data is available elsewhere
# or whether there is an error
rdd_cc_gdp_le.take(10)
p_rdd_contry_data.sortByKey().take(2)
p_rdd_contry_data.countByKey()["Andorra"]
p_rdd_contry_data.collectAsMap()["Andorra"]
#p_rdd_contry_data.lookup("Andorra")
rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\
.filter(lambda x: len(x)>0)\
.map(lambda x: (x.split(",")[0],x.split(",")[1].split("|")))
rdd_userinfo.take(2)
rdd_userevents = sc.textFile("../data/users_events_example/userevents_*.log")\
.filter(lambda x: len(x))\
.map(lambda x: (x.split(",")[1], [x.split(",")[2]]))
print(rdd_userevents.take(2))
rdd_joined = rdd_userinfo.join(rdd_userevents)
print(rdd_joined.count())
print(rdd_joined.filter(lambda x: (x[1][1][0] not in x[1][0])).count())
print(rdd_joined.filter(lambda x: (x[1][1][0] in x[1][0])).count())
rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\
.filter(lambda x: len(x)>0)\
.map(lambda x: (x.split(",")[0],x.split(",")[1].split("|"))).persist()
def process_new_logs(event_fite_path):
rdd_userevents = sc.textFile(event_fite_path)\
.filter(lambda x: len(x))\
.map(lambda x: (x.split(",")[1], [x.split(",")[2]]))
rdd_joined = rdd_userinfo.join(rdd_userevents)
print("Number of visits to non-subscribed topics: ",
rdd_joined.filter(lambda x: (x[1][1][0] not in x[1][0])).count())
process_new_logs("../data/users_events_example/userevents_01012016000500.log")
rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\
.filter(lambda x: len(x)>0)\
.map(lambda x: (x.split(",")[0],x.split(",")[1].split("|"))).partitionBy(10)
rdd_userinfo
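# A minimal sketch of how this partitioned table is typically used (an assumption
# completing the pattern above): persist it so the partitioning is computed once,
# and every subsequent join against a new batch of logs then avoids re-shuffling
# the large user-info table.
rdd_userinfo = rdd_userinfo.persist()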
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first thing to note is that with Spark all computation is parallelized by means of distributed data structures that are spread through the cluster. These collections are called Resilient Distributed Datasets (RDD). We will talk more about RDD, as they are the main piece in Spark.
Step2: This is a very simple first example, where we create an RDD (variable lines) and then we apply some operations (count and first) in a parallel manner. It has to be noted, that as we are running all our examples in a single computer the parallelization is not applied.
Step3: RDD Basics
Step4: It is important to note that once we have an RDD, we can run two kinds of operations
Step5: Transformations and actions are very different because of the way Spark computes RDDs.
Step6: The drawback is that Spark recomputes the RDD each time an action is applied to it.
Step7: RDD Operations
Step8: filter applies the lambda function to each line in the lines RDD; only the lines that satisfy the condition (length greater than zero) end up in the lines_nonempty variable (this RDD is not computed yet!)
Step9: Actions are the operations that return a final value to the driver program or write data to an external storage system.
Step10: Question
Step11: Working with common Spark transformations
Step12: Exercise 2
Step13: Exercise 4
Step14: Exercise 5
Step15: Exercise 6
Step16: Exercise 7
Step17: Exercise 8
Step18: Exercise 9
Step19: Exercise 10
Step20: Spark Key/Value Pairs
Step21: Exercise 2
Step22: Exercise 3
Step23: Transformations on Pair RDDs
Step24: Exercise 2
Step25: Exercise 3
Step26: Exercise 4
Step27: Exercise 5
Step28: Exercise 6
Step29: Exercise 7
Step30: Joins
Step31: Left and Right outer Joins
Step32: Exercise
Step33: Inspect the dataset with life expectancy.
Step34: We have some missing data that we should complete, but since we have quite a lot of data, let's continue.
Step35: Sort Data
Step36: Actions over Pair RDDs
Step37: Data Partitioning
Step38: The application periodically combines this table with a smaller file representing events that happened in the past five minutes, say a table of (UserID, LinkInfo) pairs for users who have clicked a link on a website in those five minutes.
Step39: For example, we may wish to count how many users visited a link that was not to one of their subscribed topics. We can perform this combination with Spark's join() operation, which can be used to group the User Info and LinkInfo pairs for each UserID by key.
Step40: Imagine that we want to count the number of visits to non-subscribed visits using a function.
Step41: This code will run fine as is, but it will be inefficient.
|
14,683 | <ASSISTANT_TASK:>
Python Code:
from math import *
L = 500
sigma0 = 5.8e7
alpha = 0.0039
d = 0.2e-3
T0 = 20
# The cross section area
S = pi/4*d**2
# Resistance @ -45
R_1 = L/(sigma0*S)*(1+alpha*(-45-T0))
# Resistance @ +10
R_2 = L/(sigma0*S)*(1+alpha*(+10-T0))
print('R(-45) = %2.2f Ohm' % (R_1))
print('R(+10) = %2.2f Ohm' % (R_2))
V = 1.5
# I @ -45
I_1 = V/R_1
# I @ +10
I_2 = V/R_2
print('I(-45) = %2.2f mA' % (I_1*1000)) # we multiply by 1000 to express in milliAmpere
print('I(+10) = %2.2f mA' % (I_2*1000))
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
T = np.linspace(-45,10)
R = L/(sigma0*S)*(1+alpha*(T-T0))
I = 1.5/R
# We plot I vs T
plt.plot(T,I*1000);
plt.plot(np.array([-45, 10]), np.array([I_1, I_2])*1000,':');
plt.ylabel('Sensor current [mA]')
plt.xlabel('Temperature [C]')
plt.legend(['Real transfer function', 'Linear approximation'])
plt.show()
Sensitivity = (I_2-I_1)/(10-(-45))
print('The sensitivity is %2.2f uA/ºC' % (Sensitivity*1e6))
# P @ -45
P_1 = I_1**2*R_1
# P @ +10
P_2 = I_2**2*R_2
print('P(-45) = %2.2f mW' % (P_1*1000)) # we multiply by 1000 to express in milliWatt
print('P(+10) = %2.2f mW' % (P_2*1000))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we compute the currents by $I=V/R$.
Step2: We know the resistance is linear with the temperature. However, the current is not. We can check how large the non-linearity is.
Step3: Although the current is not linear with temperature, it can be approximated as linear. We can compute the approximate sensitivity of the sensor to get a sense of how the current changes with temperature. We can approximate the sensitivity as $S \approx \Delta I / \Delta T = \frac{I(10) - I(-45)}{10 - (-45)}$.
Step4: Although it is relatively small, it is still measurable. Any digital ammeter should be able to measure at least a change in $10 \mu$A.
|
14,684 | <ASSISTANT_TASK:>
Python Code:
import logging
import random
import time
import matplotlib.pyplot as plt
import mxnet as mx
from mxnet import gluon, nd, autograd
import numpy as np
batch_size = 128
epochs = 5
ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()
lr = 0.01
train_dataset = gluon.data.vision.MNIST(train=True)
test_dataset = gluon.data.vision.MNIST(train=False)
def transform(x,y):
x = x.transpose((2,0,1)).astype('float32')/255.
y1 = y
y2 = y % 2 #odd or even
return x, np.float32(y1), np.float32(y2)
train_dataset_t = train_dataset.transform(transform)
test_dataset_t = test_dataset.transform(transform)
train_data = gluon.data.DataLoader(train_dataset_t, shuffle=True, last_batch='rollover', batch_size=batch_size, num_workers=5)
test_data = gluon.data.DataLoader(test_dataset_t, shuffle=False, last_batch='rollover', batch_size=batch_size, num_workers=5)
print("Input shape: {}, Target Labels: {}".format(train_dataset[0][0].shape, train_dataset_t[0][1:]))
class MultiTaskNetwork(gluon.HybridBlock):
def __init__(self):
super(MultiTaskNetwork, self).__init__()
self.shared = gluon.nn.HybridSequential()
with self.shared.name_scope():
self.shared.add(
gluon.nn.Dense(128, activation='relu'),
gluon.nn.Dense(64, activation='relu'),
gluon.nn.Dense(10, activation='relu')
)
self.output1 = gluon.nn.Dense(10) # Digit recognition
self.output2 = gluon.nn.Dense(1) # odd or even
def hybrid_forward(self, F, x):
y = self.shared(x)
output1 = self.output1(y)
output2 = self.output2(y)
return output1, output2
loss_digits = gluon.loss.SoftmaxCELoss()
loss_odd_even = gluon.loss.SigmoidBCELoss()
mx.random.seed(42)
random.seed(42)
net = MultiTaskNetwork()
net.initialize(mx.init.Xavier(), ctx=ctx)
net.hybridize() # hybridize for speed
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate':lr})
def evaluate_accuracy(net, data_iterator):
acc_digits = mx.metric.Accuracy(name='digits')
acc_odd_even = mx.metric.Accuracy(name='odd_even')
for i, (data, label_digit, label_odd_even) in enumerate(data_iterator):
data = data.as_in_context(ctx)
label_digit = label_digit.as_in_context(ctx)
label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1)
output_digit, output_odd_even = net(data)
acc_digits.update(label_digit, output_digit.softmax())
acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5)
return acc_digits.get(), acc_odd_even.get()
alpha = 0.5 # Combine losses factor
for e in range(epochs):
# Accuracies for each task
acc_digits = mx.metric.Accuracy(name='digits')
acc_odd_even = mx.metric.Accuracy(name='odd_even')
# Accumulative losses
l_digits_ = 0.
l_odd_even_ = 0.
for i, (data, label_digit, label_odd_even) in enumerate(train_data):
data = data.as_in_context(ctx)
label_digit = label_digit.as_in_context(ctx)
label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1)
with autograd.record():
output_digit, output_odd_even = net(data)
l_digits = loss_digits(output_digit, label_digit)
l_odd_even = loss_odd_even(output_odd_even, label_odd_even)
# Combine the loss of each task
l_combined = (1-alpha)*l_digits + alpha*l_odd_even
l_combined.backward()
trainer.step(data.shape[0])
l_digits_ += l_digits.mean()
l_odd_even_ += l_odd_even.mean()
acc_digits.update(label_digit, output_digit.softmax())
acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5)
print("Epoch [{}], Acc Digits {:.4f} Loss Digits {:.4f}".format(
e, acc_digits.get()[1], l_digits_.asscalar()/(i+1)))
print("Epoch [{}], Acc Odd/Even {:.4f} Loss Odd/Even {:.4f}".format(
e, acc_odd_even.get()[1], l_odd_even_.asscalar()/(i+1)))
print("Epoch [{}], Testing Accuracies {}".format(e, evaluate_accuracy(net, test_data)))
def get_random_data():
idx = random.randint(0, len(test_dataset))
img = test_dataset[idx][0]
data, _, _ = test_dataset_t[idx]
data = data.as_in_context(ctx).expand_dims(axis=0)
plt.imshow(img.squeeze().asnumpy(), cmap='gray')
return data
data = get_random_data()
digit, odd_even = net(data)
digit = digit.argmax(axis=1)[0].asnumpy()
odd_even = (odd_even.sigmoid()[0] > 0.5).asnumpy()
print("Predicted digit: {}, odd: {}".format(digit, odd_even))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parameters
Step2: Data
Step3: We assign the transform to the original dataset
Step4: We load the datasets into DataLoaders
Step5: Multi-task Network
Step6: We can use two different losses, one for each output
Step7: We create and initialize the network
Step8: Evaluate Accuracy
Step9: Training Loop
Step10: Testing
|
14,685 | <ASSISTANT_TASK:>
Python Code:
# 10 seconds of video.
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview(alpha=200)
sleep(10)
camera.stop_preview()
# Save an image
camera.start_preview()
sleep(5)
camera.capture('/home/pi/Desktop/image.jpg')
camera.stop_preview()
# It is important to sleep for at least 2 seconds before capturing the image.
# 5 consecutive images
camera.start_preview()
for i in range(5):
sleep(5)
camera.capture('/home/pi/Desktop/image{}.jpg'.format(i))
camera.stop_preview()
camera.start_preview()
camera.start_recording('/home/pi/video.h264')
sleep(10)
camera.stop_recording()
camera.stop_preview()
camera.resolution = (2592, 1944)
camera.framerate = 15
camera.start_preview()
sleep(5)
camera.capture('/home/pi/Desktop/max.jpg')
camera.stop_preview()
camera.start_preview()
camera.annotate_text = "Hello world!"
sleep(5)
camera.capture('/home/pi/Desktop/text.jpg')
camera.stop_preview()
camera.annotate_text_size = 50
from picamera import PiCamera, Color
camera.start_preview()
camera.annotate_background = Color('blue')
camera.annotate_foreground = Color('yellow')
camera.annotate_text = " Hello world "
sleep(5)
camera.stop_preview()
# brightness
camera.start_preview()
for i in range(100):
camera.annotate_text = "Brightness: {}s".format(i)
camera.brightness = i
sleep(0.1)
camera.stop_preview()
# contrast
camera.start_preview()
for i in range(100):
camera.annotate_text = "Contrast: {}s".format(i)
camera.contrast = i
sleep(0.1)
camera.stop_preview()
camera.start_preview()
camera.image_effect = 'colorswap'
sleep(5)
camera.capture('/home/pi/Desktop/colorswap.jpg')
camera.stop_preview()
camera.start_preview()
for effect in camera.IMAGE_EFFECTS:
camera.image_effect = effect
camera.annotate_text = "Effect: {}".format(effect)
sleep(5)
camera.stop_preview()
camera.start_preview()
camera.awb_mode = 'sunlight'
sleep(5)
camera.capture('/home/pi/Desktop/sunlight.jpg')
camera.stop_preview()
camera.start_preview()
camera.exposure_mode = 'beach'
sleep(5)
camera.capture('/home/pi/Desktop/beach.jpg')
camera.stop_preview()
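# A similar loop can preview every exposure mode (a sketch, analogous to the
# IMAGE_EFFECTS loop above; picamera lists the valid names in camera.EXPOSURE_MODES):
camera.start_preview()
for mode in camera.EXPOSURE_MODES:
    camera.exposure_mode = mode
    camera.annotate_text = "Exposure: {}".format(mode)
    sleep(2)
camera.stop_preview()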
from twython import Twython
from picamera import PiCamera
from time import sleep
from datetime import datetime
import random
from auth import (
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
twitter = Twython(
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
camera = PiCamera()
timestamp = datetime.now().isoformat()
photo_path = '/home/pi/tweeting-babbage/photos/{}.jpg'.format(timestamp)
sleep(3)
camera.capture(photo_path)
with open(photo_path, 'rb') as photo:
twitter.update_status_with_media(status=message, media=photo)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Saving an image
Step2: RECORDING A VIDEO
Step3: EFFECTS
Step4: The minimum resolution is 64x64; try taking a photo at this resolution
Step5: The text size can be adjusted like this
Step6: Valid sizes range from 6 to 160. The default is 32.
Step7: 3. Adjusting brightness and contrast
Step8: 4. Applying effects
Step9: To try them all we can do
Step10: With camera.awb_mode you can adjust the white balance levels. The options are off, auto, sunlight, cloudy, shade, tungsten, fluorescent, incandescent, flash and horizon.
Step11: With camera.exposure_mode you can set the exposure mode. The options are off, auto, night, nightpreview, backlight, spotlight, sports, snow, beach, verylong, fixedfps, antishake and fireworks. The default option is auto.
Step12: IoT project -> Post a tweet with a photo taken with the Pi camera
|
14,686 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import datetime
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import roc_auc_score
congr_datasetDF = pd.DataFrame.from_csv('https://raw.githubusercontent.com/oslugr/contaminAND/master/datos/contaminAND-gr-congresos.csv').reset_index()
congr_datasetDF['date'] = congr_datasetDF['date'].apply(lambda x: pd.to_datetime(x,dayfirst = True))
congr_datasetDF
# We derive two new columns: day of the week and time of day (in minutes)
congr_datasetDF['WEEKDAY'] = congr_datasetDF['date'].apply(lambda x: x.weekday())
congr_datasetDF['TIME_minutesofday'] = congr_datasetDF['date'].apply(lambda x: x.hour*60+x.minute)
congr_datasetDF = congr_datasetDF.drop(['date'],axis=1)
congr_datasetDF.info()
congr_datasetDF = congr_datasetDF.apply(lambda x: pd.to_numeric(x, errors='coerce'))
congr_datasetDF.isnull().any()
congr_datasetDF.isnull().sum()
def media(column):
column = column.fillna(value=column.mean())
return column
congr_datasetDF = congr_datasetDF.apply(media,axis=0)
# Dummy variable indicating whether the row falls on a weekend (here Friday-Sunday)
def fin_semana(row):
if row.WEEKDAY >= 4: # pandas weekday(): Monday=0 ... Sunday=6, so Fri-Sun is >= 4
return 1
else:
return 0
congr_datasetDF['WEEKEND_VD'] = congr_datasetDF.apply(fin_semana,axis=1)
def fin_semana(row):
if row.WEEKDAY >= 5: # Saturday or Sunday (Monday=0 ... Sunday=6)
return 1
else:
return 0
# Dummy variable indicating whether the row falls on a weekend (here Saturday-Sunday)
congr_datasetDF['WEEKEND_SD'] = congr_datasetDF.apply(fin_semana,axis=1)
congr_datasetDF
X_bis = congr_datasetDF.copy()
y = X_bis.pop('NO2')
model = RandomForestRegressor(n_estimators = 100,oob_score=True,n_jobs=-1)
model.fit(X_bis,y)
score = model.oob_score_
feature_importances = model.feature_importances_
feature_importances_Series = pd.Series(data=feature_importances,index=X_bis.columns).sort_values(ascending=False)
features = feature_importances_Series
print 'Precision del modelo para el analisis de la variable "NO2": ',score
print 'Variables que explican los valores de NO2 \n \n',features
X_bis = congr_datasetDF.copy()
y = X_bis.pop('PART')
model = RandomForestRegressor(n_estimators = 100,oob_score=True,n_jobs=-1)
model.fit(X_bis,y)
score = model.oob_score_
feature_importances = model.feature_importances_
feature_importances_Series = pd.Series(data=feature_importances,index=X_bis.columns).sort_values(ascending=False)
features = feature_importances_Series
print 'Precision del modelo para el analisis de la variable "PART": ' ,score
print 'Variables que explican los valores de PART \n \n',features
X_bis = congr_datasetDF.copy()
y = X_bis.pop('SO2')
model = RandomForestRegressor(n_estimators = 100,oob_score=True,n_jobs=-1)
model.fit(X_bis,y)
score = model.oob_score_
feature_importances = model.feature_importances_
feature_importances_Series = pd.Series(data=feature_importances,index=X_bis.columns).sort_values(ascending=False)
features = feature_importances_Series
print 'Precision del modelo para el analisis de la variable "SO2": ',score
print 'Variables que explican los valores de SO2 \n \n',features
X_bis = congr_datasetDF.copy()
y = X_bis.pop('O3')
model = RandomForestRegressor(n_estimators = 100,oob_score=True,n_jobs=-1)
model.fit(X_bis,y)
score = model.oob_score_
feature_importances = model.feature_importances_
feature_importances_Series = pd.Series(data=feature_importances,index=X_bis.columns).sort_values(ascending=False)
features = feature_importances_Series
print 'Precision del modelo para el analisis de la variable "O3": ',score
print 'Variables que explican los valores de O3 \n \n',features
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Force the columns to numeric values. Wherever conversion is not possible the data was missing or textual, so those entries become np.nan; further below we replace them with the mean of the column they belong to (a compact alternative is sketched after this list). As the counts above show, the only column seriously affected is 'PART', which loses almost 15,000 values; the rest are in good shape.
Step2: Now we analyse which variables most influence the NO2 levels
Step3: Now we analyse which variables most influence the particulate (PART) levels
Step4: Now we analyse which variables most influence the SO2 levels
Step5: Now we analyse which variables most influence the O3 levels
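A minimal sketch of the two ideas above, assuming the DataFrame and imports already defined; rank_features is a hypothetical helper name that factors out the four repeated RandomForest blocks:

# mean imputation in one line (pandas fills each column with its own mean)
congr_datasetDF = congr_datasetDF.fillna(congr_datasetDF.mean())

def rank_features(df, target):
    X = df.copy()
    y = X.pop(target)
    model = RandomForestRegressor(n_estimators=100, oob_score=True, n_jobs=-1)
    model.fit(X, y)
    importances = pd.Series(model.feature_importances_, index=X.columns)
    return model.oob_score_, importances.sort_values(ascending=False)

for pollutant in ['NO2', 'PART', 'SO2', 'O3']:
    score, features = rank_features(congr_datasetDF, pollutant)
    print 'Model precision for "{}": {}'.format(pollutant, score)
    print features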
|
14,687 | <ASSISTANT_TASK:>
Python Code:
# Authors: Alex Rockhill <aprockhill@mailbox.org>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True,
exclude=raw.info['bads']).load_data()
events = mne.find_events(raw)
raw.set_eeg_reference(projection=True).apply_proj()
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
raw.plot_psd()
raw_csd.plot_psd()
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evoked = epochs['auditory'].average()
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
for j, m in enumerate([5, 4, 3, 2]):
this_evoked_csd = mne.preprocessing.compute_current_source_density(
evoked, stiffness=m, lambda2=lambda2)
this_evoked_csd.plot_topomap(
0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('stiffness=%i\nฮปยฒ=%s' % (m, lambda2))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load sample subject data
Step2: Plot the raw data and CSD-transformed raw data
Step3: Also look at the power spectral densities
Step4: CSD can also be computed on Evoked (averaged) data.
Step5: First let's look at how CSD affects scalp topography
Step6: CSD has parameters stiffness and lambda2 affecting smoothing and spline flexibility, respectively; the grid of topomaps above varies both to show their effect.
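A minimal sketch, assuming the raw object loaded above, of picking one explicit parameter combination instead of scanning the whole grid:

raw_csd_custom = mne.preprocessing.compute_current_source_density(raw, stiffness=3, lambda2=1e-5)
raw_csd_custom.plot_psd()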
|
14,688 | <ASSISTANT_TASK:>
Python Code:
import numpy as np  # needed for np.min / np.max below
# `data` is assumed to be a pandas DataFrame of NOAA GHCN-Daily records
# (columns STATION, DATE, TMIN, TMAX, PRCP) loaded in an earlier cell.
data2 = data[(data.TMIN>-9999)]
data3 = data2[(data2.DATE>=20150601) & (data2.DATE<=20150630) & (data2.PRCP>0)]
stations = data2[(data2.STATION=='GHCND:USC00047326') | (data2.STATION=='GHCND:USC00047902') | (data2.STATION=='GHCND:USC00044881')]
st = stations.groupby(['STATION'])
temp = st.agg({'TMIN' : [np.min], 'TMAX' : [np.max]})
temp.plot(kind='bar')
june = stations[(stations.DATE>=20150601) & (stations.DATE<=20150630)]
rain = june.groupby(['STATION'])
rain.plot('DATE','PRCP')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So we can print data3 and then pick, from the table that is printed, the stations we want to look at.
Step2: Looking at the plot above, all three cities show a large variation in temperature over the observation period; the variation is most pronounced in Lee Vining.
|
14,689 | <ASSISTANT_TASK:>
Python Code:
import sympy
x, u = sympy.symbols('x u', real=True)
U = sympy.Function('U')(x,u)
U
x = sympy.Symbol('x',real=True)
y = sympy.Function('y')(x)
U = sympy.Function('U')(x,y)
X = sympy.Function('X')(x,y)
Y = sympy.Function('Y')(X)
sympy.pprint(sympy.diff(U,x))
sympy.pprint( sympy.diff(Y,x))
sympy.pprint( sympy.diff(Y,x).args[0] )
sympy.pprint( sympy.diff(U,x)/ sympy.diff(Y,x).args[0])
YprimeX = sympy.diff(U,x)/sympy.diff(Y,x).args[0]
sympy.pprint( sympy.diff(YprimeX,x).simplify() )
sympy.factor_list( sympy.diff(Y,x)) # EY 20160522 I don't know how to simply obtain the factors of an expression
# EY 20160522 update resolved: look at above and look at this page; it explains all:
# http://docs.sympy.org/dev/tutorial/manipulation.html
t, x, u, u_1, x_t, u_t, u_1t = sympy.symbols('t x u u_1 x_t u_t u_1t', real=True)
X = -u_1
U = u - x*u_1
U_1 = x
from sympy import Derivative, diff  # `expr` below is the function parameter, not a sympy import
def difftotal(expr, diffby, diffmap):
    """Take the total derivative with respect to a variable.
    Example:
        theta, t, theta_dot = symbols("theta t theta_dot")
        difftotal(cos(theta), t, {theta: theta_dot})
    returns
        -theta_dot*sin(theta)
    """
# Replace all symbols in the diffmap by a functional form
fnexpr = expr.subs({s:s(diffby) for s in diffmap})
# Do the differentiation
diffexpr = diff(fnexpr, diffby)
# Replace the Derivatives with the variables in diffmap
derivmap = {Derivative(v(diffby), diffby):dv
for v,dv in diffmap.iteritems()}
finaldiff = diffexpr.subs(derivmap)
# Replace the functional forms with their original form
return finaldiff.subs({s(diffby):s for s in diffmap})
difftotal( U,t,{x:x_t, u:u_t, u_1:u_1t}) + (-U_1)* (-u_1t)
x = sympy.Symbol('x',real=True)
u = sympy.Function('u')(x)
U = x
X = u
Y = sympy.Function('Y')(X)
sympy.pprint( sympy.diff(Y,x))
sympy.pprint(sympy.diff(U,x))
sympy.pprint( 1/ sympy.diff(Y,x).args[0])
sympy.pprint( sympy.diff( 1/ sympy.diff(Y,x).args[0], x))
x = sympy.Symbol('x',real=True)
y = sympy.Function('y')(x)
U = sympy.Function('U')(x,y)
X = sympy.Function('X')(x,y)
Y = sympy.Function('Y')(X)
sympy.pprint(sympy.diff(U,x))
sympy.pprint( sympy.diff(Y,x))
sympy.pprint( sympy.diff(Y,x).args[0] )
sympy.pprint( sympy.diff(U,x)/ sympy.diff(Y,x).args[0])
YprimeX = sympy.diff(U,x)/sympy.diff(Y,x).args[0]
Yprime2X = sympy.diff(YprimeX,x)
sympy.pprint( Yprime2X.simplify() )
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The case of a(n arbitrary) point transformation
Step2: For $Y''(X)$,
Step4: cf. How to do total derivatives
Step5: This transformation is the Legendre transformation
Step6: And so $Y'(X)$ is
Step7: And so $Y''(X)$ is
Step8: cf. (2) from 4. Exercises, Chapter 2 Lie Transformations pp. 20
Step9: For $Y''(X)$,
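For reference, the standard chain-rule (prolongation) relations behind these symbolic manipulations are, writing $D_x$ for the total derivative in $x$,

$$Y'(X) = \frac{D_x Y}{D_x X}, \qquad Y''(X) = \frac{D_x\big(Y'(X)\big)}{D_x X},$$

which is why each higher derivative is obtained by differentiating the previous one in $x$ and dividing by $D_x X$.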
|
14,690 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Image('fermidist.png')
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
H = (1/((np.e)**((energy - mu)/kT)+1))
return H
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
?np.arange
def plot_fermidist(mu, kT):
energy = np.linspace(0.,10.0,50)
plt.plot(fermidist(energy, mu, kT), energy)
plt.title('The Fermi Distribution')
plt.grid(True)
plt.xlabel('F [(Unitless)]')
plt.ylabel('Energy [(eV)]')
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
interact(plot_fermidist, mu = [0.0,5.0], kT=[0.1,10.0]);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring the Fermi distribution
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Step5: Use interact with plot_fermidist to explore the distribution
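For reference, the equation referred to in Step 3 (and implemented by fermidist above) is the Fermi-Dirac distribution

$$F(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/kT} + 1}.$$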
|
14,691 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import sys, os, copy, logging, socket, time
import numpy as np
import pylab as plt
#from ndparse.algorithms import nddl as nddl
#import ndparse as ndp
sys.path.append('..'); import ndparse as ndp
try:
logger
except:
# do this precisely once
logger = logging.getLogger("deploy_model")
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setFormatter(logging.Formatter('[%(asctime)s:%(name)s:%(levelname)s] %(message)s'))
logger.addHandler(ch)
print("Running on system: %s" % socket.gethostname())
# Load previously trained CNN weights
weightsFile = './isbi2012_weights_e025.h5'
if True:
# Using a local copy of data volume
#inDir = '/Users/graywr1/code/bio-segmentation/data/ISBI2012/'
inDir = '/home/pekalmj1/Data/EM_2012'
Xtrain = ndp.nddl.load_cube(os.path.join(inDir, 'train-volume.tif'))
Ytrain = ndp.nddl.load_cube(os.path.join(inDir, 'train-labels.tif'))
Xtest = ndp.nddl.load_cube(os.path.join(inDir, 'test-volume.tif'))
else:
# example of using ndio database call
import ndio.remote.neurodata as ND
tic = time.time()
nd = ND()
token = 'kasthuri11cc'
channel = 'image'
xstart, xstop = 5472, 6496
ystart, ystop = 8712, 9736
zstart, zstop = 1000, 1100
res = 1
Xtest = nd.get_cutout(token, channel, xstart, xstop, ystart, ystop, zstart, zstop, resolution=res)
Xtest = np.transpose(Xtest, [2, 0, 1])
Xtest = Xtest[:, np.newaxis, :, :] # add a channel dimension
print 'time elapsed is: {} seconds'.format(time.time()-tic)
# show some details. Note that data tensors are assumed to have dimensions:
# (#slices, #channels, #rows, #columns)
#
print('Test data shape is: %s' % str(Xtest.shape))
plt.imshow(Xtest[0,0,...], interpolation='none', cmap='bone')
plt.title('test volume, slice 0')
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.show()
# In the interest of time, only deploy on one slice (z-dimension) of the test volume
# *and* only evaluate a subset of the pixels in that slice.
#
# Note: depending upon your system (e.g. CPU vs GPU) this may take a few minutes...
#
tic = time.time()
P0 = ndp.nddl.fit(Xtest, weightsFile, slices=[0,], evalPct=.1, log=logger)
print("Time to deploy: %0.2f sec" % (time.time() - tic))
# The shape of the probability estimate tensor is:
# (#slices, #classes, #rows, #cols)
print('Class probabilities shape: %s' % str(P0.shape))
# Use a simple interpolation scheme to fill in "missing" values
# (i.e. those pixels we did not evaluate using the CNN).
#
Pint = ndp.nddl.interpolate_nn(P0)
# visualize
plt.imshow(P0[0,0,...]); plt.colorbar()
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.title('Class Estimates (slice 0, subsampled)')
plt.show()
plt.imshow(Pint[0,0,...]); plt.colorbar()
plt.title('Class Estimates: (slice 0, interpolated)')
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
|
14,692 | <ASSISTANT_TASK:>
Python Code:
# Create a SystemML MLContext object
from systemml import MLContext, dml
ml = MLContext(sc)
%%sh
mkdir -p data/mnist/
cd data/mnist/
curl -O https://pjreddie.com/media/files/mnist_train.csv
curl -O https://pjreddie.com/media/files/mnist_test.csv
script_string = """
source("nn/examples/mnist_lenet.dml") as mnist_lenet
# Read training data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
images = data[,2:ncol(data)]
labels = data[,1]
# Scale images to [-1,1], and one-hot encode the labels
images = (images / 255.0) * 2 - 1
labels = table(seq(1, n), labels+1, n, 10)
# Split into training (55,000 examples) and validation (5,000 examples)
X = images[5001:nrow(images),]
X_val = images[1:5000,]
y = labels[5001:nrow(images),]
y_val = labels[1:5000,]
# Train
epochs = 10
[W1, b1, W2, b2, W3, b3, W4, b4] = mnist_lenet::train(X, y, X_val, y_val, C, Hin, Win, epochs)
"""
script = (dml(script_string).input("$data", "data/mnist/mnist_train.csv")
.input(C=1, Hin=28, Win=28)
.output("W1", "b1", "W2", "b2", "W3", "b3", "W4", "b4"))
W1, b1, W2, b2, W3, b3, W4, b4 = (ml.execute(script)
.get("W1", "b1", "W2", "b2", "W3", "b3", "W4", "b4"))
script_string = """
source("nn/examples/mnist_lenet.dml") as mnist_lenet
# Read test data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
X_test = data[,2:ncol(data)]
y_test = data[,1]
# Scale images to [-1,1], and one-hot encode the labels
X_test = (X_test / 255.0) * 2 - 1
y_test = table(seq(1, n), y_test+1, n, 10)
# Eval on test set
probs = mnist_lenet::predict(X_test, C, Hin, Win, W1, b1, W2, b2, W3, b3, W4, b4)
[loss, accuracy] = mnist_lenet::eval(probs, y_test)
print("Test Accuracy: " + accuracy)
"""
script = dml(script_string).input(**{"$data": "data/mnist/mnist_train.csv",
"C": 1, "Hin": 28, "Win": 28,
"W1": W1, "b1": b1,
"W2": W2, "b2": b2,
"W3": W3, "b3": b3,
"W4": W4, "b4": b4})
ml.execute(script)
W1_df = W1.toDF()
b1_df = b1.toDF()
W2_df = W2.toDF()
b2_df = b2.toDF()
W3_df = W3.toDF()
b3_df = b3.toDF()
W4_df = W4.toDF()
b4_df = b4.toDF()
W1_df, b1_df, W2_df, b2_df, W3_df, b3_df, W4_df, b4_df
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download Data - MNIST
Step3: SystemML "LeNet" Neural Network
Step5: 2. Compute Test Accuracy
Step6: 3. Extract Model Into Spark DataFrames For Future Use
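A minimal sketch of the "future use" part, assuming the Spark DataFrames created above; the output path is hypothetical:

for name, df in [("W1", W1_df), ("b1", b1_df), ("W2", W2_df), ("b2", b2_df),
                 ("W3", W3_df), ("b3", b3_df), ("W4", W4_df), ("b4", b4_df)]:
    df.write.mode("overwrite").parquet("mnist_lenet_model/{}.parquet".format(name))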
|
14,693 | <ASSISTANT_TASK:>
Python Code:
class BankAccount:
"Represents a bank account."
def __init__(self, account_number=None):
"Initialize or create a new account."
self.account_number = ''
self.__balance = 0
self.holder = None
self._transactions = []
if account_number is None:
            self._create_new_account()
else:
self.account_number = account_number
            self._fetch_account_data(account_number)
def get_balance(self):
"Return the actual balance."
return self.__balance
def get_available_amount(self):
"Return the maximum amount which can be withdrawn."
rv = self.__balance + self._calculate_overdraft()
if rv < 0:
rv = 0
return rv
def make_deposit(self, amount, text):
"Add money to this account."
self.__balance += amount
        self._transactions.append(Transaction(amount, info=text))
def withdraw(self, amount, text):
"Withdraw amount from this account."
overdraft = self._calculate_overdraft()
if self.__balance + overdraft - amount > 0:
self.__balance -= amount
            self._transactions.append(Transaction(amount * -1, info=text))
return True
else:
raise BalanceTooLowException('You max amount is ...')
def transfer_money(self, target_account, amount, text):
"Transfer money from this account to target_account."
overdraft = self._calculate_overdraft()
if self.__balance + overdraft - amount > 0:
# start some complecated procedurce which transfers money
            self._transactions.append(Transaction(amount * -1, target=target_account))
else:
raise BalanceTooLowException('You max amount is ...')
def _create_new_account(self):
"Create initial data for new account"
self.holder = input('Enter name of account holder:')
def _fetch_account_data(self, account_number):
"Fetch some data from e.g. a database"
# Missing: the DB code
self.balance = account_data.get_balance()
self.holder = account_data.get_holder()
class Product:
    def __init__(self, name, price):
self.name = name
self.price = price
def get_gross_price(self):
"Return gross price (with tax)."
        return self.price + self.price / 100 * 20
class MaxBorrowingsError(Exception): pass
class BookNotBorrowedException(Exception): pass
class User:
def __init__(self, firstname, lastname):
self.max_books = 10
self.firstname = firstname
self.lastname = lastname
self.borrowed_books = []
def borrow_book(self, book):
if len(self.borrowed_books) < self.max_books:
self.borrowed_books.append(book)
else:
raise MaxBorrowingsError('You have exceeded the number of books you are '
'allowed to borrow.')
def return_book(self, book):
if book in self.borrowed_books:
self.borrowed_books.remove(book)
else:
raise BookNotBorrowedException('You did not borrow this book!')
class Student(User):
def __init__(self, firstname, lastname, matrikelnummer):
self.max_books = 10
self.firstname = firstname
self.lastname = lastname
self.matrikelnummer = matrikelnummer
self.borrowed_books = []
otto = Student('Otto', 'Huber', '017844556')
otto.borrow_book('A Book')
print(otto.borrowed_books)
otto.return_book('A Book')
print(otto.borrowed_books)
type(otto)
type(otto) is Student
isinstance(otto, Student)
isinstance(otto, User)
class User2:
def __init__(self, firstname, lastname):
self.max_books = 10
self.firstname = firstname
self.lastname = lastname
self.borrowed_books = []
class Student2(User2):
def __init__(self, firstname, lastname, matrikelnummer):
super().__init__(firstname, lastname)
self.matrikelnummer = matrikelnummer
anna = Student2('Anna', 'Meier', '0175546655')
print(anna.firstname, anna.matrikelnummer)
import math
class Rectangle:
def __init__(self, length, width):
self.length = length
self.width = width
def get_area(self):
return self.length * self.width
class Circle:
def __init__(self, radius):
self.radius = radius
def get_area(self):
return self.radius ** 2 * math.pi
rect = Rectangle(60, 40)
circ = Circle(25)
print(rect.get_area())
print(circ.get_area())
figures = [Rectangle(72, 55), Circle(42), Rectangle(22,19)]
sum([fig.get_area() for fig in figures])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That is quite a lot of code. From the point of view of encapsulation, two things are essential here
Step2: If the VAT rate changes, this can be changed inside the get_gross_price() method. Existing code that uses Product objects does not need to be touched. This becomes far more important when, for example, the way banks execute transfers changes
Step3: A student is a special kind of user who additionally has a matriculation number
Step4: Two things are worth noting here
Step5: We can see that the methods borrow_book() and return_book() are available even though we never defined them while writing the Student class. They come from the base class User. This mechanism is very powerful, because we only need to change the parts of a class that make up the special case of the parent class.
Step6: We can also test for a specific type
Step7: With isinstance() we can even test whether a value has a type that sits further up the inheritance chain
Step8: Since inheritance always means specialisation, in our type hierarchy a Student is always also a User.
Step9: Polymorphism
Step10: The advantage is that we can use the shapes in a uniform way. For example, to compute the total area of several geometric shapes we can do the following (another shape that fits the same pattern is sketched after this list)
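A minimal sketch with one additional (hypothetical) shape class: anything that offers a get_area() method can join the same list, no common base class required (duck typing).

class Square:
    def __init__(self, side):
        self.side = side

    def get_area(self):
        return self.side ** 2

figures = [Rectangle(72, 55), Circle(42), Square(30)]
print(sum(fig.get_area() for fig in figures))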
|
14,694 | <ASSISTANT_TASK:>
Python Code:
import json
from npoapi import Media  # assuming the npoapi client package that provides the POMS Media client
client = Media(env="test", debug=False).configured_login(create_config_file=True)
client.url
result = client.get("POMS_NTR_388772")
print(json.dumps(json.loads(result), indent=1))
client.get("bla")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The credentials were read from a config file. If that file had not existed, the user would have been asked to provide the api key, secret and origin, and the config file would then have been created for next time.
Step2: Now you can do actual requests
|
14,695 | <ASSISTANT_TASK:>
Python Code:
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) / 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
print quicksort([3,6,8,10,1,2,1])
x = 3
print x, type(x)
print x + 1 # Addition;
print x - 1 # Subtraction;
print x * 2 # Multiplication;
print x ** 2 # Exponentiation;
x += 1
print x # Prints "4"
x *= 2
print x # Prints "8"
y = 2.5
print type(y) # Prints "<type 'float'>"
print y, y + 1, y * 2, y ** 2 # Prints "2.5 3.5 5.0 6.25"
t, f = True, False
print type(t) # Prints "<type 'bool'>"
print t and f # Logical AND;
print t or f # Logical OR;
print not t # Logical NOT;
print t != f # Logical XOR;
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter.
print hello, len(hello)
hw = hello + ' ' + world # String concatenation
print hw # prints "hello world"
hw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting
print hw12 # prints "hello world 12"
s = "hello"
print s.capitalize() # Capitalize a string; prints "Hello"
print s.upper() # Convert a string to uppercase; prints "HELLO"
print s.rjust(7) # Right-justify a string, padding with spaces; prints " hello"
print s.center(7) # Center a string, padding with spaces; prints " hello "
print s.replace('l', '(ell)') # Replace all instances of one substring with another;
# prints "he(ell)(ell)o"
print ' world '.strip() # Strip leading and trailing whitespace; prints "world"
xs = [3, 1, 2] # Create a list
print xs, xs[2]
print xs[-1] # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
print xs
xs.append('bar') # Add a new element to the end of the list
print xs
x = xs.pop() # Remove and return the last element of the list
print x, xs
nums = range(5) # range is a built-in function that creates a list of integers
print nums # Prints "[0, 1, 2, 3, 4]"
print nums[2:4] # Get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print nums[2:] # Get a slice from index 2 to the end; prints "[2, 3, 4]"
print nums[:2] # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print nums[:] # Get a slice of the whole list; prints ["0, 1, 2, 3, 4]"
print nums[:-1] # Slice indices can be negative; prints ["0, 1, 2, 3]"
nums[2:4] = [8, 9] # Assign a new sublist to a slice
print nums # Prints "[0, 1, 8, 8, 4]"
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print animal
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print '#%d: %s' % (idx + 1, animal)
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print squares
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print squares
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print even_squares
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print d['cat'] # Get an entry from a dictionary; prints "cute"
print 'cat' in d # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print d['fish'] # Prints "wet"
print d['monkey'] # KeyError: 'monkey' not a key of d
print d.get('monkey', 'N/A') # Get an element with a default; prints "N/A"
print d.get('fish', 'N/A') # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print d.get('fish', 'N/A') # "fish" is no longer a key; prints "N/A"
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print 'A %s has %d legs' % (animal, legs)
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.iteritems():
print 'A %s has %d legs' % (animal, legs)
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print even_num_to_square
animals = {'cat', 'dog'}
print 'cat' in animals # Check if an element is in a set; prints "True"
print 'fish' in animals # prints "False"
animals.add('fish') # Add an element to a set
print 'fish' in animals
print len(animals) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print len(animals)
animals.remove('cat') # Remove an element from a set
print len(animals)
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print '#%d: %s' % (idx + 1, animal)
# Prints "#1: fish", "#2: dog", "#3: cat"
from math import sqrt
print {int(sqrt(x)) for x in range(30)}
d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print type(t)
print d[t]
print d[(1, 2)]
t[0] = 1
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print sign(x)
def hello(name, loud=False):
if loud:
print 'HELLO, %s' % name.upper()
else:
print 'Hello, %s!' % name
hello('Bob')
hello('Fred', loud=True)
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print 'HELLO, %s!' % self.name.upper()
else:
print 'Hello, %s' % self.name
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method; prints "Hello, Fred"
g.greet(loud=True) # Call an instance method; prints "HELLO, FRED!"
import numpy as np
a = np.array([1, 2, 3]) # Create a rank 1 array
print type(a), a.shape, a[0], a[1], a[2]
a[0] = 5 # Change an element of the array
print a
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
print b
print b.shape
print b[0, 0], b[0, 1], b[1, 0]
a = np.zeros((2,2)) # Create an array of all zeros
print a
b = np.ones((1,2)) # Create an array of all ones
print b
c = np.full((2,2), 7) # Create a constant array
print c
d = np.eye(2) # Create a 2x2 identity matrix
print d
e = np.random.random((2,2)) # Create an array filled with random values
print e
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print b
print a[0, 1]
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print a[0, 1]
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print a
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print row_r1, row_r1.shape
print row_r2, row_r2.shape
print row_r3, row_r3.shape
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print col_r1, col_r1.shape
print
print col_r2, col_r2.shape
a = np.array([[1,2], [3, 4], [5, 6]])
print a
# An example of integer array indexing.
# The returned array will have shape (3,) and
print a[[0, 1, 2], [0, 1, 0]]
# The above example of integer array indexing is equivalent to this:
print np.array([a[0, 0], a[1, 1], a[2, 0]])
# When using integer array indexing, you can reuse the same
# element from the source array:
print a[[0, 0], [1, 1]]
# Equivalent to the previous integer array indexing example
print np.array([a[0, 1], a[0, 1]])
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print a
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print a[np.arange(4), b] # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print a
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print bool_idx
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print a[bool_idx]
# We can do all of the above in a single concise statement:
print a[a > 2]
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
print x.dtype, y.dtype, z.dtype
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print x + y
print np.add(x, y)
# Elementwise difference; both produce the array
print x - y
print np.subtract(x, y)
# Elementwise product; both produce the array
print x * y
print np.multiply(x, y)
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print x / y
print np.divide(x, y)
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print np.sqrt(x)
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print v.dot(w)
print np.dot(v, w)
# Matrix / vector product; both produce the rank 1 array [29 67]
print x.dot(v)
print np.dot(x, v)
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print x.dot(y)
print np.dot(x, y)
x = np.array([[1,2],[3,4]])
print np.sum(x) # Compute sum of all elements; prints "10"
print np.sum(x, axis=0) # Compute sum of each column; prints "[4 6]"
print np.sum(x, axis=1) # Compute sum of each row; prints "[3 7]"
print x
print x.T
v = np.array([[1,2,3]])
print v
print v.T
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
print y
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print vv # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print y
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print y
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
print np.reshape(v, (3, 1)) * w
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
print x + v
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
print (x.T + w).T
# Another solution is to reshape w to be a row vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print x + np.reshape(w, (2, 1))
# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
print x * 2
import matplotlib.pyplot as plt
%matplotlib inline
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
y_sin = np.sin(x)  # define y_sin so that both curves below exist
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python versions
Step2: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Step3: Now let's look at the operations
Step4: Strings
Step5: String objects have a bunch of useful methods; for example
Step6: You can find a list of all string methods in the documentation.
Step7: As usual, you can find all the gory details about lists in the documentation.
Step8: Loops
Step9: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step10: List comprehensions
Step11: You can make this code simpler using a list comprehension
Step12: List comprehensions can also contain conditions
Step13: Dictionaries
Step14: You can find all you need to know about dictionaries in the documentation.
Step15: If you want access to keys and their corresponding values, use the iteritems method
Step16: Dictionary comprehensions
Step17: Sets
Step18: Loops
Step19: Set comprehensions
Step20: Tuples
Step21: Functions
Step22: We will often define functions to take optional keyword arguments, like this
Step23: Classes
Step24: Numpy
Step25: Arrays
Step26: Numpy also provides many functions to create arrays
Step27: Array indexing
Step28: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step29: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing
Step30: Two ways of accessing the data in the middle row of the array.
Step31: Integer array indexing
Step32: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix
Step33: Boolean array indexing
Step34: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Step35: You can read all about numpy datatypes in the documentation.
Step36: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step37: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step38: You can find the full list of mathematical functions provided by numpy in the documentation.
Step39: Broadcasting
Step40: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step41: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting
Step42: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Step43: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible; a short sketch of the underlying shape rules follows this list.
Step44: By running this special iPython command, we will be displaying plots inline
Step45: Plotting
Step46: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step47: Subplots
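A minimal sketch of the broadcasting shape rules referred to in Steps 40-43: shapes are compared from the trailing dimension backwards, and each pair of dimensions must either be equal or one of them must be 1 (missing dimensions count as 1).

a = np.ones((4, 3))
b = np.arange(3)                  # shape (3,)   -> stretched to (4, 3)
c = np.arange(4).reshape(4, 1)    # shape (4, 1) -> stretched to (4, 3)
print a + b
print a + c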
|
14,696 | <ASSISTANT_TASK:>
Python Code:
from scipy import sparse
sa = sparse.random(10, 10, density = 0.01, format = 'lil')
result = (sa.count_nonzero()==0)
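# `result` is True only when the matrix holds no non-zero values; with density=0.01 on a
# 10x10 matrix a single entry is sampled, so this is normally False. An equivalent check
# via the stored-entries counter would be: result = (sa.nnz == 0)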
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,697 | <ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
            'http://mattmahoney.net/dc/text8.zip',  # text8 corpus from Matt Mahoney's site
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
{k: vocab_to_int[k] for k in list(vocab_to_int.keys())[:30]}
"{:,}".format(len(int_words))
from collections import Counter
t = 1e-5
word_counts = Counter(int_words)
amount_of_total_words = len(int_words)
def subsampling_probability(threshold, current_word_count):
word_relative_frequency = current_word_count / amount_of_total_words
return 1 - np.sqrt(threshold / word_relative_frequency)
probability_per_word = { current_word: subsampling_probability(t, current_word_count) for current_word, current_word_count in word_counts.items() }
train_words = [ i for i in int_words if np.random.random() > probability_per_word[i] ]
print("Words dropped: {:,}, final size: {:,}".format(len(int_words) - len(train_words), len(train_words)))
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
r = np.random.randint(1, window_size + 1)
min_index = max(idx - r, 0)
max_index = idx + r
words_in_batch = words[min_index:idx] + words[idx + 1:max_index + 1] # avoid returning the current word on idx
return list(set(words_in_batch)) # avoid duplicates
get_target([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 4, 5)
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, (None))
labels = tf.placeholder(tf.int32, (None, None))
n_vocab = len(int_to_vocab)
n_embedding = 400
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
import random
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Step3: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Step5: Making batches
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
Step8: Embedding
Step9: Negative sampling
Step10: Validation
Step11: Training
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
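Besides the t-SNE plot, a minimal sketch (assuming the embed_mat, vocab_to_int and int_to_vocab objects built above) for querying a word's nearest neighbours by cosine similarity:

def nearest_words(word, k=8):
    # normalise rows, then rank every vocabulary word by dot product with the query vector
    mat = embed_mat / np.linalg.norm(embed_mat, axis=1, keepdims=True)
    sims = mat.dot(mat[vocab_to_int[word]])
    return [int_to_vocab[i] for i in np.argsort(-sims)[1:k + 1]]

print(nearest_words('queen'))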
|
14,698 | <ASSISTANT_TASK:>
Python Code:
sig_train_modes_names = [11114001, 11296013, 11874042, 12103035, 13246001, 13264021]
bck_train_mode_name = 30000000
sig_train_files = ['mod_{}.csv'.format(name) for name in sig_train_modes_names]
bck_train_files = 'mod_30000000.csv'
folder = "datasets/prepared_hlt_body/"
# concat all signal data
if not os.path.exists(folder + 'signal_hlt2.csv'):
concat_files(folder, sig_train_files, os.path.join(folder , 'signal_hlt2.csv'))
signal_data = pandas.read_csv(os.path.join(folder , 'signal_hlt2.csv'), sep='\t')
bck_data = pandas.read_csv(os.path.join(folder , bck_train_files), sep='\t')
signal_data.columns
print 'Signal', statistic_length(signal_data)
print 'Bck', statistic_length(bck_data)
total_bck_events = statistic_length(bck_data)['Events'] + empty_events[bck_train_mode_name]
total_signal_events_by_mode = dict()
for mode in sig_train_modes_names:
total_signal_events_by_mode[mode] = statistic_length(signal_data[signal_data['mode'] == mode])['Events'] + empty_events[mode]
print 'Bck:', total_bck_events
'Signal:', total_signal_events_by_mode
variables = ["n", "mcor", "chi2", "eta", "fdchi2", "minpt", "nlt16", "ipchi2", "n1trk", "sumpt"]
# hlt2 nbody selection
signal_data = signal_data[(signal_data['pass_nbody'] == 1) & (signal_data['mcor'] <= 10e3)]
bck_data = bck_data[(bck_data['pass_nbody'] == 1) & (bck_data['mcor'] <= 10e3)]
print 'Signal', statistic_length(signal_data)
print 'Bck', statistic_length(bck_data)
total_signal_events_by_mode_presel = dict()
for mode in sig_train_modes_names:
total_signal_events_by_mode_presel[mode] = statistic_length(signal_data[signal_data['mode'] == mode])['Events']
total_bck_events_presel = statistic_length(bck_data)['Events']
print 'Bck:', total_bck_events_presel
'Signal:', total_signal_events_by_mode_presel
signal_data.head()
ds_train_signal, ds_train_bck, ds_test_signal, ds_test_bck = prepare_data(signal_data, bck_data, 'unique')
print 'Signal', statistic_length(ds_train_signal)
print 'Bck', statistic_length(ds_train_bck)
train = pandas.concat([ds_train_bck, ds_train_signal])
print 'Signal', statistic_length(ds_test_signal)
print 'Bck', statistic_length(ds_test_bck)
test = pandas.concat([ds_test_bck, ds_test_signal])
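# Events that failed the n-body preselection never reach the classifier; since the surviving
# sample is split roughly in half between train and test, half of those non-passing events are
# added to the test totals below so that efficiencies and rates are normalised to the full
# pre-selection statistics.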
total_test_bck_events = (total_bck_events - total_bck_events_presel) // 2 + statistic_length(ds_test_bck)['Events']
total_test_signal_events = dict()
for mode in sig_train_modes_names:
total_not_passed_signal = total_signal_events_by_mode[mode] - total_signal_events_by_mode_presel[mode]
total_test_signal_events[mode] = total_not_passed_signal // 2 + \
statistic_length(ds_test_signal[ds_test_signal['mode'] == mode])['Events']
print 'Bck total test events:', total_test_bck_events
'Signal total test events:', total_test_signal_events
import cPickle
if os.path.exists('models/prunned.pkl'):
with open('models/prunned.pkl', 'r') as file_pr:
estimators = cPickle.load(file_pr)
from rep_ef.estimators import MatrixNetSkyGridClassifier
ef_base = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',
connection='skygrid',
iterations=5000, sync=False)
ef_base.fit(train, train['signal'])
special_b = {
'n': [2.5, 3.5],
'mcor': [2000,3000,4000,5000,7500], # I want to remove splits too close the the B mass as I was looking in simulation and this could distort the mass peak (possibly)
'chi2': [1,2.5,5,7.5,10,100], # I also propose we add a cut to the pre-selection of chi2 < 1000. I don't want to put in splits at too small values here b/c these type of inputs are never modeled quite right in the simulation (they always look a bit more smeared in data).
'sumpt': [3000,4000,5000,6000,7500,9000,12e3,23e3,50e3], # I am happy with the MN splits here (these are almost "as is" from modify-6)
'eta': [2.5,3,3.75,4.25,4.5], # Close to MN.
'fdchi2': [33,125,350,780,1800,5000,10000], # I want to make the biggest split 10e3 because in the simulated events there is pretty much only BKGD above 40e3 but we don't want the BDT to learn to kill these as new particles would live here. Otherwise I took the MN splits and modified the first one (the first one is 5sigma now).
'minpt': [350,500,750,1500,3000,5000], # let's make 500 the 2nd split so that this lines up with the HLT1 SVs.
'nlt16': [0.5],
'ipchi2': [8,26,62,150,500,1000], # I also propose we add a cut of IP chi2 < 5000 as it's all background out there.
'n1trk': [0.5, 1.5, 2.5, 3.5]
}
ef_base_bbdt = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',
connection='skygrid',
iterations=5000, sync=False, intervals=special_b)
ef_base_bbdt.fit(train, train['signal'])
ef_base_bbdt5 = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',
connection='skygrid',
iterations=5000, sync=False, intervals=5)
ef_base_bbdt5.fit(train, train['signal'])
ef_base_bbdt6 = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',
connection='skygrid',
iterations=5000, sync=False, intervals=6)
ef_base_bbdt6.fit(train, train['signal'])
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport
report = ClassificationReport({'base': ef_base}, LabeledDataStorage(test, test['signal']))
report.roc()
%run pruning.py
new_trainlen = (len(train) // 8) * 8
trainX = train[ef_base.features][:new_trainlen].values
trainY = train['signal'][:new_trainlen].values
trainW = numpy.ones(len(trainY))
trainW[trainY == 0] *= sum(trainY) / sum(1 - trainY)
new_features, new_formula_mx, new_classifier = select_trees(trainX, trainY, sample_weight=trainW,
initial_classifier=ef_base,
iterations=100, n_candidates=100,
learning_rate=0.1, regularization=50.)
prunned = cPickle.loads(cPickle.dumps(ef_base))
prunned.formula_mx = new_formula_mx
def mode_scheme_fit(train, base, suf, model_file):
blending_parts = OrderedDict()
for n_ch, ch in enumerate(sig_train_modes_names):
temp = FoldingClassifier(base_estimator=base, random_state=11, features=variables, ipc_profile=PROFILE)
temp_data = train[(train['mode'] == ch) | (train['mode'] == bck_train_mode_name)]
temp.fit(temp_data, temp_data['signal'])
blending_parts['ch' + str(n_ch) + suf] = temp
import cPickle
with open(model_file, 'w') as f:
cPickle.dump(blending_parts, f)
def mode_scheme_predict(data, suf, model_file, mode='train'):
with open(model_file, 'r') as f:
blending_parts = cPickle.load(f)
for n_ch, ch in enumerate(sig_train_modes_names):
temp_name = 'ch' + str(n_ch) + suf
if mode == 'train':
temp_key = ((data['mode'] == ch) | (data['mode'] == bck_train_mode_name))
data.ix[temp_key, temp_name] = blending_parts[temp_name].predict_proba(
data[temp_key])[:, 1]
data.ix[~temp_key, temp_name] = blending_parts[temp_name].predict_proba(
data[~temp_key])[:, 1]
else:
data[temp_name] = blending_parts[temp_name].predict_proba(data)[:, 1]
def get_best_svr_by_channel(data, feature_mask, count=1):
add_events = []
for id_est, channel in enumerate(sig_train_modes_names):
train_part = data[(data['mode'] == channel)]
for num, group in train_part.groupby('unique'):
index = numpy.argsort(group[feature_mask.format(id_est)].values)[::-1]
add_events.append(group.iloc[index[:count], :])
good_events = pandas.concat([data[(data['mode'] == bck_train_mode_name)]] + add_events)
print len(good_events)
return good_events
from sklearn.ensemble import RandomForestClassifier
from rep.metaml import FoldingClassifier
base = RandomForestClassifier(n_estimators=500, min_samples_leaf=50, max_depth=6,
max_features=7, n_jobs=8)
mode_scheme_fit(train, base, '', 'forest_trick.pkl')
mode_scheme_predict(train, '', 'forest_trick.pkl')
mode_scheme_predict(test, '', 'forest_trick.pkl', mode='test')
good_events = get_best_svr_by_channel(train, 'ch{}', 2)
forest_mn = MatrixNetSkyGridClassifier(train_features=variables,
user_name='antares',
connection='skygrid',
iterations=5000, sync=False)
forest_mn.fit(good_events, good_events['signal'])
forest_mn_bbdt = MatrixNetSkyGridClassifier(train_features=variables,
user_name='antares',
connection='skygrid',
iterations=5000, sync=False, intervals=special_b)
forest_mn_bbdt.fit(good_events, good_events['signal'])
new_trainlen = (len(good_events) // 8) * 8
trainX = good_events[forest_mn.features][:new_trainlen].values
trainY = good_events['signal'][:new_trainlen].values
trainW = numpy.ones(len(trainY))
trainW[trainY == 0] *= sum(trainY) / sum(1 - trainY)
len(train), len(good_events)
new_features_f, new_formula_mx_f, new_classifier_f = select_trees(trainX, trainY, sample_weight=trainW,
initial_classifier=forest_mn,
iterations=100, n_candidates=100,
learning_rate=0.1, regularization=50.)
prunned_f = cPickle.loads(cPickle.dumps(forest_mn))
prunned_f.formula_mx = new_formula_mx_f
estimators = {'base MN': ef_base, 'BBDT MN-6': ef_base_bbdt6, 'BBDT MN-5': ef_base_bbdt5,
'BBDT MN special': ef_base_bbdt,
'Prunned MN': prunned, 'base MN + forest': forest_mn,
'BBDT MN special + forest': forest_mn_bbdt, 'Prunned MN + forest': prunned_f}
import cPickle
with open('models/prunned.pkl', 'w') as file_pr:
cPickle.dump(estimators, file_pr)
thresholds = dict()
test_bck = test[test['signal'] == 0]
RATE = [2500., 4000.]
events_pass = dict()
for name, cl in estimators.items():
prob = cl.predict_proba(test_bck)
thr, result = calculate_thresholds(test_bck, prob, total_test_bck_events, rates=RATE)
for rate, val in result.items():
events_pass['{}-{}'.format(rate, name)] = val[1]
thresholds[name] = thr
print name, result
train_modes_eff, statistic = result_statistic(estimators, sig_train_modes_names,
test[test['signal'] == 1],
thresholds, RATE, total_test_signal_events)
from rep.plotting import BarComparePlot
xticks_labels = ['$B^0 \\to K^*\mu^+\mu^-$', "$B^0 \\to D^+D^-$", "$B^0 \\to D^- \mu^+ \\nu_{\mu}$",
'$B^+ \\to \pi^+ K^-K^+$', '$B^0_s \\to \psi(1S) K^+K^-\pi^+\pi^-$', '$B^0_s \\to D_s^-\pi^+$']
for r in RATE:
new_dict = []
for key, val in train_modes_eff.iteritems():
if (key[0] in {'base MN', 'Prunned MN', 'BBDT MN special',
'base MN + forest', 'Prunned MN + forest', 'BBDT MN special + forest'}) and r == key[1]:
new_dict.append((key, val))
new_dict = dict(new_dict)
BarComparePlot(new_dict).plot(new_plot=True, figsize=(24, 8), ylabel='efficiency', fontsize=22)
xticks(3 + 11 * numpy.arange(6), xticks_labels, rotation=0)
lgd = legend(bbox_to_anchor=(0.5, 1.3), loc='upper center', ncol=2, fontsize=22)
# plt.savefig('hlt2-experiments.pdf' , format='pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')
from rep.plotting import BarComparePlot
for r in RATE:
new_dict = []
for key, val in train_modes_eff.iteritems():
if r == key[1]:
new_dict.append((key, val))
new_dict = dict(new_dict)
BarComparePlot(new_dict).plot(new_plot=True, figsize=(24, 8), ylabel='efficiency', fontsize=22)
lgd = legend(bbox_to_anchor=(0.5, 1.3), loc='upper center', ncol=2, fontsize=22)
# plt.savefig('hlt2-experiments.pdf' , format='pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')
plots = OrderedDict()
for key, value in estimators.items():
plots[key] = plot_roc_events(value, test[test['signal'] == 1], test[test['signal'] == 0], key)
bbdt_plots = plots.copy()
bbdt_plots.pop('Prunned MN')
bbdt_plots.pop('Prunned MN + forest')
from rep.plotting import FunctionsPlot
FunctionsPlot(bbdt_plots).plot(new_plot=True, xlim=(0.02, 0.06), ylim=(0.65, 0.82))
plot([1. * events_pass['2500.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2,
[0., 1], 'b--', label='rate: 2.5 kHz')
plot([1. * events_pass['4000.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2,
[0., 1], 'g--', label='rate: 4. kHz')
lgd = legend(loc='upper center', fontsize=16, bbox_to_anchor=(0.5, 1.3), ncol=3)
title('ROC for events (training decays)', fontsize=20)
xlabel('FPR, background events efficiency', fontsize=20)
ylabel('TPR, signal events efficiency', fontsize=20)
from rep.plotting import FunctionsPlot
FunctionsPlot(plots).plot(new_plot=True, xlim=(0.02, 0.06), ylim=(0.65, 0.82))
plot([1. * events_pass['2500.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2,
[0., 1], 'b--', label='rate: 2.5 kHz')
plot([1. * events_pass['4000.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2,
[0., 1], 'g--', label='rate: 4. kHz')
lgd = legend(loc='upper center', fontsize=16, bbox_to_anchor=(0.5, 1.4), ncol=3)
title('ROC for events (training decays)', fontsize=20)
xlabel('FPR, background events efficiency', fontsize=20)
ylabel('TPR, signal events efficiency', fontsize=20)
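# Efficiencies for the decay channels not used in training: load each remaining mode, apply the preselection cuts (pass_nbody, mcor), and evaluate every model at the thresholds found above.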
from collections import defaultdict
all_channels = []
efficiencies = defaultdict(OrderedDict)
for mode in empty_events.keys():
if mode in set(sig_train_modes_names) or mode == bck_train_mode_name:
continue
df = pandas.read_csv(os.path.join(folder , 'mod_{}.csv'.format(mode)), sep='\t')
if len(df) <= 0:
continue
total_events = statistic_length(df)['Events'] + empty_events[mode]
df = df[(df['pass_nbody'] == 1) & (df['mcor'] <= 10e3)]
passed_events = statistic_length(df)['Events']
all_channels.append(df)
for name, cl in estimators.items():
prob = cl.predict_proba(df)
for rate, thresh in thresholds[name].items():
eff = final_eff_for_mode(df, prob, total_events, thresh)
latex_name = '$' + Samples[str(mode)]['root'].replace("#", "\\") + '$'
efficiencies[(name, rate)][latex_name] = eff
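# Drop channels with efficiency at or below 10% before plotting the comparison.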
for key, val in efficiencies.items():
for key_2, val_2 in val.items():
if val_2 <= 0.1:
efficiencies[key].pop(key_2)
from rep.plotting import BarComparePlot
for r in RATE:
new_dict = []
for key, val in efficiencies.iteritems():
if r == key[1]:
new_dict.append((key, val))
new_dict = dict(new_dict)
BarComparePlot(new_dict).plot(new_plot=True, figsize=(24, 8), ylabel='efficiency', fontsize=22)
lgd = legend(bbox_to_anchor=(0.5, 1.4), loc='upper center', ncol=2, fontsize=22)
plots_all = OrderedDict()
for key, value in estimators.items():
plots_all[key] = plot_roc_events(value, pandas.concat([test[test['signal'] == 1]] + all_channels),
test[test['signal'] == 0], key)
from rep.plotting import FunctionsPlot
FunctionsPlot(plots_all).plot(new_plot=True, xlim=(0.02, 0.06), ylim=(0.5, 0.66))
plot([1. * events_pass['2500.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2,
[0., 1], 'b--', label='rate: 2.5 kHz')
plot([1. * events_pass['4000.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2,
[0., 1], 'g--', label='rate: 4. kHz')
lgd = legend(loc='upper center', fontsize=16, bbox_to_anchor=(0.5, 1.3), ncol=4)
title('ROC for events (all decays together)', fontsize=20)
xlabel('FPR, background events efficiency', fontsize=20)
ylabel('TPR, signal events efficiency', fontsize=20)
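# Rate scan: recompute thresholds for output rates between 2 and 4 kHz; the mode efficiencies below are shown for the base MatrixNet.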
thresholds = OrderedDict()
RATE = [2000., 2500., 3000., 3500., 4000.]
for name, cl in estimators.items():
prob = cl.predict_proba(ds_test_bck)
thr, result = calculate_thresholds(ds_test_bck, prob, total_test_bck_events, rates=RATE)
thresholds[name] = thr
print name, result
train_modes_eff, statistic = result_statistic({'base MN': estimators['base MN']}, sig_train_modes_names,
test[test['signal'] == 1],
thresholds, RATE, total_test_signal_events)
order_rate = OrderedDict()
for j in numpy.argsort([i[1] for i in train_modes_eff.keys()]):
order_rate[train_modes_eff.keys()[j]] = train_modes_eff.values()[j]
from rep.plotting import BarComparePlot
BarComparePlot(order_rate).plot(new_plot=True, figsize=(18, 6), ylabel='efficiency', fontsize=18)
lgd = legend(bbox_to_anchor=(0.5, 1.2), loc='upper center', ncol=5, fontsize=18)
# plt.savefig('rates.pdf' , format='pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Counting events and SVRs
Step2: events distribution by mode
Step3: Define variables
Step4: Counting events and SVRs
Step5: events distribution by mode
Step6: Prepare train/test splitting
Step7: train
Step8: test
Step9: Define all total events in test samples
Step10: Matrixnet training
Step11: Base model with 5000 trees
Step12: Base BBDT model
Step13: BBDT-5, 6
Step14: Pruning
Step15: Minimize log_loss
Step16: Training sample is cut to a multiple of 8
Step17: Calculate thresholds on classifiers
Step18: Final efficiencies for each mode
Step19: Classification report using events
Step20: all channels efficiencies
Step21: Different rates
|
14,699 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import math
import re
from scipy.sparse import csr_matrix
import matplotlib.pyplot as plt
import seaborn as sns
from surprise import Reader, Dataset, SVD, evaluate
sns.set_style("darkgrid")
# Skip date
df1 = pd.read_csv('../input/combined_data_1.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
df1['Rating'] = df1['Rating'].astype(float)
print('Dataset 1 shape: {}'.format(df1.shape))
print('-Dataset examples-')
print(df1.iloc[::5000000, :])
#df2 = pd.read_csv('../input/combined_data_2.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
#df3 = pd.read_csv('../input/combined_data_3.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
#df4 = pd.read_csv('../input/combined_data_4.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
#df2['Rating'] = df2['Rating'].astype(float)
#df3['Rating'] = df3['Rating'].astype(float)
#df4['Rating'] = df4['Rating'].astype(float)
#print('Dataset 2 shape: {}'.format(df2.shape))
#print('Dataset 3 shape: {}'.format(df3.shape))
#print('Dataset 4 shape: {}'.format(df4.shape))
# load less data for speed
df = df1
#df = df1.append(df2)
#df = df.append(df3)
#df = df.append(df4)
df.index = np.arange(0,len(df))
print('Full dataset shape: {}'.format(df.shape))
print('-Dataset examples-')
print(df.iloc[::5000000, :])
p = df.groupby('Rating')['Rating'].agg(['count'])
# get movie count
movie_count = df.isnull().sum()[1]
# get customer count
cust_count = df['Cust_Id'].nunique() - movie_count
# get rating count
rating_count = df['Cust_Id'].count() - movie_count
ax = p.plot(kind = 'barh', legend = False, figsize = (15,10))
plt.title('Total pool: {:,} Movies, {:,} customers, {:,} ratings given'.format(movie_count, cust_count, rating_count), fontsize=20)
plt.axis('off')
for i in range(1,6):
ax.text(p.iloc[i-1][0]/4, i-1, 'Rating {}: {:.0f}%'.format(i, p.iloc[i-1][0]*100 / p.sum()[0]), color = 'white', weight = 'bold')
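# In the combined_data files a movie-id header row precedes its ratings and carries no rating; use those NaN rows to assign a Movie_Id to every rating record below.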
df_nan = pd.DataFrame(pd.isnull(df.Rating))
df_nan = df_nan[df_nan['Rating'] == True]
df_nan = df_nan.reset_index()
movie_np = []
movie_id = 1
for i,j in zip(df_nan['index'][1:],df_nan['index'][:-1]):
# numpy approach
temp = np.full((1,i-j-1), movie_id)
movie_np = np.append(movie_np, temp)
movie_id += 1
# Account for last record and corresponding length
# numpy approach
last_record = np.full((1,len(df) - df_nan.iloc[-1, 0] - 1),movie_id)
movie_np = np.append(movie_np, last_record)
print('Movie numpy: {}'.format(movie_np))
print('Length: {}'.format(len(movie_np)))
# remove those Movie ID rows
df = df[pd.notnull(df['Rating'])]
df['Movie_Id'] = movie_np.astype(int)
df['Cust_Id'] = df['Cust_Id'].astype(int)
print('-Dataset examples-')
print(df.iloc[::5000000, :])
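# Trim the data: keep only movies and customers whose review counts are above the 80th-percentile benchmark.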
f = ['count','mean']
df_movie_summary = df.groupby('Movie_Id')['Rating'].agg(f)
df_movie_summary.index = df_movie_summary.index.map(int)
movie_benchmark = round(df_movie_summary['count'].quantile(0.8),0)
drop_movie_list = df_movie_summary[df_movie_summary['count'] < movie_benchmark].index
print('Movie minimum times of review: {}'.format(movie_benchmark))
df_cust_summary = df.groupby('Cust_Id')['Rating'].agg(f)
df_cust_summary.index = df_cust_summary.index.map(int)
cust_benchmark = round(df_cust_summary['count'].quantile(0.8),0)
drop_cust_list = df_cust_summary[df_cust_summary['count'] < cust_benchmark].index
print('Customer minimum times of review: {}'.format(cust_benchmark))
print('Original Shape: {}'.format(df.shape))
df = df[~df['Movie_Id'].isin(drop_movie_list)]
df = df[~df['Cust_Id'].isin(drop_cust_list)]
print('After Trim Shape: {}'.format(df.shape))
print('-Data Examples-')
print(df.iloc[::5000000, :])
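# Pivot into a customer x movie ratings matrix (NaN where unrated); the Pearson-correlation recommender below works on its columns.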
df_p = pd.pivot_table(df,values='Rating',index='Cust_Id',columns='Movie_Id')
print(df_p.shape)
# Below is another way I used to sparse the dataframe...doesn't seem to work better
#Cust_Id_u = list(sorted(df['Cust_Id'].unique()))
#Movie_Id_u = list(sorted(df['Movie_Id'].unique()))
#data = df['Rating'].tolist()
#row = df['Cust_Id'].astype('category', categories=Cust_Id_u).cat.codes
#col = df['Movie_Id'].astype('category', categories=Movie_Id_u).cat.codes
#sparse_matrix = csr_matrix((data, (row, col)), shape=(len(Cust_Id_u), len(Movie_Id_u)))
#df_p = pd.DataFrame(sparse_matrix.todense(), index=Cust_Id_u, columns=Movie_Id_u)
#df_p = df_p.replace(0, np.NaN)
df_title = pd.read_csv('../input/movie_titles.csv', encoding = "ISO-8859-1", header = None, names = ['Movie_Id', 'Year', 'Name'])
df_title.set_index('Movie_Id', inplace = True)
print (df_title.head(10))
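# Collaborative filtering with scikit-surprise: SVD matrix factorization, cross-validated on a 100K-row subsample for speed.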
reader = Reader()
# get just top 100K rows for faster run time
data = Dataset.load_from_df(df[['Cust_Id', 'Movie_Id', 'Rating']][:100000], reader)
data.split(n_folds=3)
svd = SVD()
evaluate(svd, data, measures=['RMSE', 'MAE'])
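# Note: data.split()/evaluate() belong to the older surprise API; newer releases replace them with model_selection.cross_validate.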
df_785314 = df[(df['Cust_Id'] == 785314) & (df['Rating'] == 5)]
df_785314 = df_785314.set_index('Movie_Id')
df_785314 = df_785314.join(df_title)['Name']
print(df_785314)
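# Train SVD on the full trimmed dataset, then score every remaining movie for user 785314 and rank by the predicted rating.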
user_785314 = df_title.copy()
user_785314 = user_785314.reset_index()
user_785314 = user_785314[~user_785314['Movie_Id'].isin(drop_movie_list)]
# getting full dataset
data = Dataset.load_from_df(df[['Cust_Id', 'Movie_Id', 'Rating']], reader)
trainset = data.build_full_trainset()
svd.train(trainset)
user_785314['Estimate_Score'] = user_785314['Movie_Id'].apply(lambda x: svd.predict(785314, x).est)
user_785314 = user_785314.drop('Movie_Id', axis = 1)
user_785314 = user_785314.sort_values('Estimate_Score', ascending=False)
print(user_785314.head(10))
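# Item-based alternative: recommend titles whose rating columns correlate most strongly (Pearson's R) with the chosen movie's column.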
def recommend(movie_title, min_count):
print("For movie ({})".format(movie_title))
print("- Top 10 movies recommended based on Pearsons'R correlation - ")
i = int(df_title.index[df_title['Name'] == movie_title][0])
target = df_p[i]
similar_to_target = df_p.corrwith(target)
corr_target = pd.DataFrame(similar_to_target, columns = ['PearsonR'])
corr_target.dropna(inplace = True)
corr_target = corr_target.sort_values('PearsonR', ascending = False)
corr_target.index = corr_target.index.map(int)
corr_target = corr_target.join(df_title).join(df_movie_summary)[['PearsonR', 'Name', 'count', 'mean']]
print(corr_target[corr_target['count']>min_count][:10].to_string(index=False))
recommend("What the #$*! Do We Know!?", 0)
recommend("X2: X-Men United", 0)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next let's load the first data file and get a feeling for how huge the dataset is
Step2: Let's try to load the 3 remaining datasets as well
Step3: Now we combine the datasets
Step4: Data viewing
Step5: We can see that ratings tend to be relatively positive (>3). This may be because unhappy customers tend to just leave instead of making the effort to rate. We should keep this in mind - low-rated movies are generally really bad
Step6: Data slicing
Step7: Now let's trim down our data - what's the difference in data size?
Step8: Let's pivot the data set and put it into a giant matrix - we need it for our recommendation system
Step9: Data mapping
Step10: Recommendation models
Step11: Below is what user 785314 liked in the past
Step12: Let's predict which movies user 785314 would love to watch
Step13: Recommend with Pearson's R correlations
Step14: A recommendation for you if you like 'What the #$*! Do We Know!?'
Step15: X2
|