Unnamed: 0 | cleaned_code | cleaned_prompt |
---|---|---|
14,500 | <ASSISTANT_TASK:>
Python Code:
import rebound
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
def run_sim(a):
    sim = rebound.Simulation()
    sim.add(m=1.)
    sim.add(primary=sim.particles[0],m=1e-3, a=1)
    sim.add(primary=sim.particles[0],m=1e-3, a=a)
    sim.integrate(2.*np.pi*10.)
    return sim.particles[1].x
N=400
x_exact = np.zeros((N))
a_grid = np.linspace(1.4,1.7,N)
for i,a in enumerate(a_grid):
    x_exact[i] = run_sim(a)
def run_sim_var(a):
    sim = rebound.Simulation()
    sim.add(m=1.)
    sim.add(primary=sim.particles[0],m=1e-3, a=1)
    sim.add(primary=sim.particles[0],m=1e-3, a=a)
    var_da = sim.add_variation()
    var_dda = sim.add_variation(order=2, first_order=var_da)
    var_da.vary(2, "a")
    var_dda.vary(2, "a")
    sim.integrate(2.*np.pi*10.)
    return sim.particles[1].x, var_da.particles[1].x, var_dda.particles[1].x
a_0 = 1.56
x, dxda, ddxdda = run_sim_var(a_0)
x_1st_order = np.zeros(N)
x_2nd_order = np.zeros(N)
for i,a in enumerate(a_grid):
    x_1st_order[i] = x + (a-a_0)*dxda
    x_2nd_order[i] = x + (a-a_0)*dxda + 0.5*(a-a_0)*(a-a_0)*ddxdda
fig = plt.figure(figsize=(6,4))
ax = plt.subplot(111)
ax.set_xlim(a_grid[0],a_grid[-1])
ax.set_ylim(np.min(x_exact),np.max(x_exact)*1.01)
ax.set_xlabel("initial semi-major axis of outer planet")
ax.set_ylabel("$x$ position of inner planet after 10 orbits")
ax.plot(a_grid, x_exact, "-", color="black", lw=2)
ax.plot(a_grid, x_1st_order, "--", color="green")
ax.plot(a_grid, x_2nd_order, ":", color="blue")
ax.plot(a_0, x, "ro",ms=10);
plt.savefig('paper_test1.pdf',bbox_inches='tight'); # Save to file.
from ipywidgets import interact
def generate_plot(a_0=1.56):
    x, dxda, ddxdda = run_sim_var(a_0)
    x_1st_order = np.zeros(N)
    x_2nd_order = np.zeros(N)
    for i,a in enumerate(a_grid):
        x_1st_order[i] = x + (a-a_0)*dxda
        x_2nd_order[i] = x + (a-a_0)*dxda + 0.5*(a-a_0)*(a-a_0)*ddxdda
    fig = plt.figure(figsize=(6,4))
    ax = plt.subplot(111)
    ax.set_xlim(a_grid[0],a_grid[-1])
    ax.set_ylim(np.min(x_exact),np.max(x_exact)*1.01)
    ax.set_xlabel("initial semi-major axis of outer planet")
    ax.set_ylabel("$x$ position of inner planet after 10 orbits")
    ax.plot(a_grid, x_exact, "-", color="black", lw=2)
    ax.plot(a_grid, x_1st_order, "--", color="green")
    ax.plot(a_grid, x_2nd_order, ":", color="blue")
    ax.plot(a_0, x, "ro",ms=10)
    plt.show()
    return
interact(generate_plot,a_0=(1.4,1.7,0.01));
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We set up a planetary system with two Jupiter mass planets. The following function takes that system, integrates it forward in time by 10 orbits and returns the inner planet's $x$ coordinate at the end of the simulation. The $x$ coordinate changes because the planet orbits the star, but also because the planet interacts with the other planet. The function takes the outer planet's initial semi-major axis, $a$, as a parameter. We set up the system using heliocentric coordinates and therefore specify the primary attribute when adding particles to REBOUND (by default REBOUND uses Jacobi coordinates, which are not supported by variational equations).
Step2: We now run this simulation 400 times for different initial $a$ in the range [1.4, 1.7] and store the final $x$ coordinate of the inner planet in the array x_exact.
Step3: Next, we create a function that runs an $N$-body simulation including first and second order variational equations. For that we add two sets of variational particles with the add_variation() command (one for first order and one for second order). We then initialize the variational particles by varying the outer planet's semi-major axis. After integrating the system forward in time, the function returns the $x$ coordinate of the inner planet as well as the $x$ coordinates of the corresponding variational particles.
Step4: We run one simulation with variational particles at $a_0=1.56$. We then use the derivatives we got from the run_sim_var() function to approximate the final position of the inner planet as a function of the outer planet's initial semi-major axis using a Taylor series (written out after this list).
Step5: Finally, we plot the exact final position that we obtained from running a full $N$-body simulation as well as our approximation in a neighbourhood of $a_0$, which we got from the variational equations.
Step6: The following code produces an interactive version of this graph where one can change the initial semi-major axis $a_0$ and immediately see the new plot. It uses the ipywidgets tool interact. Move the slider and see how REBOUND accurately calculates the first and second derivatives using variational equations.
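For reference, the Taylor expansion evaluated in Step4 and Step5 is (a sketch of what the code above computes, with $x(a)$ denoting the inner planet's final $x$ coordinate as a function of the outer planet's initial semi-major axis):
$$x(a) \approx x(a_0) + (a-a_0)\,\frac{dx}{da}\bigg|_{a_0} + \frac{1}{2}(a-a_0)^2\,\frac{d^2x}{da^2}\bigg|_{a_0}$$
The first two terms give x_1st_order and all three terms give x_2nd_order in the code.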
|
14,501 | <ASSISTANT_TASK:>
Python Code:
# Import modules
import time
import math
import random
import numpy as np
import scipy
import sympy
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def linear_congruential_generator(x, a, b, m):
    x = (a * x + b) % m
    u = x / m
    return u, x, a, b, m
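# Note (added for clarity): each call advances the recurrence
#   x_{n+1} = (a * x_n + b) mod m  and returns u = x_{n+1} / m in [0, 1);
# with b = 0 and a prime modulus such as m = 31 below, the sequence repeats
# after at most m - 1 draws.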
x0 = 3
args = (x0, 13, 0, 31)
for i in range(10):
    u, *args = linear_congruential_generator(*args)
    print('idx_%02d x:%02d, u:%.4f' %(i + 1, args[0], u))
x = sympy.symbols('x')
exact_value = sympy.integrate(x ** 2, (x, 0, 1))
# Arguments for our LCG
x0 = 3
args = (x0, 13, 0, 31)
# Function and arguments for the curve y = x^2
f = lambda x : pow(x, 2)
# Process for this example
def process(f, args, total_iterations):
    avg = 0
    for i in range(total_iterations):
        u, *args = linear_congruential_generator(*args)
        avg += f(u)
    avg /= total_iterations
    return avg
print('exact value = %s (%.6f in numerical representations)' %(exact_value, exact_value.evalf()))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 10), 10, abs(process(f, args, 10) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 20), 20, abs(process(f, args, 20) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 30), 30, abs(process(f, args, 30) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 40), 40, abs(process(f, args, 40) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 50), 50, abs(process(f, args, 50) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 60), 60, abs(process(f, args, 60) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 70), 70, abs(process(f, args, 70) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 80), 80, abs(process(f, args, 80) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 90), 90, abs(process(f, args, 90) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 100), 100, abs(process(f, args, 100) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, args, 1000), 1000, abs(process(f, args, 1000) - exact_value.evalf())))
def stdrand(x):
    return linear_congruential_generator(x, pow(7, 5), 0, pow(2, 31) - 1)[:2]
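# Note (added for clarity): a = 7**5, b = 0, m = 2**31 - 1 are the classic
# Park-Miller "minimal standard" parameters; [:2] keeps only (u, x) from the
# tuple returned by linear_congruential_generator.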
# Function and arguments for the curve y = x^2
f = lambda x : pow(x, 2)
# Process for this example
def process(f, total_iterations):
    avg = 0
    x = 3
    for i in range(total_iterations):
        u, x = stdrand(x)
        avg += f(u)
    avg /= total_iterations
    return avg
print('exact value = %s (%.6f in numerical representations)' %(exact_value, exact_value.evalf()))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 10), 10, abs(process(f, 10) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 20), 20, abs(process(f, 20) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 30), 30, abs(process(f, 30) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 40), 40, abs(process(f, 40) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 50), 50, abs(process(f, 50) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 60), 60, abs(process(f, 60) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 70), 70, abs(process(f, 70) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 80), 80, abs(process(f, 80) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 90), 90, abs(process(f, 90) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 100), 100, abs(process(f, 100) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 200), 200, abs(process(f, 200) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 300), 300, abs(process(f, 300) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 2000), 2000, abs(process(f, 2000) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 10000), 10000, abs(process(f, 10000) - exact_value.evalf())))
print('average = %.6f with %3d uniform random numbers, error = %.6f' %(process(f, 100000), 100000, abs(process(f, 100000) - exact_value.evalf())))
# restrict : 0 <= (x, y) <= 1
# Arguments
x0 = 3
args = (x0, pow(7, 5), 0, pow(2, 31) - 1)
f = lambda x, y : 4 * pow(2 * x - 1, 4) + 8 * pow(2 * y - 1, 8) < 1 + 2 * pow(2 * y - 1, 3) * pow(3 * x - 2, 2)
# Process for this example
def process(f, args, total_iterations):
    hits = 0
    for i in range(total_iterations):
        ux, *args = linear_congruential_generator(*args)
        uy, *args = linear_congruential_generator(*args)
        hits += f(ux, uy)
    area = hits / total_iterations
    return area
print('area = %.6f with %3d uniform random numbers' %(process(f, args, 300000), 300000))
def randu(x):
    return linear_congruential_generator(x, pow(2, 16) + 3, 0, pow(2, 31))[:2]
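# Note (added for clarity): these are the parameters of IBM's infamous RANDU
# generator (a = 2**16 + 3, m = 2**31); consecutive triples (u1, u2, u3) fall
# on a small set of parallel planes, which the 3D scatter plot below reveals.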
# For matplotlib
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.view_init(azim=225)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_zlim(0, 1)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
# Arguments for randu
datax = np.array([])
datay = np.array([])
dataz = np.array([])
x = 3
total_iterations = 20000
# Process
for i in range(total_iterations):
    u1, x = randu(x)
    u2, x = randu(x)
    u3, x = randu(x)
    datax = np.append(datax, u1)
    datay = np.append(datay, u2)
    dataz = np.append(dataz, u3)
ax.scatter(datax, datay, dataz, zdir='z', s=2)
# For matplotlib
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.view_init(azim=225)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_zlim(0, 1)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
# Arguments for randu
datax = np.array([])
datay = np.array([])
dataz = np.array([])
x = 3
total_iterations = 20000
# Process
for i in range(total_iterations):
    u1, x = stdrand(x)
    u2, x = stdrand(x)
    u3, x = stdrand(x)
    datax = np.append(datax, u1)
    datay = np.append(datay, u2)
    dataz = np.append(dataz, u3)
ax.scatter(datax, datay, dataz, zdir='z', s=2)
datax = np.array([])
for i in range(10000):
    u1 = np.random.normal()
    datax = np.append(datax, u1)
plt.plot(datax)
def halton(p, n):
    b = np.zeros(math.ceil(math.log(n + 1) / math.log(p)))
    u = np.zeros(n)
    for j in range(n):
        i = 0
        b[0] = b[0] + 1
        while b[i] > p - 1 + np.finfo(float).eps:
            b[i] = 0
            i += 1
            b[i] += 1
        u[j] = 0
        for k in range(1, b.size + 1):
            u[j] = u[j] + b[k-1] * pow(p, -k)
    return u
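# Note (added for clarity): halton(p, n) returns the first n terms of the base-p
# van der Corput sequence; pairing two different prime bases (2 and 3 below)
# gives a 2D Halton sequence, a quasi-random point set that covers the unit
# square more evenly than pseudo-random pairs.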
# Example
print(halton(2, 8))
print(halton(3, 8))
pair_count = 2000
pr_xdata = np.array([])
pr_ydata = np.array([])
qr_xdata = np.array([])
qr_ydata = np.array([])
qrx_seq = halton(2, pair_count)
qry_seq = halton(3, pair_count)
x = time.time()
for idx in range(pair_count):
    ux, x = stdrand(x)
    uy, x = stdrand(x)
    pr_xdata = np.append(pr_xdata, ux)
    pr_ydata = np.append(pr_ydata, uy)
    qr_xdata = np.append(qr_xdata, qrx_seq[idx])
    qr_ydata = np.append(qr_ydata, qry_seq[idx])
plt.figure(1)
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.subplot(121)
plt.plot(pr_xdata, pr_ydata, 'o', markersize=1)
plt.subplot(122)
plt.plot(qr_xdata, qr_ydata, 'o', markersize=1)
t = 10
w = 0
for i in range(t):
    if random.random() > 0.5:
        w += 1
    else:
        w -= 1
def random_walk(n, interval):
    lowerbound = interval[0]
    upperbound = interval[1]
    top_exits = 0
    avg_esc_time = 0
    for _ in range(n):
        w = 0
        l = 0
        while(True):
            if random.random() > 0.5:
                w += 1
            else:
                w -= 1
            l += 1
            if w == lowerbound:
                pass
                break
            elif w == upperbound:
                top_exits += 1
                break
        avg_esc_time += l
    return top_exits, avg_esc_time / n
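# Reference values used below (standard results for a symmetric random walk
# started at 0 on the interval (-3, 6), not derived in the original notebook):
# P(exit through the top) = 3 / (3 + 6) = 1/3, and the expected escape time is
# 3 * 6 = 18 steps -- which is why the errors are measured against 1/3 and 18.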
interval = (-3, 6)
top_exit_100, _ = random_walk(100, interval)
top_exit_200, _ = random_walk(200, interval)
top_exit_400, _ = random_walk(400, interval)
top_exit_800, _ = random_walk(800, interval)
top_exit_1600, _ = random_walk(1600, interval)
top_exit_3200, _ = random_walk(3200, interval)
top_exit_6400, _ = random_walk(6400, interval)
top_exit_12800, _ = random_walk(12800, interval)
top_exit_25600, _ = random_walk(25600, interval)
output = lambda n, top_exit : print('n = %5d, top exits = %4d, prob = %f, error = %f' \
%(n, top_exit, top_exit / n, abs(1 / 3 - top_exit / n)))
output(100, top_exit_100)
output(200, top_exit_200)
output(400, top_exit_400)
output(800, top_exit_800)
output(1600, top_exit_1600)
output(3200, top_exit_3200)
output(6400, top_exit_6400)
output(12800, top_exit_12800)
output(25600, top_exit_25600)
interval = (-3, 6)
_, avg_esc_100 = random_walk(100, interval)
_, avg_esc_200 = random_walk(200, interval)
_, avg_esc_400 = random_walk(400, interval)
_, avg_esc_800 = random_walk(800, interval)
_, avg_esc_1600 = random_walk(1600, interval)
_, avg_esc_3200 = random_walk(3200, interval)
_, avg_esc_6400 = random_walk(6400, interval)
output = lambda n, avg_esc : print('n = %5d, average esc. time = %f, error = %f' \
%(n, avg_esc, abs(18 - avg_esc)))
output(100, avg_esc_100)
output(200, avg_esc_200)
output(400, avg_esc_400)
output(800, avg_esc_800)
output(1600, avg_esc_1600)
output(3200, avg_esc_3200)
output(6400, avg_esc_6400)
# brownian() implements one dimensional Brownian motion (i.e. the Wiener process).
# File: brownian.py
from math import sqrt
from scipy.stats import norm
import numpy as np
def brownian(x0, n, dt, delta, out=None):
    """
    Generate an instance of Brownian motion (i.e. the Wiener process):
        X(t) = X(0) + N(0, delta**2 * t; 0, t)
    where N(a,b; t0, t1) is a normally distributed random variable with mean a and
    variance b. The parameters t0 and t1 make explicit the statistical
    independence of N on different time intervals; that is, if [t0, t1) and
    [t2, t3) are disjoint intervals, then N(a, b; t0, t1) and N(a, b; t2, t3)
    are independent.
    Written as an iteration scheme,
        X(t + dt) = X(t) + N(0, delta**2 * dt; t, t+dt)
    If `x0` is an array (or array-like), each value in `x0` is treated as
    an initial condition, and the value returned is a numpy array with one
    more dimension than `x0`.
    Arguments
    ---------
    x0 : float or numpy array (or something that can be converted to a numpy array
        using numpy.asarray(x0)).
        The initial condition(s) (i.e. position(s)) of the Brownian motion.
    n : int
        The number of steps to take.
    dt : float
        The time step.
    delta : float
        delta determines the "speed" of the Brownian motion. The random variable
        of the position at time t, X(t), has a normal distribution whose mean is
        the position at time t=0 and whose variance is delta**2*t.
    out : numpy array or None
        If `out` is not None, it specifies the array in which to put the
        result. If `out` is None, a new numpy array is created and returned.
    Returns
    -------
    A numpy array of floats with shape `x0.shape + (n,)`.
    Note that the initial value `x0` is not included in the returned array.
    """
    x0 = np.asarray(x0)
    # For each element of x0, generate a sample of n numbers from a
    # normal distribution.
    r = norm.rvs(size=x0.shape + (n,), scale=delta*sqrt(dt))
    # If `out` was not given, create an output array.
    if out is None:
        out = np.empty(r.shape)
    # This computes the Brownian motion by forming the cumulative sum of
    # the random samples.
    np.cumsum(r, axis=-1, out=out)
    # Add the initial condition.
    out += np.expand_dims(x0, axis=-1)
    return out
N = 500
xlim = 2.0
# For SDE
sigma = 0.3
r = 1
y0 = 0
X = np.linspace(0, xlim, N)
# For Brownian motion
dt = 0.1
delta = 0.3
B1 = brownian(y0, N, dt, delta)
B2 = brownian(y0, N, dt, delta)
# Process
Y = y0 + r * X
Y1 = y0 + r * X + sigma * B1
Y2 = y0 + r * X + sigma * B2
plt.xlim(0, 2)
plt.plot(X, Y1)
plt.plot(X, Y2)
plt.plot(X, Y, color='black')
N = 500
xlim = 2.0
r = 0.1
sigma = 0.3
delta = 0.1
dt = 0.2
y0 = 1
X = np.linspace(0, xlim, N)
# For Brownian motion
B = brownian(0, N, dt, delta)
# Process
Y = y0 * np.exp((r - 0.5 * pow(sigma, 2)) * X + sigma * B)
plt.plot(X, Y)
plt.plot(X, B, linestyle = '--')
plt.grid(True)
dt = 0.01
xlimit = 2
y0 = 1
r = 0.1
sigma = 0.3
times = np.arange(0, xlimit + dt, dt)
dB = np.random.standard_normal(times.size) * np.sqrt(dt)
ws = np.empty(times.size)
ws[0] = y0
for i in range(times.size - 1):
    ws[i + 1] = ws[i] + r * ws[i] * dt + sigma * ws[i] * dB[i]
# Plot the chart
plt.plot(times, ws)
plt.axhline(y=0, color='black')
plt.axvline(x=0, color='black')
plt.grid(True, which='both')
dt = 0.1
xlimit = 100
y0 = 0
r = 10
sigma = 1
delta = 0.5
times = np.arange(0, xlimit + dt, dt)
dB = np.random.standard_normal(times.size) * np.sqrt(dt)
ws = np.empty(times.size)
ws[0] = y0
for i in range(times.size - 1):
    ws[i + 1] = ws[i] - r * ws[i] * dt + sigma * dB[i]
# For Brownian motion realization
BM = brownian(0, times.size, dt, delta)
# Plot the chart
plt.plot(times, ws, label='Langevin equation')
plt.plot(times, BM, label='Brownian motion')
plt.axhline(y = 0, color='black')
plt.axvline(x = 0, color='black')
plt.grid(True, which='both')
plt.legend()
dt = 0.1
xlimit = 4
y0 = 1e-2
r = 0.1
sigma = 0.3
times = np.arange(0, xlimit + dt, dt)
dB = np.random.standard_normal(times.size) * np.sqrt(dt)
ws = np.empty(times.size)
ws[0] = y0 # For Euler-Maruyama Method
wms = np.empty(times.size)
wms[0] = y0 # For Milstein Method
for i in range(times.size - 1):
    # Euler-Maruyama
    ws[i + 1] = ws[i] + r * ws[i] * dt \
        + sigma * ws[i] * dB[i]
    # Milstein
    wms[i + 1] = wms[i] + r * wms[i] * dt \
        + sigma * wms[i] * dB[i] \
        + 0.5 * pow(sigma, 2) * wms[i] * (pow(dB[i], 2) - dt)
# Calculate y(T)
tmp = dB
tmp[-1] = 0
B = np.cumsum(np.roll(tmp, 1))
f = lambda y0, sigma, t, B : y0 * np.exp((r - 0.5 * np.power(sigma, 2)) * t + sigma * B)
Y = f(y0, sigma, times, B)
# Plot the chart
plt.plot(times, ws, label='w(t) by Euler-Maruyama Method')
plt.plot(times, wms, label='w(t) by Milstein Method')
plt.plot(times, Y, label='Y(T)')
plt.grid(True, which='both')
plt.legend()
plt.show()
# Plot the chart
plt.ylabel('|y(T)-w(T)|')
plt.plot(times, np.abs(Y - ws), label='Euler-Maruyama Method')
plt.plot(times, np.abs(Y - wms), label='Milstein Method')
plt.grid(True, which='both')
plt.legend()
plt.show()
dts = np.array([
pow(2, -1), pow(2, -2), pow(2, -3), pow(2, -4), pow(2, -5),
pow(2, -6), pow(2, -7), pow(2, -8), pow(2, -9), pow(2, -10)
])
errs_em = np.empty(dts.size)
errs_m = np.empty(dts.size)
xlimit = 4
y0 = 1e-2
r = 0.1
sigma = 0.3
# For each dt
for i in range(dts.size):
    dt = dts[i]
    times = np.arange(0, xlimit + dt, dt)
    dB = np.random.standard_normal(times.size) * np.sqrt(dt)
    ws = np.empty(times.size)
    ws[0] = y0 # For Euler-Maruyama Method
    wms = np.empty(times.size)
    wms[0] = y0 # For Milstein Method
    for j in range(times.size - 1):
        # Euler-Maruyama
        ws[j + 1] = ws[j] + r * ws[j] * dt \
            + sigma * ws[j] * dB[j]
        # Milstein
        wms[j + 1] = wms[j] + r * wms[j] * dt \
            + sigma * wms[j] * dB[j] \
            + 0.5 * pow(sigma, 2) * wms[j] * (pow(dB[j], 2) - dt)
    # Calculate y(T)
    tmp = dB
    tmp[-1] = 0
    B = np.cumsum(np.roll(tmp, 1))
    f = lambda y0, sigma, t, B : y0 * np.exp((r - 0.5 * np.power(sigma, 2)) * t + sigma * B)
    Y = f(y0, sigma, times, B)
    errs_em[i] = abs(Y[-1] - ws[-1])
    errs_m[i] = abs(Y[-1] - wms[-1])
# Plot the chart
fig, ax = plt.subplots()
plt.xlabel('dt')
plt.ylabel('|y(T)-w(T)|')
xi = np.arange(dts.size)
plt.xticks(xi, dts)
plt.plot(xi, errs_em, label='Euler-Maruyama Method')
plt.plot(xi, errs_m, label='Milstein Method')
plt.grid(True, which='both')
plt.legend()
fig.autofmt_xdate()
plt.show()
dt = 0.1
xlimit = 4
y0 = 2
times = np.arange(0, xlimit + dt, dt)
dB = np.random.standard_normal(times.size) * np.sqrt(dt)
ws_em = np.empty(times.size)
ws_em[0] = y0 # For Euler-Maruyama Method
ws_m = np.empty(times.size)
ws_m[0] = y0 # For Milstein Method
ws_rk = np.empty(times.size)
ws_rk[0] = y0 # For First-Order Stochastic Runge-Kutta Method
for i in range(times.size - 1):
    # Euler-Maruyama Method
    ws_em[i + 1] = ws_em[i] - 2 * np.exp(-2 * ws_em[i]) * dt + 2 * np.exp(-ws_em[i]) * dB[i]
    # Milstein Method
    ws_m[i + 1] = ws_m[i] - 2 * np.exp(-2 * ws_m[i]) * dt + 2 * np.exp(-ws_m[i]) * dB[i] - \
        2 * np.exp(-2 * ws_m[i]) * (np.power(dB[i], 2) - dt)
    # First-Order Stochastic Runge-Kutta Method
    ws_rk[i + 1] = ws_rk[i] - 2 * np.exp(-2 * ws_rk[i]) * dt + 2 * np.exp(-ws_rk[i]) * dB[i] + \
        (2 * np.exp(-(ws_rk[i] + 2 * np.exp(-ws_rk[i]) * np.sqrt(dt))) - 2 * np.exp(-ws_rk[i])) * (np.power(dB[i], 2) - dt) / (2 * np.sqrt(dt))
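# Note (added for clarity): for dX = a(X) dt + b(X) dB, the Milstein method adds
# the correction 0.5 * b(X) * b'(X) * (dB**2 - dt) to the Euler-Maruyama step
# (here b(X) = 2*exp(-X), so 0.5*b*b' = -2*exp(-2X)), raising the strong order of
# convergence from 0.5 to 1; the Runge-Kutta variant replaces the analytic b'
# with a finite-difference approximation.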
# Plot the chart
plt.plot(times, ws_em, label = 'Euler-Maruyama Method')
plt.plot(times, ws_m, label = 'Milstein Method')
plt.plot(times, ws_rk, label = 'First-Order Stochastic Runge-Kutta Method')
plt.legend()
plt.show()
dt = 0.01
t0, t1 = 1, 3
y0, y1 = 1, 2
times = np.arange(t0, t1 + dt * 1, dt)
dB1 = np.random.standard_normal(times.size) * np.sqrt(dt)
dB1[-2] = 0
ws1 = np.empty(times.size)
ws1[0] = y0
dB2 = np.random.standard_normal(times.size) * np.sqrt(dt)
dB2[-2] = 0
ws2 = np.empty(times.size)
ws2[0] = y0
dB3 = np.random.standard_normal(times.size) * np.sqrt(dt)
dB3[-2] = 0
ws3 = np.empty(times.size)
ws3[0] = y0
# Let's use Euler-Maruyama Method
for i in range(times.size - 1):
    ws1[i + 1] = ws1[i] + (y1 - ws1[i]) * dt / (dt * (times.size - i - 1)) + dB1[i]
    ws2[i + 1] = ws2[i] + (y1 - ws2[i]) * dt / (dt * (times.size - i - 1)) + dB2[i]
    ws3[i + 1] = ws3[i] + (y1 - ws3[i]) * dt / (dt * (times.size - i - 1)) + dB3[i]
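# Note (added for clarity): the drift (y1 - w) / (time remaining) steers each path
# toward y1 -- the standard construction of a Brownian bridge pinned at (t0, y0)
# and (t1, y1); zeroing the second-to-last increment (dB[-2] = 0 above) makes the
# final step land exactly on y1.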
# Plot the chart
plt.plot(times, ws1)
plt.plot(times, ws2)
plt.plot(times, ws3)
plt.plot(t0, y0, marker='o', color='k')
plt.plot(t1, y1, marker='o', color='k')
plt.grid(True)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 9.1 Random Numbers
Step2: Example
Step3: Minimal standard random number generator
Step4: Example
Step5: Example
Step6: For its visualization (from https
Step7: 9.1.2 Exponential and normal random numbers
Step8: 9.2 Monte Carlo Simulation
Step9: pseudo-random vs quasi-random
Step10: 9.3 Discrete And Continuous Brownian Motion
Step11: Example
Step12: Example
Step15: 9.4 Stochastic Differential Equations
Step16: Example
Step17: Ito formula
Step18: Euler-Maruyama Method
Step19: Example
Step20: Definition
Step21: Approximation
Step22: Example
|
14,502 | <ASSISTANT_TASK:>
Python Code:
#%% libraries
import pandas as pd
# Create a DataFrame
data = {'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],
'Person':['Sam','Charlie','Amy','Vanessa','Carl','Sarah'],
'Sales':[200,120,340,124,243,350]}
df = pd.DataFrame(data)
df
df.groupby('Company')
by_comp = df.groupby("Company")
by_comp.mean()
df.groupby('Company').mean()
by_comp.std()
by_comp.min()
by_comp.max()
by_comp.count()
by_comp.describe()
by_comp.describe().transpose()
by_comp.describe().transpose()['GOOG']
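# A possible extension (not in the original notebook): .agg() applies several
# aggregations to one column in a single call, for example:
by_comp['Sales'].agg(['mean', 'std', 'count'])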
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can use the .groupby() function to group the data based on the column names. Let's group the data by company name. This will create a DataFrameGroupBy object
Step2: We can store this object as a new variable
Step3: And then call the aggregation methods
Step4: More examples of functions
|
14,503 | <ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
import kfp.components as comp
dataflow_template_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_flex_template/component.yaml')
help(dataflow_template_op)
PROJECT_ID = '[Your PROJECT_ID]'
BIGQUERY_TABLE_SPEC = '[Your PROJECT_ID:DATASET_ID.TABLE_ID]'
GCS_OUTPUT_FOLDER = 'gs://[Your output GCS folder]'
GCS_STAGING_FOLDER = 'gs://[Your staging GCS folder]'
LOCATION = 'us'
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Flex Template'
flex_temp_launch_parameters = {
"parameters": {
"tableRef": BIGQUERY_TABLE_SPEC,
"bucket": GCS_OUTPUT_FOLDER
},
"containerSpecGcsPath": "gs://dataflow-templates/2021-03-29-00_RC00/flex/BigQuery_to_Parquet",
}
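# Note (added for clarity): "parameters" holds the runtime parameters of the flex
# template (the source BigQuery table and the output GCS folder here), while
# containerSpecGcsPath points at the public Google-provided BigQuery-to-Parquet
# flex template spec.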
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch flex template pipeline',
description='Dataflow launch flex template pipeline'
)
def pipeline(
project_id = PROJECT_ID,
location = LOCATION,
launch_parameters = json.dumps(flex_temp_launch_parameters),
staging_dir = GCS_STAGING_FOLDER,
wait_interval = 30):
    dataflow_template_op(
        project_id = project_id,
        location = location,
        launch_parameters = launch_parameters,
        staging_dir = staging_dir,
        wait_interval = wait_interval)
import kfp
pipeline_func = pipeline
run_name = pipeline_func.__name__ + ' run'
kfp.Client().create_run_from_pipeline_func(
pipeline_func,
arguments = {},
run_name = run_name,
experiment_name=EXPERIMENT_NAME,
namespace='default'
)
!gsutil cat $GCS_OUTPUT_FOLDER*
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Load the component using KFP SDK
Step2: 3. Configure job parameters
Step3: 4. Example pipeline that uses the component
Step4: 5. Create pipeline run
Step5: 6. Inspect the output
|
14,504 | <ASSISTANT_TASK:>
Python Code:
import numpy
import librosa
import matplotlib.pyplot as plt

T = 3.0 # duration in seconds
fs = 44100.0 # sampling rate in Hertz
f0 = 440*numpy.logspace(-2, 1, T*fs, endpoint=False, base=2.0) # time-varying frequency
print(f0.min(), f0.max()) # starts at 110 Hz, ends at 880 Hz
t = numpy.linspace(0, T, T*fs, endpoint=False)
x = 0.01*numpy.sin(2*numpy.pi*f0*t)
from IPython.display import Audio
Audio(x, rate=fs)
import essentia
from essentia.standard import ZeroCrossingRate
zcr = ZeroCrossingRate()
frame_sz = 1024
hop_sz = 512
plt.semilogy([zcr(essentia.array(x[i:i+frame_sz])) for i in range(0, len(x), hop_sz)])
F = librosa.util.frame(x, frame_sz, hop_sz)
print(F.shape)
import essentia
from essentia.standard import FrameGenerator
plt.semilogy([zcr(frame) for frame in FrameGenerator(essentia.array(x), frameSize=frame_sz, hopSize=hop_sz)])
from essentia.standard import Spectrum, Windowing, FrameGenerator
hamming_window = Windowing(type='hamming')
spectrum = Spectrum() # we just want the magnitude spectrum
spectrogram = numpy.array([spectrum(hamming_window(frame))
for frame in FrameGenerator(essentia.array(x), frameSize=frame_sz, hopSize=hop_sz)])
print(spectrogram.shape)
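# Note (added for clarity): each real-valued frame of 1024 samples yields
# 1024/2 + 1 = 513 magnitude bins, and the ~260 frames come from stepping a hop
# of 512 samples through the 3-second, 44.1 kHz signal.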
plt.imshow(spectrogram.T, origin='lower', aspect='auto', interpolation='nearest')
plt.ylabel('Spectral Bin Index')
plt.xlabel('Frame Index')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the sweep signal
Step2: Listen to the signal
Step3: Segmentation Using Python List Comprehensions
Step4: librosa.util.frame
Step5: (That being said, in librosa, manual segmentation of a signal is often unnecessary, because the feature extraction methods themselves do segmentation for you.)
Step6: Example
Step7: This spectrogram has 260 frames, each containing 513 frequency bins.
Step8: Finally, plot the spectrogram. We must transpose the spectrogram array such that time is displayed along the horizontal axis, and frequency is along the vertical axis.
|
14,505 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
14,506 | <ASSISTANT_TASK:>
Python Code:
import pandas
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
td = pandas.read_csv('titanic_train.csv')
td.info()
surivors = td[td.Survived==1]
dead = td[td.Survived==0]
plt.figure(figsize=(13,6))
plt.hist(surivors.Fare, alpha=.5, bins=np.arange(0,300,10), label="Survivors")
plt.hist(dead.Fare, alpha=.5, bins=np.arange(0,300,10), label="Died")
plt.legend()
plt.title('Fare Distribution of Passenger Groups')
plt.xlabel('Fare Paid')
plt.ylabel('Number of Passengers')
plt.show()
from scipy.stats import mannwhitneyu
u, p = mannwhitneyu(surivors.Fare, dead.Fare)
print("Results:\n\tU-statistic: %.5f\n\tp-value: %g" % (u, p * 2))
td.info()
valid_age = td.Age[td.Age>0]
valid_fare = td.Fare[td.Age>0]
plt.figure(figsize=(7,4))
plt.scatter(valid_age, valid_fare)
plt.xlim(0,80)
plt.ylim(0,150)
plt.title('Comparison of Age and Fare')
plt.xlabel('Age')
plt.ylabel('Fare')
plt.show()
def linear(data, slope):
    """A Linear Function Method"""
    return data * slope
def chi_sq(data, model, std, dof=1):
    """Function to Determine The chi-squared statistic"""
    return sum(((data - model)/std)**2) / (len(data) - dof)
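# Note (added for clarity): chi_sq computes the reduced chi-squared,
#   chi2_red = (1 / (N - dof)) * sum_i ((data_i - model_i) / std)**2,
# and the grid search below picks the slope that minimizes it.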
slopes = np.linspace(0,2,100)
chi_results = []
for s in slopes:
    model_fare = linear(valid_age,s)
    chi_results.append(chi_sq(valid_fare, model_fare, valid_fare.std(), dof=1))
chi_results = np.array(chi_results)
print("Best Chi_Squared: {}".format(chi_results[chi_results.argmin()]))
print("Best Slope: {}".format(slopes[chi_results.argmin()]))
plt.figure(figsize=(7,4))
plt.scatter(td.Age,td.Fare)
plt.xlim(0,80)
plt.ylim(0,150)
plt.plot(td.Age,linear(td.Age,slopes[chi_results.argmin()]))
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: The data we care about for this hypothesis (Survived, Fare) has no NaN values, so no need to modify.
Step3: Hypothesis
Step4: Based off this graph it is clear that these two distributions are best compared using a Mann-Whitney U-test
Step5: Based off the high U-stat and the very low p-value we can reject the null hypothesis that there is no difference in fare paid between the survivors and the dead.
Step6: There are NaN ages which must be dealt with. In this case they will be ignored.
Step7: Visualize This Data With a Scatter Plot focusing on the highest density area.
Step10: Create a Linear Function and chi-squared statistic function. These will be used to find the best slope for the linear model.
Step11: The initial range of (-20,20,1) was narrowed down to (0,2,100) based off chi-squared being closer to 1.
Step12: Visualize the linear model over the data.
|
14,507 | <ASSISTANT_TASK:>
Python Code:
%%bash
# example of the input file structure and naming: a plain folder with unzipped backward and forward fastq files
ls ../../data/raw/fastq/ | head -n 20
from IPython.display import Image, display
img1 = Image("../../data/processed/fastqc_results/raw/quality_summary_all_samples_1.png",height=400,width=200)
img2 = Image("../../data/processed/fastqc_results/raw/quality_summary_all_samples_2.png",height=100,width=400)
print("Fastqc results of uncleaned fastq-files:")
display(img1)
display(img2)
%%bash
source activate secapr_env
secapr clean_reads -h
%%bash
cat ../../data/raw/adapter_info.txt
from IPython.display import Image, display
img1 = Image("../../data/processed/fastqc_results/cleaned_default_settings/quality_summary_all_samples_1.png",height=400,width=200)
img2 = Image("../../data/processed/fastqc_results/cleaned_default_settings/quality_summary_all_samples_2.png",height=100,width=400)
print("Fastqc results of fastq-files cleaned with default settings:")
display(img1,img2)
from IPython.display import Image, display
img1 = Image("../../data/processed/fastqc_results/custom_settings/quality_summary_all_samples_1.png",height=400,width=200)
img2 = Image("../../data/processed/fastqc_results/custom_settings/quality_summary_all_samples_2.png",height=100,width=400)
print("Fastqc results of fastq-files cleaned with default settings:")
display(img1,img2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Quality-check your raw (and dirty) reads
Step2: The two plots produced by the R-script show summary statistics for each individual test (tests shown on x-axis). The test names carry 3-letter acronyms, and the corresponding full test-name can be found by opening one of the html files. The first plot shows how many occurrences of each test-result (fail,pass,warn) were found for each test among all samples (per-test basis). The second plot shows for each sample (y-axis) which test had which result (per-sample basis). Eventually we want to get rid of all the red in these plots (see below).
Step3: a) Prepare config file
Step4: b) Run secapr clean_reads function
Step5: We ran secapr clean_reads with default settings and we see a clear improvement compared to the quality test results of the raw reads (see plots further up in this document). However, there are still quite a few failed tests and I'm convinced we can do better than that. Check the secapr clean_reads documentation (by adding -h to the command) to see the available options and try some different settings to see if and how the results improve. It helps to look up in one of the html files what the different tests mean and to find a setting in secapr clean_reads that takes care of the specific problem. Preferably all samples should pass all tests (there may still be some warnings) before you continue with further processing of the reads. Below we show an example of how the results can be further improved
|
14,508 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pyJHTDB
t = np.linspace(0, 1, 64)
x = np.zeros((t.shape[0], t.shape[0], 3), np.float32)
x[:, :, 0] = t[np.newaxis, :]
x[:, :, 1] = t[:, np.newaxis]
x[:, :, 2] = .0
lJHTDB = pyJHTDB.libJHTDB()
lJHTDB.initialize()
#Add token
auth_token = "edu.jhu.pha.turbulence.testing-201311" #Replace with your own token here
lJHTDB.add_token(auth_token)
import pyJHTDB.dbinfo
T = pyJHTDB.dbinfo.isotropic1024coarse['time'][-1]
time = np.random.random()*T
u = lJHTDB.getData(
time,
x,
sinterp = 4,
getFunction='getVelocity')
ubox = lJHTDB.getBoxFilter(
time,
x,
field = 'velocity',
filter_width = 5*(2*np.pi / 1024))
lJHTDB.finalize()
e = np.sum(u**2, axis = 2)
ebox = np.sum(ubox**2, axis = 2)
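# Note (added for clarity): e and ebox are the squared velocity magnitudes |u|**2
# over the plane (proportional to kinetic energy per unit mass), for the raw and
# box-filtered velocity fields respectively.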
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (10, 5))
a = fig.add_subplot(121)
a.set_axis_off()
a.imshow(e,
extent = [t[0], t[-1] - t[0], t[0], t[-1] - t[0]],
interpolation = 'none')
a = fig.add_subplot(122)
a.imshow(ebox,
extent = [t[0], t[-1] - t[0], t[0], t[-1] - t[0]],
interpolation = 'none')
lJHTDB.initialize()
x, t = lJHTDB.getPosition(
starttime = 0.1,
endtime = 0.2,
dt = 0.001,
point_coords = 2*np.pi * np.random.random((20, 3)),
steps_to_keep = 50)
lJHTDB.finalize()
fig = plt.figure(figsize = (10, 5))
a = fig.add_subplot(111)
a.plot(x[:, 0], x[:, 1])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I'm going to create a 2D grid of points, and then get the values of the velocity at those points.
Step2: Since the dataset I'm gonna use is the isotropic turbulence dataset, it doesn't really matter what value I choose for the z coordinates, if it's fixed.
Step3: Now we have the velocity stored in u, and we're gonna compute the energy and make a nice plot of it.
Step4: Next, get some trajectories.
Step5: Now, plot trajectories. Not spectacular because they're not that long, but this is the way a simple plot would work for long trajectories as well.
|
14,509 | <ASSISTANT_TASK:>
Python Code:
import urllib.request
rm_site = 'http://www.repeatmasker.org'
fn = 'ce10.fa.out.gz'
url = '%s/genomes/ce10/RepeatMasker-rm405-db20140131/%s' % (rm_site, fn)
urllib.request.urlretrieve(url, fn)
import gzip
import itertools
fh = gzip.open(fn, 'rt')
for ln in itertools.islice(fh, 10):
print(ln, end='')
class Repeat(object):
    def __init__(self, ln):
        # parse fields
        (self.swsc, self.pctdiv, self.pctdel, self.pctins, self.refid,
         self.ref_i, self.ref_f, self.ref_remain, self.orient, self.rep_nm,
         self.rep_cl, self.rep_prior, self.rep_i, self.rep_f, self.unk) = ln.split()
        # int-ize the reference coordinates
        self.ref_i, self.ref_f = int(self.ref_i), int(self.ref_f)
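# Note (added for clarity): RepeatMasker .out coordinates are 1-based and
# inclusive, which is why extract_repeat() below slices
# genome[rep.refid][rep.ref_i-1:rep.ref_f].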
def parse_repeat_masker_db(fn):
    reps = []
    with gzip.open(fn) if fn.endswith('.gz') else open(fn) as fh:
        fh.readline() # skip header
        fh.readline() # skip header
        fh.readline() # skip header
        while True:
            ln = fh.readline()
            if len(ln) == 0:
                break
            reps.append(Repeat(ln.decode('UTF8')))
    return reps
reps = parse_repeat_masker_db('ce10.fa.out.gz')
ucsc_site = 'http://hgdownload.cse.ucsc.edu/goldenPath'
fn = 'chromFa.tar.gz'
urllib.request.urlretrieve("%s/ce10/bigZips/%s" % (ucsc_site, fn), fn)
!tar zxvf chromFa.tar.gz
from collections import defaultdict
def parse_fasta(fns):
ret = defaultdict(list)
for fn in fns:
with open(fn, 'rt') as fh:
for ln in fh:
if ln[0] == '>':
name = ln[1:].rstrip()
else:
ret[name].append(ln.rstrip())
for k, v in ret.items():
ret[k] = ''.join(v)
return ret
genome = parse_fasta(['chrI.fa', 'chrII.fa', 'chrIII.fa', 'chrIV.fa', 'chrM.fa', 'chrV.fa', 'chrX.fa'])
genome['chrI'][:1000] # printing just the first 1K nucleotides
def extract_repeat(rep, genome):
assert rep.refid in genome
return genome[rep.refid][rep.ref_i-1:rep.ref_f]
extract_repeat(reps[0], genome)
extract_repeat(reps[1], genome)
extract_repeat(reps[2], genome)
chapaevs = filter(lambda x: 'DNA/CMC-Chapaev' == x.rep_cl, reps)
[extract_repeat(chapaev, genome) for chapaev in chapaevs]
from operator import attrgetter
' '.join(map(attrgetter('rep_cl'), reps[:60]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Above are the first several lines of the .out.gz file for the roundworm (C. elegans). The columns have headers, which are somewhat helpful. More detail is available in the RepeatMasker documentation under "How to read the results". (Note that in addition to the 14 fields described in the documentation, there's also a 15th ID field.)
Step2: We can parse a file into a list of Repeat objects
Step3: Extracting repeats from the genome in FASTA format
Step4: Let's load chromosome I into a string so that we can see the sequences of the repeats.
Step5: Note the combination of lowercase and uppercase. Actually, that relates to our discussion here. The lowercase stretches are repeats! The UCSC genome sequences use the lowercase/uppercase distinction to make it clear where the repeats are -- and they know this because they ran RepeatMasker on the genome beforehand. In this case, the two repeats you can see are both simple hexamer repeats. Also, note that their position in the genome corresponds to the first two rows of the RepeatMasker database that we printed above.
Step6: Let's specifically try to extract a repeat from the DNA/CMC-Chapaev family.
Step7: How are repeats related?
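One simple way to start answering that question — a sketch using only the reps list parsed above and the standard library — is to tally how often each repeat class/family (the rep_cl field) occurs:
# Tally repeat classes/families across all parsed records
from collections import Counter
class_counts = Counter(rep.rep_cl for rep in reps)
for rep_class, n in class_counts.most_common(10):
    print('%8d  %s' % (n, rep_class))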
|
14,510 | <ASSISTANT_TASK:>
Python Code:
BUCKET='ai-analytics-solutions-kfpdemo' # CHANGE to a bucket you own
import tensorflow as tf
import tensorflow_hub as tfhub
import os
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=[None,None,3]))
model.add(tfhub.KerasLayer("https://tfhub.dev/google/efficientnet/b4/feature-vector/1", name='image_embeddings'))
model.summary()
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def serve(filename):
img = tf.io.read_file(filename[0])
img = tf.io.decode_image(img, channels=3)
img = tf.cast(img, tf.float32) / 255.0
#img = tf.image.resize(img, [380, 380])
return model(img)
path='gs://{}/effnet_image_embedding'.format(BUCKET)
tf.saved_model.save(model, path, signatures={'serving_default': serve})
!saved_model_cli show --all --dir gs://$BUCKET/effnet_image_embedding
%%bigquery
CREATE OR REPLACE MODEL advdata.effnet_image_embed
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/effnet_image_embedding/*')
%%bigquery
SELECT output_0 FROM
ML.PREDICT(MODEL advdata.effnet_image_embed,(
SELECT 'gs://gcs-public-data--met/634108/0.jpg' AS filename))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Embedding model for images
Step2: The model on TensorFlow Hub expects images of a certain size, and provided as normalized arrays.
Step3: Loading model into BigQuery
Step4: From the BigQuery web console, click on "schema" tab for the newly loaded model. You will see that the input is a string called filename and the output is called output_0. The model is computationally expensive.
|
14,511 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
x = tf.constant([True, False, False], tf.bool)
y = tf.constant([True, True, False], tf.bool)
x = tf.constant([True, False, False], tf.bool)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE on notation
Step2: Q5. Given x, return the truth value of NOT x element-wise.
|
14,512 | <ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
urlretrieve('http://sthiele.github.io/data/queens.lp','queens.lp')
urlretrieve('http://sthiele.github.io/data/facts.lp','facts.lp')
from pyasp.asp import *
goptions = ''
soptions = ' 2'
solver = Gringo4Clasp(gringo_options=goptions, clasp_options=soptions)
result = solver.run(['queens.lp', 'facts.lp'], collapseTerms=True, collapseAtoms=False)
print(result)
newfacts = TermSet()
newterm1 = Term('d', ["11"])
newfacts.add(newterm1)
newterm2 = Term('d', ["12"])
newfacts.add(newterm2)
result = solver.run(['queens.lp', 'facts.lp', newfacts.to_file()], collapseTerms=True, collapseAtoms=False)
print(result)
count=1
for s in result :
print('Solution '+str(count)+':')
print(' ', end=' ')
for a in s :
args= ",".join(a.args())
print(a.pred(),'(',args,')',sep='',end=' ')
print()
count+=1
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the pyasp library.
Step2: Create a solver object.
Step3: Start the solver with some input.
Step4: The result is a list of the solutions as TermSets.
Step5: Create your own set of facts.
Step6: Now the result contains 2 solutions to the 12-queens problem.
Step7: Parse and pretty print your solutions.
|
14,513 | <ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
encoded[:100]
len(vocab)
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
batches = get_batches(encoded, 10, 10)
x, y = next(batches)
encoded.shape
x.shape
encoded
print('x\n', x[:10, :])
print('\ny\n', y[:10, :])
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, (batch_size, num_steps), name='inputs')
targets = tf.placeholder(tf.int32, (batch_size, num_steps), name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
def lstm_cell(lstm_size, keep_prob):
cell = tf.contrib.rnn.BasicLSTMCell(lstm_size, reuse=tf.get_variable_scope().reuse)
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# # Use a basic LSTM cell
# lstm = tf.contrib.rnn.BasicLSTMCell(batch_size, reuse=tf.get_variable_scope().reuse)
# # Add dropout to the cell outputs
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell(lstm_size, keep_prob) for _ in range(num_layers)], state_is_tuple=True)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
# https://stackoverflow.com/questions/42669578/tensorflow-1-0-valueerror-attempt-to-reuse-rnncell-with-a-different-variable-s
# def lstm_cell():
# cell = tf.contrib.rnn.NASCell(state_size, reuse=tf.get_variable_scope().reuse)
# return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)
# rnn_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple = True)
# outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state)
# MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)])
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.add(tf.matmul(x, softmax_w), softmax_b)
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='prediction')
return out, logits
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state, scope='layer')
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
tf.train.get_checkpoint_state('checkpoints')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
Step8: LSTM Cell
Step9: RNN Output
Step10: Training loss
Step11: Optimizer
Step12: Build the network
Step13: Hyperparameters
Step14: Time for training
Step15: Saved checkpoints
Step16: Sampling
Step17: Here, pass in the path to a checkpoint and sample from the network.
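To make the sampling step more concrete, here is a small self-contained check of pick_top_n on a toy distribution; the probabilities below are made up purely for illustration.
# Toy check of pick_top_n: with top_n=2, only the two most likely classes are ever drawn
toy_preds = np.array([[0.05, 0.10, 0.50, 0.30, 0.05]])
draws = [pick_top_n(toy_preds, vocab_size=5, top_n=2) for _ in range(20)]
print(draws)  # indices should come only from {2, 3}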
|
14,514 | <ASSISTANT_TASK:>
Python Code:
import cv2
import numpy as np
from scipy import misc
i = misc.ascent()
import matplotlib.pyplot as plt
plt.grid(False)
plt.gray()
plt.axis('off')
plt.imshow(i)
plt.show()
i_transformed = np.copy(i)
size_x = i_transformed.shape[0]
size_y = i_transformed.shape[1]
# This filter detects edges nicely
# It creates a convolution that only passes through sharp edges and straight
# lines.
#Experiment with different values for fun effects.
#filter = [ [0, 1, 0], [1, -4, 1], [0, 1, 0]]
# A couple more filters to try for fun!
filter = [ [-1, -2, -1], [0, 0, 0], [1, 2, 1]]
#filter = [ [-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
# If all the digits in the filter don't add up to 0 or 1, you
# should probably apply a weight so that they do.
# For example, if your filter weights are 1,1,1 1,2,1 1,1,1
# they add up to 10, so you would set a weight of .1 to normalize them
weight = 1
for x in range(1,size_x-1):
for y in range(1,size_y-1):
convolution = 0.0
convolution = convolution + (i[x - 1, y-1] * filter[0][0])
convolution = convolution + (i[x, y-1] * filter[0][1])
convolution = convolution + (i[x + 1, y-1] * filter[0][2])
convolution = convolution + (i[x-1, y] * filter[1][0])
convolution = convolution + (i[x, y] * filter[1][1])
convolution = convolution + (i[x+1, y] * filter[1][2])
convolution = convolution + (i[x-1, y+1] * filter[2][0])
convolution = convolution + (i[x, y+1] * filter[2][1])
convolution = convolution + (i[x+1, y+1] * filter[2][2])
convolution = convolution * weight
if(convolution<0):
convolution=0
if(convolution>255):
convolution=255
i_transformed[x, y] = convolution
# Plot the image. Note the size of the axes -- they are 512 by 512
plt.gray()
plt.grid(False)
plt.imshow(i_transformed)
#plt.axis('off')
plt.show()
new_x = int(size_x/2)
new_y = int(size_y/2)
newImage = np.zeros((new_x, new_y))
for x in range(0, size_x, 2):
for y in range(0, size_y, 2):
pixels = []
pixels.append(i_transformed[x, y])
pixels.append(i_transformed[x+1, y])
pixels.append(i_transformed[x, y+1])
pixels.append(i_transformed[x+1, y+1])
newImage[int(x/2),int(y/2)] = max(pixels)
# Plot the image. Note the size of the axes -- now 256 pixels instead of 512
plt.gray()
plt.grid(False)
plt.imshow(newImage)
#plt.axis('off')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we can use the pyplot library to draw the image so we know what it looks like.
Step2: The image is stored as a numpy array, so we can create the transformed image by just copying that array. Let's also get the dimensions of the image so we can loop over it later.
Step3: Now we can create a filter as a 3x3 array.
Step4: Now let's create a convolution. We will iterate over the image, leaving a 1 pixel margin, and multiply out each of the neighbors of the current pixel by the value defined in the filter.
Step5: Now we can plot the image to see the effect of the convolution!
Step6: This code will show a (2, 2) pooling. The idea here is to iterate over the image, and look at the pixel and it's immediate neighbors to the right, beneath, and right-beneath. Take the largest of them and load it into the new image. Thus the new image will be 1/4 the size of the old -- with the dimensions on X and Y being halved by this process. You'll see that the features get maintained despite this compression!
|
14,515 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from time import time
import numpy as np
import pandas as pd
import random
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from pandas.tseries.offsets import *
import simulated_data
# parameters of simulated data generation
n_series = 6
# lenghts of subject and reference time periods
refh = 12
subh = 1
# probability to correctly classify sample based purely on luck
chance = refh/(subh+refh)
# how much better than luck we want to be to say we detected an anomaly. Default is 5%
cut = chance + (1-chance) * 0.05
print('chance:',chance, '\tcut:', cut)
ref = refh * Hour()
sub = subh * Hour()
# number of training epochs
epochs=60
df = simulated_data.get_simulated_data()
# df = simulated_data.get_simulated_fixed_data()
df.head()
ax = df.plot(figsize=(20,7))
ax.set_xlabel("time", fontsize=14)
def getModel():
model = Sequential()
model.add(Dense(units=n_series, input_shape=(n_series,), activation='relu' ))
# model.add(Dropout(0.5))
model.add(Dense(units=n_series, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(units=1, activation='sigmoid') )
model.compile(loss='binary_crossentropy',optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='hinge', optimizer='sgd', metrics=['binary_accuracy'])
# model.compile(loss='mse',optimizer='rmsprop', metrics=['accuracy'])
# model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['binary_accuracy'])
return model
def plotHist(hist):
es=len(hist.history['loss'])
x = np.linspace(0,es-1,es)
plt.plot(x, hist.history['loss'], '--', linewidth=2, label='loss')
plt.plot(x, hist.history['acc'], '-', linewidth=2, label='acc')
plt.legend()
plt.show()
def check_for_anomaly(ref, sub, count):
y_ref = pd.DataFrame([0] * ref.shape[0])
y_ref.index=ref.index
X_ref=ref
del X_ref['flag']
del X_ref['score']
y_sub = pd.DataFrame([1] * sub.shape[0])
y_sub.index=sub.index
X_sub=sub
del X_sub['flag']
del X_sub['score']
# separate Reference and Subject into Train and Test
X_ref_train, X_ref_test, y_ref_train, y_ref_test = train_test_split(X_ref, y_ref, test_size=0.3, random_state=42)
X_sub_train, X_sub_test, y_sub_train, y_sub_test = train_test_split(X_sub, y_sub, test_size=0.3, random_state=42)
# combine training ref and sub samples
X_train = pd.concat([X_ref_train, X_sub_train])
y_train = pd.concat([y_ref_train, y_sub_train])
# combine testing ref and sub samples
X_test = pd.concat([X_ref_test, X_sub_test])
y_test = pd.concat([y_ref_test, y_sub_test])
X_train = X_train.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
X_train_s, y_train_s = shuffle(X_train, y_train)
m=getModel()
hist = m.fit(X_train_s.values, y_train_s.values, epochs=epochs, verbose=0, shuffle=True, batch_size=256)
loss_and_metrics = m.evaluate(X_test.values, y_test.values)#, batch_size=256)
#print(loss_and_metrics)
if loss_and_metrics[1] > cut:# or not count%5:
plotHist(hist)
return loss_and_metrics[1]
df['score']=0.5
#find min and max timestamps
start = df.index.min()
end = df.index.max()
#round start
start.seconds=0
start.minutes=0
# loop over them
ti=start+ref+sub
count=0
while ti < end + 1 * Minute():
print(count)
startt = time()
ref_start = ti-ref-sub
ref_end = ti-sub
ref_df = df[(df.index >= ref_start) & (df.index < ref_end)]
sub_df = df[(df.index >= ref_end) & (df.index < ti)]
score = check_for_anomaly(ref_df, sub_df, count)
df.loc[(df.index>=ref_end) & (df.index<=ti),['score']] = score
print('\n',ti,"\trefes:" , ref_df.shape[0], "\tsubjects:", sub_df.shape[0], '\tscore:', score)
ti = ti + sub
count=count+1
endt=time()
print("took:", endt-startt)
# if count>2: break
ax = df.plot(figsize=(20,7))
ax.set_xlabel("time", fontsize=14)
plt.savefig('ANN_simulated_score.png')
fig, ax = plt.subplots(figsize=(20,7))
ax.set_xlabel("time", fontsize=14)
df.loc[:,'Detected'] = 0
df.loc[df.score>cut,'Detected']=1
df.head()
ax.plot(df.flag, 'r')
ax.plot(df.score,'g')
ax.fill( df.Detected, 'b', alpha=0.3)
ax.legend(loc='upper left')
plt.show()
fig.savefig('ANN_simulated_shaded.png')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: parameters to set
Step2: generate normal data
Step3: plot timeseries
Step4: create NN
Step5: This function actually checks for anomaly in one time window. It receives both referent period and subject period (one under investigation). If splits in samples in training and testing parts, shuffle them and trains model. If anomaly has been detected it plots ROC. It returns both loss and accuracy.
Step6: Looping over time intervals
Step7: Plots all the series, now it includes AUC values
Step8: Plots auc and shades periods were anomaly has been detected
|
14,516 | <ASSISTANT_TASK:>
Python Code:
def divide(numerator, denominator):
result = numerator/denominator
print("result = %f" % result)
divide(1.0, 0)
def divide1(numerator, denominator):
try:
result = numerator/denominator
print("result = %f" % result)
except:
print("You can't divide by 0!")
divide1(1.0, 'a')
divide1(1.0, 2)
divide1("x", 2)
def divide2(numerator, denominator):
try:
result = numerator / denominator
print("result = %f" % result)
except (ZeroDivisionError, TypeError) as err:
print("Got an exception: %s" % err)
divide2(1, "X")
divide2("x, 2)
# Handle division by 0 by using a small number
SMALL_NUMBER = 1e-3
def divide3(numerator, denominator):
try:
result = numerator/denominator
except ZeroDivisionError:
result = numerator/SMALL_NUMBER
print("result = %f" % result)
except Exception as err:
print("Different error than division by zero:", err)
divide3(1,0)
divide3("1",0)
import pandas as pd
def validateDF(df):
"
:param pd.DataFrame df: should have a column named "hours"
if not "hours" in df.columns:
raise ValueError("DataFrame should have a column named 'hours'.")
df = pd.DataFrame({'hours': range(10) })
validateDF(df)
df = pd.DataFrame({'years': range(10) })
validateDF(df)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Why didn't we catch this SyntaxError?
Step3: What do you do when you get an exception?
|
14,517 | <ASSISTANT_TASK:>
Python Code:
filename = 'resultat.nc'
import numpy as np
import matplotlib.pyplot as plt
from pylab import *
import cartopy.crs as ccrs
from netCDF4 import Dataset
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
data = Dataset(filename)
longitude=data.variables['longitude'][:]
latitude=data.variables['latitude'][:]
altitude=data.variables['altitude'][:]
Time = data.variables['Time'][:]
Ls = data.variables['Ls'][:]
dafirst = Time[0]
daint = Time[1] - dafirst
dalast = dafirst + (len(Time)-1)*daint
year = 0.
add = np.linspace(dafirst,dalast,num=len(Time)) ; add[0] = 0.
for iii in range(1,len(Ls)):
if Ls[iii] - Ls[iii-1] < 0: year = year+1.
add[iii] = year*360.
Ls_true = add + Ls
# User parameters -------------------------------------------------
earthtopo = False # overlay present-day coastlines
varname = 'tsurf'
vmin = 120
vmax = 280
# Code ------------------------------------------------------------
dataplt = data.variables[varname][:,:,:]
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
if (earthtopo): ax.coastlines(resolution="110m",linewidth=1)
gl = ax.gridlines(linestyle='--',color='black',
draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
clevs = np.linspace(vmin,vmax,29)
plt.contourf(longitude, latitude, np.mean(dataplt[:,:,:],axis=0),
clevs, transform=ccrs.PlateCarree(),cmap="jet")
plt.title(r"Température de surface moyenne", size=14)
cb = plt.colorbar(ax=ax, orientation="vertical", pad=0.02, aspect=16, shrink=0.8)
cb.set_label(r'K',size=12,rotation=0,labelpad=15)
cb.ax.tick_params(labelsize=10)
plt.show()
def psatw(temp):
# METHOD GOFF GRATCH (HygroLP) - OVER WATER
# -----------------------------------------
log10ew = -7.90298*(373.16/temp-1) \
+ 5.02808 * np.log10(373.16/temp) \
- 1.3816e-7 * (10**(11.344 * (1-temp/373.16))-1) \
+ 8.1328e-3 * (10**(-3.49149 *(373.16/temp-1))-1) \
+ np.log10(1013.246)
return 100 * (10**(log10ew))
def psati(temp):
# METHOD GOFF GRATCH (HygroLP) - OVER ICE
# ---------------------------------------
log10ei = -9.09718*(273.16/temp-1) \
- 3.56654*np.log10(273.16/temp) \
+ 0.876793*(1-temp/273.16) \
+ np.log10(6.1071)
return 100 * (10**(log10ei))
tzero = 273.15
temp = np.linspace(-80+tzero,tzero,81)
plt.yscale('log')
plt.plot(temp,psatw(temp))
plt.plot(temp,psati(temp))
plt.show()
# User parameters -------------------------------------------------
earthtopo = False # overlay present-day coastlines
ph2oatmo = 0.05e-2*610. # assumed mean water vapor partial pressure
vmin = 0.
vmax = 1.
# Code ------------------------------------------------------------
tsurfnc = data.variables['tsurf'][:,:,:]
dataplt = ph2oatmo/psati(tsurfnc)
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
if (earthtopo): ax.coastlines(resolution="110m",linewidth=1)
gl = ax.gridlines(linestyle='--',color='black',
draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
clevs = np.linspace(vmin,vmax,21)
plt.contourf(longitude, latitude, np.mean(dataplt[:,:,:],axis=0),
clevs, transform=ccrs.PlateCarree(),cmap="jet")
plt.title(r"Saturation ratio", size=14)
cb = plt.colorbar(ax=ax, orientation="vertical", pad=0.02, aspect=16, shrink=0.8)
cb.set_label(r'NU',size=12,rotation=0,labelpad=15)
cb.ax.tick_params(labelsize=10)
plt.show()
# User parameters -------------------------------------------------
earthtopo = False # overlay present-day coastlines
year_user = 1 # simulation year to inspect
Ls_user = 90. # chosen solar longitude
varname = 'tsurf'
vmin = 80.
vmax = 280.
# Code ------------------------------------------------------------
Ls_true_user = year_user*360. + Ls_user
Ls_ind = np.where(abs(Ls_true-Ls_true_user)==
abs(Ls_true-Ls_true_user).min())[0]
print("La valeur la plus proche trouvée est Ls = "
+ str(Ls_true[Ls_ind]-year_user*360.)
+ " pour l'année " + str(year_user))
# Code ------------------------------------------------------------
var = data.variables[varname][:,:,:]
dataplt = var
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
if (earthtopo): ax.coastlines(resolution="110m",linewidth=1)
gl = ax.gridlines(linestyle='--',color='black',
draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
clevs = np.linspace(vmin,vmax,29)
plt.contourf(longitude, latitude, np.squeeze(dataplt[Ls_ind,:,:]),
clevs, transform=ccrs.PlateCarree(),cmap="jet")
plt.title(r"Température de surface", size=14)
cb = plt.colorbar(ax=ax, orientation="vertical", pad=0.02, aspect=16, shrink=0.8)
cb.set_label(r'K',size=12,rotation=0,labelpad=15)
cb.ax.tick_params(labelsize=10)
plt.show()
# User parameters -------------------------------------------------
earthtopo = False # overlay present-day coastlines
year_user = 1 # simulation year to inspect
Ls_user = 270. # chosen solar longitude
ph2oatmo = 0.05e-2*610. # assumed mean water vapor partial pressure
vmin = 0.
vmax = 1.
# Code ------------------------------------------------------------
Ls_true_user = year_user*360. + Ls_user
Ls_ind = np.where(abs(Ls_true-Ls_true_user)==
abs(Ls_true-Ls_true_user).min())[0]
print("La valeur la plus proche trouvée est Ls = "
+ str(Ls_true[Ls_ind]-year_user*360.)
+ " pour l'année " + str(year_user))
# Code ------------------------------------------------------------
tsurfnc = data.variables['tsurf'][:,:,:]
dataplt = ph2oatmo/psati(tsurfnc)
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
if (earthtopo): ax.coastlines(resolution="110m",linewidth=1)
gl = ax.gridlines(linestyle='--',color='black',
draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
clevs = np.linspace(vmin,vmax,29)
plt.contourf(longitude, latitude, np.squeeze(dataplt[Ls_ind,:,:]),
clevs, transform=ccrs.PlateCarree(),cmap="jet")
plt.title(r"Saturation ratio", size=14)
cb = plt.colorbar(ax=ax, orientation="vertical", pad=0.02, aspect=16, shrink=0.8)
cb.set_label(r'NU',size=12,rotation=0,labelpad=15)
cb.ax.tick_params(labelsize=10)
plt.show()
def psatco2(temp):
return 1.382 * 1e12 * np.exp(-3182.48/temp)
temp = np.linspace(100,200,81)
plt.yscale('log')
plt.plot(temp,psatco2(temp))
plt.show()
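# Sketch (not in the original notebook): invert the psatco2 formula above to get the
# CO2 frost-point temperature for a given pressure, T = -3182.48 / ln(p / 1.382e12).
# With the assumed surface pressure of 610 Pa this gives roughly 148 K.
pco2 = 610.
T_frost_co2 = -3182.48 / np.log(pco2 / (1.382 * 1e12))
print('CO2 frost point at %.0f Pa: %.1f K' % (pco2, T_frost_co2))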
# User parameters -------------------------------------------------
earthtopo = False # overlay present-day coastlines
pco2atmo = 610. # CO2 pressure
vmin = 0.
vmax = 1.
# Code ------------------------------------------------------------
tsurfnc = data.variables['tsurf'][:,:,:]
dataplt = pco2atmo/psatco2(tsurfnc)
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
if (earthtopo): ax.coastlines(resolution="110m",linewidth=1)
gl = ax.gridlines(linestyle='--',color='black',
draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
clevs = np.linspace(vmin,vmax,21)
plt.contourf(longitude, latitude, np.mean(dataplt[:,:,:],axis=0),
clevs, transform=ccrs.PlateCarree(),cmap="jet")
plt.title(r"Saturation ratio", size=14)
cb = plt.colorbar(ax=ax, orientation="vertical", pad=0.02, aspect=16, shrink=0.8)
cb.set_label(r'NU',size=12,rotation=0,labelpad=15)
cb.ax.tick_params(labelsize=10)
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Carte en moyenne temporelle sur la totalité de l'expérience
Step2: Carte en moyenne temporelle de $p_{sat}$ pour $H_2O$
Step3: Carte à $L_s$ donné de $p_{sat}$ pour $H_2O$
Step4: Carte en moyenne temporelle de $p_{sat}$ pour $CO_2$
|
14,518 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from numba import njit
arr2d = np.arange(20 * 30, dtype=float).reshape(20,30)
%%timeit
np.sum(arr2d)
def py_sum(arr):
M, N = arr.shape
sum = 0.0
for i in range(M):
for j in range(N):
sum += arr[i,j]
return sum
%%timeit
py_sum(arr2d)
fast_sum = njit(py_sum)
%%timeit -n1 -r1
fast_sum(arr2d)
%%timeit
fast_sum(arr2d)
fast_sum.signatures
fast_sum.inspect_types()
data = np.random.randn(2000, 2000)
def busca_min(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
busca_min(data)
%%timeit
busca_min(data)
stats = %prun -s cumtime -rq busca_min(data)
stats.print_stats()
%load_ext line_profiler
stats = %lprun -f busca_min -r busca_min(data)
stats.print_stats()
mx, my = busca_min(data)
mx.size / data.size
mx.size
def busca_min_np(malla):
minimos = np.zeros_like(malla, dtype=bool)
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimos[i, j] = True
return np.nonzero(minimos)
np.testing.assert_array_equal(busca_min(data)[0], busca_min_np(data)[0])
np.testing.assert_array_equal(busca_min(data)[1], busca_min_np(data)[1])
%timeit busca_min_np(data)
busca_min_jit = njit(busca_min)
busca_min_jit(data)
%timeit busca_min_jit(data)
busca_min_np_jit = njit(busca_min_np)
busca_min_np_jit(data)
@njit
def busca_min_np2_jit(malla):
minimos = np.zeros_like(malla, np.bool_) # <-- Cambiar esta línea
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimos[i, j] = True
return np.nonzero(minimos)
busca_min_np2_jit(data)
%timeit busca_min_np2_jit(data)
%matplotlib inline
import matplotlib.pyplot as plt
from numpy import sin, pi
# Constants
R_a = 287.05287 # J/(Kg·K)
g0 = 9.80665 # m/s^2
T0 = 288.15 # K
p0 = 101325.0 # Pa
alpha = np.array([-6.5e-3, 0.0]) # K / m
# Computed constants
T1 = T0 + alpha[0] * 11000.0
p1 = p0 * (T0 / (T0 + alpha[0] * 11000.0)) ** (g0 / (R_a * alpha[0]))
def atm(h):
Standard atmosphere temperature, pressure and density.
Parameters
----------
h : array-like
Geopotential altitude, m.
h = np.atleast_1d(h).astype(float)
scalar = (h.size == 1)
assert len(h.shape) == 1
T = np.empty_like(h)
p = np.empty_like(h)
rho = np.empty_like(h)
# Actually compute the values
_atm(h, T, p, rho)
if scalar:
T = T[0]
p = p[0]
rho = rho[0]
return T, p, rho
@njit
def _atm(h, T, p, rho):
for ii in range(h.size):
if 0.0 <= h[ii] < 11000.0:
T[ii] = T0 + alpha[0] * h[ii]
p[ii] = p0 * (T0 / (T0 + alpha[0] * h[ii])) ** (g0 / (R_a * alpha[0]))
rho[ii] = p[ii] / (R_a * T[ii])
elif 11000.0 <= h[ii] <= 20000.0:
T[ii] = T1 # + alpha[1] * (h[ii] - 11000.0)
p[ii] = p1 * np.exp(-g0 * (h[ii] - 11000.0) / (R_a * T1))
rho[ii] = p[ii] / (R_a * T[ii])
# aeropython: preserve
h = np.linspace(0, 20000)
T, p, _ = atm(h)
fig, ax1 = plt.subplots()
l1, = ax1.plot(T - 273, h, color="C0")
ax1.set_xlabel("T (°C)")
ax2 = ax1.twiny()
l2, = ax2.plot(p, h, color="C1")
ax2.set_xlabel("p (Pa)")
ax1.legend((l1, l2), ["Temperature", "Pressure"], loc=0)
ax1.grid()
@njit
def a_mn_point(P, a, b, xi, eta, mm, nn):
Navier series coefficient for concentrated load.
return 4 * P * sin(mm * pi * xi / a) * sin(nn * pi * eta / b) / (a * b)
@njit
def plate_displacement(xx, yy, ww, a, b, P, xi, eta, D, max_m, max_n):
max_i, max_j = ww.shape
for mm in range(1, max_m):
for nn in range(1, max_n):
for ii in range(max_i):
for jj in range(max_j):
a_mn = a_mn_point(P, a, b, xi, eta, mm, nn)
ww[ii, jj] += (a_mn / (mm**2 / a**2 + nn**2 / b**2)**2
* sin(mm * pi * xx[ii, jj] / a)
* sin(nn * pi * yy[ii, jj] / b)
/ (pi**4 * D))
# aeropython: preserve
# Plate geometry
a = 1.0 # m
b = 1.0 # m
h = 50e-3 # m
# Material properties
E = 69e9 # Pa
nu = 0.35
# Series terms
max_m = 16
max_n = 16
# Computation points
# NOTE: With an odd number of points the center of the place is included in
# the grid
NUM_POINTS = 101
# Load
P = 10e3 # N
xi = 3 * a / 4
eta = a / 2
# Flexural rigidity
D = h**3 * E / (12 * (1 - nu**2))
# ---
# Set up domain
x = np.linspace(0, a, num=NUM_POINTS)
y = np.linspace(0, b, num=NUM_POINTS)
xx, yy = np.meshgrid(x, y)
# Compute displacement field
ww = np.zeros_like(xx)
plate_displacement(xx, yy, ww, a, b, P, xi, eta, D, max_m, max_n)
# Print maximum displacement
w_max = abs(ww).max()
print("Maximum displacement = %14.12f mm" % (w_max * 1e3))
print("alpha = %7.5f" % (w_max / (P * a**2 / D)))
print("alpha * P a^2 / D = %6.4f mm" % (0.01160 * P * a**2 / D * 1e3))
plt.contourf(xx, yy, ww)
plt.colorbar()
# Esta celda da el estilo al notebook
from IPython.core.display import HTML
css_file = './css/aeropython.css'
HTML(open(css_file, "r").read())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ¿Impresionado? La primera vez que hemos llamado a la función, Python ha generado el código correspondiente al tipo de datos que le hemos pasado. Podemos verlo aquí
Step2: E imprimir el código generado así
Step3: Entendiendo numba
Step4: Y copiemos directamente la función original
Step5: Paso 0
Step6: Parece que está habiendo demasiadas llamadas a list.append, aunque representan un porcentaje pequeño del tiempo de ejecución.
Step7: Paso 1
Step8: Tenemos que más de un 10 % de los elementos de la matriz cumplen la condición de ser «mínimos locales», así que no es nada despreciable. Esto en nuestro ejemplo hace un total de más de 400 000 elementos
Step9: En lugar de esto, lo que vamos a hacer va a ser crear otro array, de la misma forma que nuestros datos, y almacenar un valor True en aquellos elementos que cumplan la condición de mínimo local. De esta forma cumplimos también una de las reglas de oro de Software Carpentry
Step10: Encima puedo aprovechar la estupenda función nonzero de NumPy. Compruebo que las salidas son iguales
Step11: Y evalúo el rendimiento de la nueva función
Step12: Como era de esperar, los tiempos son parecidos, porque no he optimizado el cuello de botella que son las comprobaciones de los arrays. Al menos, ya no tenemos dos objetos en memoria que van a crecer de manera aleatoria
Step13: ¿Qué pasa si hacemos lo mismo con la versión que no utiliza listas?
Step14: Obtenemos un error porque numba no reconoce la función np.zeros_like con los argumentos que le hemos pasado. Si acudimos a la documentación http
Step15: Lo hemos conseguido
Step17: La atmósfera estándar
Step19: Solución de Navier de una placa plana
|
14,519 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)
# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
print("Shape: ", my_variable.shape)
print("DType: ", my_variable.dtype)
print("As NumPy: ", my_variable.numpy())
print("A variable:", my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.argmax(my_variable))
# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, [1,4]))
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])
# Not allowed as it resizes the variable:
try:
a.assign([1.0, 2.0, 3.0])
except Exception as e:
print(f"{type(e).__name__}: {e}")
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])
# a and b are different
print(a.numpy())
print(b.numpy())
# There are other versions of assign
print(a.assign_add([2,3]).numpy()) # [7. 9.]
print(a.assign_sub([7,9]).numpy()) # [0. 0.]
# Create a and b; they will have the same name but will be backed by
# different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name
print(a == b)
step_counter = tf.Variable(1, trainable=False)
with tf.device('CPU:0'):
# Create some tensors
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
with tf.device('CPU:0'):
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.Variable([[1.0, 2.0, 3.0]])
with tf.device('GPU:0'):
# Element-wise multiply
k = a * b
print(k)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 変数の概要
Step2: 変数の作成
Step3: 変数の外観と動作はテンソルに似ており、実際にデータ構造が tf.Tensor で裏付けられています。テンソルのように dtype と形状を持ち、NumPy にエクスポートできます。
Step4: ほとんどのテンソル演算は期待どおりに変数を処理しますが、変数は変形できません。
Step5: 上記のように、変数はテンソルによって裏付けられています。テンソルは tf.Variable.assign を使用して再割り当てできます。assign を呼び出しても、(通常は)新しいテンソルは割り当てられません。代わりに、既存テンソルのメモリが再利用されます。
Step6: 演算でテンソルのような変数を使用する場合、通常は裏付けているテンソルで演算します。
Step7: ライフサイクル、命名、監視
Step8: 変数名は、モデルの保存と読み込みを行う際に維持されます。デフォルトでは、モデル内の変数は一意の変数名を自動的に取得するため、必要がない限り自分で割り当てる必要はありません。
Step9: 変数とテンソルの配置
Step10: あるデバイスで変数またはテンソルの場所を設定し、別のデバイスで計算を行うことができます。この処理ではデバイス間でデータをコピーする必要があるため、遅延が発生します。
|
14,520 | <ASSISTANT_TASK:>
Python Code:
import numpy as np # for np.allclose() to check that S-params are similar
import skrf as rf
rf.stylely()
# reference LC circuit made in Designer
LC_designer = rf.Network('designer_capacitor_30_80MHz_simple.s2p')
# scikit-rf: manually connecting networks
line = rf.media.DefinedGammaZ0(frequency=LC_designer.frequency, z0=50)
LC_manual = line.inductor(24e-9) ** line.capacitor(70e-12)
# scikit-rf: using Circuit builder
port1 = rf.Circuit.Port(frequency=LC_designer.frequency, name='port1', z0=50)
port2 = rf.Circuit.Port(frequency=LC_designer.frequency, name='port2', z0=50)
cap = rf.Circuit.SeriesImpedance(frequency=LC_designer.frequency, name='cap', z0=50,
Z=1/(1j*LC_designer.frequency.w*70e-12))
ind = rf.Circuit.SeriesImpedance(frequency=LC_designer.frequency, name='ind', z0=50,
Z=1j*LC_designer.frequency.w*24e-9)
# NB: it is also possible to create 2-port lumped elements like:
# line = rf.media.DefinedGammaZ0(frequency=LC_designer.frequency, z0=50)
# cap = line.capacitor(70e-12, name='cap')
# ind = line.inductor(24e-9, name='ind')
connections = [
[(port1, 0), (cap, 0)],
[(cap, 1), (ind, 0)],
[(ind, 1), (port2, 0)]
]
circuit = rf.Circuit(connections)
LC_from_circuit = circuit.network
# testing the equivalence of the results
print(np.allclose(LC_designer.s, LC_manual.s))
print(np.allclose(LC_designer.s, LC_from_circuit.s))
circuit.plot_graph(network_labels=True, edge_labels=True, port_labels=True)
# Reference results from ANSYS Designer
LCC_designer = rf.Network('designer_capacitor_30_80MHz_adv.s2p')
# scikit-rf: usual way, but this time this is more tedious to deal with connection and port number
freq = LCC_designer.frequency
line = rf.media.DefinedGammaZ0(frequency=freq, z0=50)
elements1 = line.resistor(1e-2) ** line.inductor(24e-9) ** line.capacitor(70e-12)
elements2 = line.resistor(20e6)
T_in = line.tee()
T_out = line.tee()
ntw = rf.connect(T_in, 1, elements1, 0)
ntw = rf.connect(ntw, 2, elements2, 0)
ntw = rf.connect(ntw, 1, T_out, 1)
ntw = rf.innerconnect(ntw, 1, 2)
LCC_manual = ntw ** line.shunt_capacitor(50e-12)
# scikit-rf: using Circuit builder
freq = LCC_designer.frequency
port1 = rf.Circuit.Port(frequency=freq, name='port1', z0=50)
port2 = rf.Circuit.Port(frequency=freq, name='port2', z0=50)
line = rf.media.DefinedGammaZ0(frequency=freq, z0=50)
cap = line.capacitor(70e-12, name='cap')
ind = line.inductor(24e-9, name='ind')
res_series = line.resistor(1e-2, name='res_series')
res_parallel = line.resistor(20e6, name='res_parallel')
cap_shunt = line.capacitor(50e-12, name='cap_shunt')
ground = rf.Circuit.Ground(frequency=freq, name='ground', z0=50)
connections = [
[(port1, 0), (res_series, 0), (res_parallel, 0)],
[(res_series, 1), (cap, 0)],
[(cap, 1), (ind, 0)],
[(ind, 1), (cap_shunt, 0), (res_parallel, 1), (port2, 0)],
[(cap_shunt, 1), (ground, 0)],
]
circuit = rf.Circuit(connections)
LCC_from_circuit = circuit.network
# testing the equivalence of the results
print(np.allclose(LCC_designer.s, LCC_manual.s))
print(np.allclose(LCC_designer.s, LCC_from_circuit.s))
circuit.plot_graph(network_labels=True, edge_labels=True, port_labels=True)
# Reference result calculated from Designer
passband_designer = rf.Network('designer_bandpass_filter_450_550MHz.s2p')
# scikit-rf: the filter by cascading all lumped-elements
freq = passband_designer.frequency
passband_manual = line.shunt_capacitor(25.406e-12) ** line.shunt_inductor(4.154e-9) ** \
line.capacitor(2.419e-12) ** line.inductor(43.636e-9) ** \
line.shunt_capacitor(25.406e-12) ** line.shunt_inductor(4.154e-9)
# scikit-rf: the filter with the Circuit builder
freq = passband_designer.frequency
line = rf.media.DefinedGammaZ0(frequency=freq)
C1 = line.capacitor(25.406e-12, name='C1')
C2 = line.capacitor(2.419e-12, name='C2')
C3 = line.capacitor(25.406e-12, name='C3')
L1 = line.inductor(4.154e-9, name='L1')
L2 = line.inductor(43.636e-9, name='L2')
L3 = line.inductor(4.154e-9, name='L3')
port1 = rf.Circuit.Port(frequency=freq, name='port1', z0=50)
port2 = rf.Circuit.Port(frequency=freq, name='port2', z0=50)
ground1 = rf.Circuit.Ground(frequency=freq, name='ground1', z0=50)
ground2 = rf.Circuit.Ground(frequency=freq, name='ground2', z0=50)
ground3 = rf.Circuit.Ground(frequency=freq, name='ground3', z0=50)
ground4 = rf.Circuit.Ground(frequency=freq, name='ground4', z0=50)
connections = [
[(port1, 0), (C1, 0), (L1, 0), (C2, 0)],
[(C2, 1), (L2, 0)],
[(L2, 1), (C3, 0), (L3, 0), (port2, 0)],
# grounding must be done on ground ntw having different names
[(C1, 1), (ground1, 0)],
[(C3, 1), (ground2, 0)],
[(L1, 1), (ground3, 0)],
[(L3, 1), (ground4, 0)],
]
circuit = rf.Circuit(connections)
passband_circuit = circuit.network
passband_circuit.name = 'Pass-band circuit'
passband_circuit.plot_s_db(m=0, n=0, lw=2)
passband_circuit.plot_s_db(m=1, n=0, lw=2)
passband_designer.plot_s_db(m=0, n=0, lw=2, ls='-.')
passband_designer.plot_s_db(m=1, n=0, lw=2, ls='-.')
circuit.plot_graph(network_labels=True, port_labels=True, edge_labels=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: LC Series Circuit
Step2: A More Advanced Equivalent Model
Step3: Pass band filter
|
14,521 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
tf.__version__
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=False)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.labels[0:5]
data.train.labels[0:5]
data.test.cls = data.test.labels #np.array([label.argmax() for label in data.test.labels])
data.train.cls = data.train.labels #np.array([label.argmax() for label in data.train.labels])
data.test.cls[0:5]
data.train.cls[0:5]
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
x = tf.placeholder(tf.float32, [None, img_size_flat])
y_true = tf.placeholder(tf.int64, [None])
y_true_cls = tf.placeholder(tf.int64, [None])
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
biases = tf.Variable(tf.zeros([num_classes]))
logits = tf.matmul(x, weights) + biases
y_pred = tf.nn.softmax(logits)
y_pred_cls = tf.argmax(y_pred, dimension=1)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.initialize_all_variables())
batch_size = 1000
def optimize(num_iterations):
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
# Note that the placeholder for y_true_cls is not set
# because it is not used during training.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
feed_dict_test = {x: data.test.images,
y_true: data.test.labels,
y_true_cls: data.test.cls}
def print_accuracy():
# Use TensorFlow to compute the accuracy.
acc = session.run(accuracy, feed_dict=feed_dict_test)
# Print the accuracy.
print("Accuracy on test-set: {0:.1%}".format(acc))
def print_confusion_matrix():
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the predicted classifications for the test-set.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
# Make various adjustments to the plot.
plt.tight_layout()
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
def plot_example_errors():
# Use TensorFlow to get a list of boolean values
# whether each test-image has been correctly classified,
# and a list for the predicted class of each image.
correct, cls_pred, logits_view, y_pred_view = session.run([correct_prediction, y_pred_cls, logits, y_pred],
feed_dict=feed_dict_test)
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
logits_view = logits_view[incorrect]
y_pred_view = y_pred_view[incorrect]
np.set_printoptions(suppress=True)
np.set_printoptions(precision=3)
# Print logits and softmax (y_pred) of logits, ir order
for i in range(9):
print( "Logits: %s" % (np.array( logits_view[i]) ) )
print( "Softmx: %s" % (np.array( y_pred_view[i]) ) )
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def plot_weights():
# Get the values for the weights from the TensorFlow variable.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Create figure with 3x4 sub-plots,
# where the last 2 sub-plots are unused.
fig, axes = plt.subplots(3, 4)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Only use the weights for the first 10 sub-plots.
if i<10:
# Get the weights for the i'th digit and reshape it.
# Note that w.shape == (img_size_flat, 10)
image = w[:, i].reshape(img_shape)
# Set the label for the sub-plot.
ax.set_xlabel("Weights: {0}".format(i))
# Plot the image.
ax.imshow(image, vmin=w_min, vmax=w_max, cmap='seismic')
# Remove ticks from each sub-plot.
ax.set_xticks([])
ax.set_yticks([])
print_accuracy()
plot_example_errors()
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
plot_weights()
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_accuracy()
plot_example_errors()
plot_weights()
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_accuracy()
plot_example_errors()
plot_weights()
print_confusion_matrix()
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
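# Editor's addition (sketch only, not part of the original tutorial): per-digit accuracy
# built on the same confusion matrix used above. It assumes the same `data`, `session`,
# `y_pred_cls` and `feed_dict_test` objects that the helper functions rely on.
def print_per_class_accuracy():
    # Predicted classes for the whole test-set.
    cls_true = data.test.cls
    cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
    # Per-class accuracy is the diagonal of the confusion matrix over the row sums.
    cm = confusion_matrix(y_true=cls_true, y_pred=cls_pred)
    per_class = cm.diagonal() / cm.sum(axis=1)
    for digit, acc in enumerate(per_class):
        print("Digit {0}: {1:.1%}".format(digit, acc))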
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step2: Load Data
Step3: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step4: One-Hot Encoding
Step5: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
Step6: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
Step7: Data dimensions
Step8: Helper-function for plotting images
Step9: Plot a few images to see if data is correct
Step10: TensorFlow Graph
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
Step13: Variables to be optimized
Step14: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
Step15: Model
Step16: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
Step17: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
Step18: Cost-function to be optimized
Step19: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step20: Optimization method
Step21: Performance measures
Step22: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step23: TensorFlow Run
Step24: Initialize variables
Step25: Helper-function to perform optimization iterations
Step26: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
Step27: Helper-functions to show performance
Step28: Function for printing the classification accuracy on the test-set.
Step29: Function for printing and plotting the confusion matrix using scikit-learn.
Step30: Function for plotting examples of images from the test-set that have been mis-classified.
Step31: Helper-function to plot the model weights
Step32: Performance before any optimization
Step33: Performance after 1 optimization iteration
Step34: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
Step35: Performance after 10 optimization iterations
Step36: Performance after 1000 optimization iterations
Step37: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
Step38: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
Step39: We are now done using TensorFlow, so we close the session to release its resources.
|
14,522 | <ASSISTANT_TASK:>
Python Code:
!nvidia-smi
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
logdir = '/root/pipeline/logs/tensorflow'
import numpy as np
import matplotlib.pyplot as plt
import datetime
from tensorflow.python.framework import ops
from tensorflow.python.platform import gfile
from IPython.display import clear_output, Image, display, HTML
matrix1 = tf.placeholder("float",name="matrix1")
matrix2 = tf.placeholder("float",name="matrix2")
product = tf.matmul(matrix1, matrix2)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
result = sess.run(product,feed_dict={matrix1: [[3., 3.]], matrix2: [[6.],[6.]]})
print result
sess.close()
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
with tf.device("/gpu:0"):
result = sess.run(product,feed_dict={matrix1: [[3., 3.]], matrix2: [[6.],[6.]]})
print result
state = tf.Variable(0, name="counter")
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init_op)
print sess.run(state)
for _ in range(3):
sess.run(update)
print sess.run(state)
%matplotlib inline
x_batch = np.linspace(-1, 1, 101)
y_batch = x_batch * 2 + np.random.randn(*x_batch.shape) * 0.3
plt.scatter(x_batch, y_batch)
x = tf.placeholder(tf.float32, shape=(None,), name="x")
y = tf.placeholder(tf.float32, shape=(None,), name="y")
w = tf.Variable(np.random.normal(), name="W")
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
y_pred = tf.mul(w, x)
y0 = sess.run(y_pred, {x: x_batch})
plt.figure(1)
plt.scatter(x_batch, y_batch)
plt.plot(x_batch, y0)
cost = tf.reduce_mean(tf.square(y_pred - y))
summary_op = tf.scalar_summary("cost", cost)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(cost)
summary_writer = tf.train.SummaryWriter(logdir, sess.graph_def)
for t in range(30):
cost_t, summary, _ = sess.run([cost, summary_op, train_op], {x: x_batch, y: y_batch})
summary_writer.add_summary(summary, t)
print cost_t.mean()
y_pred_batch = sess.run(y_pred, {x: x_batch})
plt.figure(1)
plt.scatter(x_batch, y_batch)
plt.plot(x_batch, y_pred_batch)
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
    iframe = """
        <iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
display(HTML(iframe))
tmp_def = rename_nodes(sess.graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
import tensorflow.examples.tutorials.mnist.input_data as input_data
#import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
x_image = tf.reshape(x, [-1,28,28,1])
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(100):
batch = mnist.train.next_batch(50)
if i%10 == 0:
train_accuracy = accuracy.eval(session=sess, feed_dict={
x:batch[0], y_: batch[1], keep_prob: 1.0})
print "step %d, training accuracy %g"%(i, train_accuracy)
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print "test accuracy %g"%accuracy.eval(session=sess, feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
tmp_def = rename_nodes(sess.graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
sess.close()
ops.reset_default_graph()
from tensorflow.models.rnn import rnn_cell, seq2seq
sess = tf.InteractiveSession()
seq_length = 5
batch_size = 64
vocab_size = 7
embedding_dim = 50
memory_dim = 100
enc_inp = [tf.placeholder(tf.int32, shape=(None,),
name="inp%i" % t)
for t in range(seq_length)]
labels = [tf.placeholder(tf.int32, shape=(None,),
name="labels%i" % t)
for t in range(seq_length)]
weights = [tf.ones_like(labels_t, dtype=tf.float32)
for labels_t in labels]
dec_inp = ([tf.zeros_like(enc_inp[0], dtype=np.int32, name="GO")]
+ enc_inp[:-1])
prev_mem = tf.zeros((batch_size, memory_dim))
cell = rnn_cell.GRUCell(memory_dim)
dec_outputs, dec_memory = seq2seq.embedding_rnn_seq2seq(enc_inp, dec_inp, cell, vocab_size, vocab_size)
loss = seq2seq.sequence_loss(dec_outputs, labels, weights, vocab_size)
tf.scalar_summary("loss", loss)
magnitude = tf.sqrt(tf.reduce_sum(tf.square(dec_outputs[1])))
tf.scalar_summary("magnitude at t=1", magnitude)
summary_op = tf.merge_all_summaries()
logdir = '~/'
summary_writer = tf.train.SummaryWriter(logdir, sess.graph_def)
learning_rate = 0.05
momentum = 0.9
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum)
train_op = optimizer.minimize(loss)
def train_batch(batch_size):
X = [np.random.choice(vocab_size, size=(seq_length,), replace=False)
for _ in range(batch_size)]
Y = X[:]
X = np.array(X).T
Y = np.array(Y).T
feed_dict = {enc_inp[t]: X[t] for t in range(seq_length)}
feed_dict.update({labels[t]: Y[t] for t in range(seq_length)})
_, loss_t, summary = sess.run([train_op, loss, summary_op], feed_dict)
return loss_t, summary
with tf.device('/gpu:0'):
sess.run(tf.initialize_all_variables())
for t in range(500):
loss_t, summary = train_batch(batch_size)
summary_writer.add_summary(summary, t)
summary_writer.flush()
X_batch = [np.random.choice(vocab_size, size=(seq_length,), replace=False)
for _ in range(10)]
X_batch = np.array(X_batch).T
feed_dict = {enc_inp[t]: X_batch[t] for t in range(seq_length)}
dec_outputs_batch = sess.run(dec_outputs, feed_dict)
print(X_batch)
[logits_t.argmax(axis=1) for logits_t in dec_outputs_batch]
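# Editor's sketch (not in the original notebook): how many tokens does the
# autoencoder reproduce exactly? dec_outputs_batch is a list of (batch, vocab)
# logit arrays, one per time step, and X_batch holds the true tokens time-major.
Y_pred = np.array([logits_t.argmax(axis=1) for logits_t in dec_outputs_batch])
print "Token accuracy:", np.mean(Y_pred == X_batch)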
tmp_def = rename_nodes(sess.graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiply 2 matrices
Step2: Sessions must be closed to release resources. We may use the 'with' syntax to close sessions automatically when completed.
Step3: Here we have included a device reference, which will determine which GPU to use for operations. Indexing of devices starts at 0.
Step4: Linear Regression
Step5: We can initialize input Ops using the placeholder function
Step6: We also create a variable for the weights and note that a NumPy array is convertible to a Tensor.
Step7: Our approach here is to perform gradient descent to update a predictor, y_pred, using the least squares cost function. Updating y_pred is simply done through a matrix multiplication similar to what we have performed earlier.
Step12: The initial predictor has little relation to the data.
Step13: Check you're able to navigate around TensorBoard and navigate to the items below visualizing the graph, weights, and gradient descent parameters.
Step14: We may now define a helper function calling the convolution with a stride of one and zero padded to match the input and output size and standard 2x2 max pooling layers. Under the hood, the TensorFlow functions use the NVIDIA cuDNN (CUDA Deep Neural Network) library to perform assembly optimized implementations on the GPU.
Step15: Convolutional + Pooling Layers
Step16: Regularization / Dropout Layer Avoids Overfitting
Step17: Softmax Layer Produces Class Probabilities
Step18: We apply a Dropout layer, which undersamples the neurons during training to regularize (reduce overfitting) of our model.
Step19: Now try tuning the model for better performance. There are many options
Step20: For each time point, we define an associated Tensor and label. Finally, a weights constant is invariant with respect to time.
Step21: We have defined a decoder input with the name "GO" and dropped the final value of the encoder. We now initialize the seq2seq embedding structure with the previously defined values and apply a loss function that is the cross-entropy across each item in the sequence.
Step22: We specify the outputs during training as the loss and the magnitude of activations.
Step23: We specify the learning rate and momentum to our momentum operator.
Step24: What would happen if we tripled our learning rate and momentum? (answer at end).
Step25: We can now test our lower dimensional autoencoder by passing data through the embedding to determine if the similar input was recovered.
|
14,523 | <ASSISTANT_TASK:>
Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
numbers = [int(number) for number in numbers_str.split(',')]
max(numbers)
sorted(numbers)[10:]
threes = []
for item in numbers:
if item %3 == 0:
threes.append(item)
sorted(threes)
from math import sqrt
[sqrt(item) for item in numbers if item < 100]
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
[planet['name'] for planet in planets if int(planet['diameter']) > 4]
sum(planet['mass'] for planet in planets)
[planet['name'] for planet in planets if 'giant' in planet['type']]
#[planet['name'] for planet in sorted(planets)]
#sorted(iterable[, key='mass'][, reverse])
from operator import itemgetter
[planet['name'] for planet in sorted(planets, key=itemgetter('moons'))]
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
[line for line in poem_lines if re.search(r"\b\w\w\w\w\b \b\w\w\w\w\b", line)]
[line for line in poem_lines if re.search(r"\b\w\w\w\w\w\b(?!..)", line)]
all_lines = " ".join(poem_lines)
re.findall(r"I (\b\w+\b)", all_lines)
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
menu = []
for item in entrees:
    dish = {}
    # Capture the dish name, the price, and an optional trailing "- v" vegetarian flag.
    match = re.search(r"^(.*) \$(\d+\.\d{2})( - v)?$", item)
    dish['name'] = match.group(1)
    dish['price'] = float(match.group(2))
    dish['vegetarian'] = match.group(3) is not None
    menu.append(dish)
menu
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: Problem set #3
Step11: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
Step12: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step13: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step14: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step15: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step16: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
|
14,524 | <ASSISTANT_TASK:>
Python Code:
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
import numpy as np
from scipy import stats
import cotede
output_notebook()
# Number of samples
N = 3000
# True mean and standard deviation of this dataset
mu, sigma = 0, 1
# Let's fix the random seed so everyone gets the same result
np.random.seed(42)
t = np.arange(N)
x = np.random.normal(mu, sigma, N)
# w = np.blackman(11)
# x = np.convolve(x, w, 'same')
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t, x, size=8, line_color="orange", fill_color="orange", fill_alpha=0.5)
show(p) # show the results
def plot_hist(hist, edges):
    """Plot a histogram

    Create a histogram from the output of numpy.histogram().
    We will create several histograms in this notebook, so let's save this
    as a function to reuse this code.
    """
#title = 'test'
# p = figure(title=title, tools='', background_fill_color="#fafafa")
p = figure(plot_width=750, plot_height=300,
tools='', background_fill_color="#fafafa")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:],
fill_color="navy", line_color="white", alpha=0.5)
# p.line(x, pdf, line_color="#ff8888", line_width=4, alpha=0.7, legend_label="PDF")
# p.line(x, cdf, line_color="orange", line_width=2, alpha=0.7, legend_label="CDF")
p.y_range.start = 0
# p.legend.location = "center_right"
# p.legend.background_fill_color = "#fefefe"
p.xaxis.axis_label = 'x'
p.yaxis.axis_label = 'Pr(x)'
p.grid.grid_line_color="white"
return p
hist, edges = np.histogram(x, density=True, bins=50)
p = plot_hist(hist, edges)
show(p)
mu_estimated, sigma_estimated = stats.norm.fit(x)
print("Estimated mean: {:.3f}, and standard deviation: {:.3f}".format(mu_estimated, sigma_estimated))
x_ref = np.linspace(x.min(), x.max(), 1000)
pdf = stats.norm.pdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
# sf = stats.norm.sf(x_ref, loc=mu_estimated, scale=sigma_estimated)
p = plot_hist(hist, edges)
p.line(x_ref, pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
# p.line(x_ref, sf, line_color="red", line_width=8, alpha=0.7, legend_label="SF")
show(p)
N_bad = 5
idx = np.random.permutation(x.size)[:N_bad]
x[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
print(sorted(x[idx]))
idx_good = [tn not in idx for tn in t]
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Some bad measurements")
p.circle(t[idx_good], x[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
# p.line([0, N], 2*[-6 * sigma], line_color="orange", line_width=3, alpha=0.7)
# p.line([0, N], 2*[6 * sigma], line_color="orange", line_width=3, alpha=0.7)
show(p) # show the results
mu_estimated, sigma_estimated = stats.norm.fit(x)
print("Estimated mean: {:.3f}, and standard deviation: {:.3f}".format(mu_estimated, sigma_estimated))
x_ref = np.linspace(x.min(), x.max(), 1000)
pdf = stats.norm.pdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
p = plot_hist(hist, edges)
p.line(x_ref, pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
# p.line(x_ref, sf, line_color="red", line_width=8, alpha=0.7, legend_label="SF")
p.triangle(x[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
x2 = x + 2 * np.sin(2 * np.pi * t/1000)
x2[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t[idx_good], x2[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x2[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p)
mu_estimated, sigma_estimated = stats.norm.fit(x2)
print("Estimated mean: {:.3f}, and standard deviation: {:.3f}".format(mu_estimated, sigma_estimated))
x_ref = np.linspace(x.min(), x.max(), 1000)
pdf = stats.norm.pdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
hist, edges = np.histogram(x2, density=True, bins=50)
p = plot_hist(hist, edges)
p.line(x_ref, pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
# p.line(x_ref, sf, line_color="red", line_width=8, alpha=0.7, legend_label="SF")
p.triangle(x2[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
import cotede.qctests
y_gradient = cotede.qctests.gradient(x2)
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Spike")
p.circle(t[idx_good], y_gradient[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], y_gradient[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
import cotede.qctests
y_spike = np.abs(cotede.qctests.tukey53H(x2))
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Spike")
p.circle(t[idx_good], y_spike[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], y_spike[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
gradient_mu, gradient_sigma = stats.norm.fit(y_gradient[np.isfinite(y_gradient)])
gradient_mu, gradient_sigma
gradient_mu, gradient_sigma = stats.norm.fit(y_gradient[np.isfinite(y_gradient)])
y_ref = np.linspace(np.nanmin(y_gradient), np.nanmax(y_gradient), 50)
gradient_pdf = stats.norm.pdf(y_ref, loc=gradient_mu, scale=gradient_sigma)
gradient_hist, gradient_edges = np.histogram(y_gradient[np.isfinite(y_gradient)], density=True, bins=50)
p = plot_hist(gradient_hist, gradient_edges)
p.line(y_ref, gradient_pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
p.triangle(y_gradient[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
spike_mu, spike_sigma = stats.norm.fit(y_spike[np.isfinite(y_spike)])
y_ref = np.linspace(np.nanmin(y_spike), np.nanmax(y_spike), 50)
spike_pdf = stats.norm.pdf(y_ref, loc=spike_mu, scale=spike_sigma)
spike_hist, spike_edges = np.histogram(y_spike[np.isfinite(y_spike)], density=True, bins=50)
p = plot_hist(spike_hist, spike_edges)
p.line(y_ref, spike_pdf, line_color="orange", line_width=8, alpha=0.7, legend_label="PDF")
p.triangle(y_spike[idx], 0.05, size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
y_gradient = cotede.qctests.gradient(x2)
p = figure(plot_width=750, plot_height=300, title="Spike")
# Plot the measured values (x2) against their gradient feature.
p.circle(x2[idx_good], y_gradient[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(x2[idx], y_gradient[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
x3 = x/20 + 2 * np.sin(2 * np.pi * t/2000)
# x2[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t[idx_good], x2[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x2[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p)
x3 = x/20 + 2 * np.cos(2 * np.pi * t/6000)
x3[1150:1250] += np.random.normal(0, .2, 100)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t[idx_good], x3[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
# p.triangle(t[idx], x3[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p) # show the results
y4 = cotede.qctests.rate_of_change(x3)
p = figure(plot_width=750, plot_height=300)
p.circle(t, y4, size=8, line_color="green", fill_color="green", fill_alpha=0.3)
# p.triangle(t[idx], x3[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
show(p)
# Exploratory checks: look at the finite values of the spike feature computed above
# (an earlier draft referenced an undefined `y` here).
y_spike[np.isfinite(y_spike)]
import matplotlib.pyplot as plt
plt.hist(y_spike[np.isfinite(y_spike)])
spike_hist
stats.norm.pdf(x[idx], loc=mu_estimated, scale=sigma_estimated)
pdf = stats.norm.cdf(x_ref, loc=mu_estimated, scale=sigma_estimated)
pdf
!pip install seabird
from seabird import fCNV
data = fCNV('/Users/castelao/work/science/articles/cotedepaper/data/dPIRX010.cnv')
p = figure(plot_width=500, plot_height=600)
p.circle(data['TEMP'], -data['PRES'], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
show(p)
plt.hist(cotede.qctests.rate_of_change(data['TEMP']), 50)
# Number of samples
N = 300
N_bad = 24
# True mean and standard deviation of this dataset
mu, sigma = 0, 0.1
# Let's fix the random seed so everyone gets the same result
np.random.seed(42)
t = np.arange(N)
noise = np.random.normal(mu, sigma, N)
x = 3 * np.sin(2 * np.pi * t / 190 + 0.3) + noise
chunk = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
x[160:160+chunk.size] += chunk
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t, x, size=8, line_color="orange", fill_color="orange", fill_alpha=0.5, legend_label="Good values")
# p.triangle(data["epoch"][idx_bad], data["water_level"][idx_bad], size=12, line_color="red", fill_color="red", fill_alpha=0.8, legend_label="Bad values")
show(p)
# Number of samples
N = 3000
# True mean and standard deviation of this dataset
mu, sigma = 0, 1
# Let's fix the random seed so everyone gets the same result
np.random.seed(42)
t = np.arange(N)
x = np.random.normal(mu, sigma, N)
x = np.cumsum(x-np.mean(x))
np.mean(x)
# A time series with the data
p = figure(plot_width=750, plot_height=300)
p.circle(t, x, size=8, line_color="orange", fill_color="orange", fill_alpha=0.5)
show(p) # show the results
N_bad = 5
idx = np.random.permutation(x.size)[:N_bad]
x[idx] = np.random.uniform(mu-10*sigma, mu+10*sigma, N_bad)
print(sorted(x[idx]))
x[idx]
idx_good = [tn not in idx for tn in t]
# A time series with the data
p = figure(plot_width=750, plot_height=300, title="Some bad measurements")
p.circle(t[idx_good], x[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p.triangle(t[idx], x[idx], size=12, line_color="red", fill_color="red", fill_alpha=0.8)
# p.line([0, N], 2*[-6 * sigma], line_color="orange", line_width=3, alpha=0.7)
# p.line([0, N], 2*[6 * sigma], line_color="orange", line_width=3, alpha=0.7)
show(p) # show the results
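# Editor's sketch of the global-range / "climatology" idea discussed in the text:
# for this random-walk series a plain range check on x is not very informative, but
# the same idea applied to the increments (a rate-of-change style test, as in the
# cells above) exposes the inserted bad values. The 6-sigma threshold is illustrative.
increments = np.abs(np.diff(x))
flag_suspect = increments > 6 * sigma
print("Flagged %d of %d increments" % (flag_suspect.sum(), increments.size))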
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Synthetic data
Step3: What does this dataset look like?
Step4: Data Distribution
Step5: We know that this dataset has a normal distribution, so we can approximate it to a Gaussian.
Step6: Bad data
Step7: Climatology Test
Step8: Most of the bad data is clearly distinct from the good data pattern, but is inside the feasible range so the climatology can't do much to distinguish the good from bad data.
Step9: The spike test projects the original data into a new space, and this projection is commonly called a "feature" in the machine learning world. Note that the spike feature allows us to better distinguish the good data from the bad data.
Step10: Climatology Test
|
14,525 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import linalg
from matplotlib import pyplot as plt
%matplotlib inline
A = np.array([[1, 0.5],[0.5, 1]])
x = np.array([1.,0.])
A = np.array([[1., 0.5,-0.1],[0.5, 1.,10.0],[2.,3.,5.]])
x = np.array([1.,0.,0.])
print("A =\n",A)
print("x =",x)
def power_iteration(A, x, k, verbose=False):
    """Program 12.1 Power iteration
    Computes dominant eigenvector of square matrix
    Input: matrix A, initial (nonzero) vector x, number of steps k
    Output: dominant eigenvalue lam, eigenvector u
    """
if verbose: print("Power Iteration Method\n%s"%('='*80))
for j in range(k):
u = x/np.linalg.norm(x)
x = np.dot(A, u)
lam = np.dot(u, x) #not really necessary to compute it at each iteration
if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T)))
u = x/np.linalg.norm(x)
if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T)))
return (lam, u)
# Testing algorithm
lam, u = power_iteration(A, x, 20, verbose=True)
print("lambda = {0}".format(lam))
print("u (dominant eigenvector) = {0}".format(u))
def inverse_power_iteration(A, x, s, k, verbose=False):
    """Program 12.2 Inverse Power iteration
    Computes eigenvector of square matrix nearest to input s
    Input: matrix A, initial (nonzero) vector x, shift s, number of steps k
    Output: dominant eigenvalue lam, eigenvector of inv(A-sI)
    """
if verbose: print("Inverse Power Iteration Method\n%s"%('='*80))
As = A - s*np.eye(*A.shape)
for j in range(k):
u = x/np.linalg.norm(x)
x = np.linalg.solve(As, u) # Critical line!
lam = np.dot(u.T, x)
if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,1./lam+s,str(u.T)))
u = x/np.linalg.norm(x)
if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,1./lam+s,str(u.T)))
return (1./lam+s, u)
# Testing algorithm
lam, u = inverse_power_iteration(A, x, s=1./4, k=10, verbose=True)
print("lambda = {0}".format(lam))
print("v = {0}".format(u))
def rqi(A, x, k, verbose=False):
    """Program 12.3 Rayleigh Quotient Iteration
    Input: matrix A, initial (nonzero) vector x, number of steps k
    Output: eigenvalue lam, eigenvector of inv(A-sI)
    """
if verbose: print("Rayleigh Quotient Iteration\n%s"%('='*80))
for j in range(k):
u = x/np.linalg.norm(x)
lam = np.dot(u.T, np.dot(A, u))
try:
x = np.linalg.solve(A -lam*np.eye(*A.shape), u)
        except np.linalg.LinAlgError:
break
if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T)))
u = x/np.linalg.norm(x)
lam = float(np.dot(u.T, np.dot(A, u)))
if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T)))
return (lam, u)
# Testing algorithm
lam, v = rqi(A, x, k=2)
print("lambda = {0}".format(lam))
print("v = {0}".format(v))
# Full matrices
from scipy import linalg as LA
N = 3
Aux = np.random.rand(N,N)
A = Aux + Aux.T # symmetric, so we'll deal with real eigs.
print(LA.eigvals(A)) # Only the eigenvalues, A not necessarily symmetric
print("*"*80)
print(LA.eigvalsh(A)) # Only the eigenvalues, A symmetric
print("*"*80)
print(LA.eig(A)) # All the eigenvalues and eigenvectors, A not necessarily symmetric
print("*"*80)
print(LA.eigh(A)) # All the eigenvalues and eigenvectors, A symmetric (faster)
print("*"*80)
lambdas, V = LA.eigh(A) # All the eigenvalues and eigenvectors, A symmetric (faster)
l1 = lambdas[0]
v1 = V[:,0]
print(l1)
print(v1)
print(np.dot(A, v1))
print(l1*v1)
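# Editor's cross-check (sketch): the dominant eigenvalue from power_iteration above
# should agree with the largest-magnitude eigenvalue reported by SciPy for the same
# random symmetric A. The 100 iterations below are an arbitrary choice.
lam_pi, u_pi = power_iteration(A, np.random.rand(N), 100)
lam_all = LA.eigvalsh(A)
print("power iteration: %f, scipy: %f" % (lam_pi, lam_all[np.argmax(np.abs(lam_all))]))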
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test matrix and vector
Step3: <div id='pi' />
Step5: <div id='invpi' />
Step7: <div id='rq' />
Step8: Questions
Step9: <div id='sp' />
|
14,526 | <ASSISTANT_TASK:>
Python Code:
import pyensae
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import pyensae
import pyensae.datasource
pyensae.datasource.download_data("velib_vanves.zip", website = "xd")
import pandas
df = pandas.read_csv("velib_vanves.txt",sep="\t")
df.head(n=2)
from pyensae.sql import import_flatfile_into_database
import_flatfile_into_database("velib_vanves.db3", "velib_vanves.txt", add_key="key")
import os
os.listdir(".")
try:
from pymyinstall.installcustom import install_sqlitespy
exe = install_sqlitespy()
except:
# we skip an exception
# the website can be down...
exe = None
exe
if exe:
from pyquickhelper import run_cmd
run_cmd("SQLiteSpy.exe velib_vanves.db3")
from pyquickhelper.helpgen import NbImage
NbImage('img_nb_sqlitespy.png')
sql = """SELECT * FROM velib_vanves WHERE key IN ({0})"""
import random
from pyquickhelper.loghelper import noLOG
from pyensae.sql import Database
db = Database("velib_vanves.db3", LOG = noLOG)
db.connect()
mx = db.execute_view("SELECT MAX(key) FROM velib_vanves")[0][0]
rnd_ids = [ random.randint(1,mx) for i in range(0,100) ] # liste de 100 id aléatoires
strids = ",".join( str(_) for _ in rnd_ids )
res = db.execute_view(sql.format (strids))
df = db.to_df(sql.format (strids))
db.close()
df.head()[["key","last_update","available_bike_stands","available_bikes"]]
with open("temp_big_file.txt","w") as f :
f.write("c1\tc2\tc3\n")
for i in range(0,10000000):
x = [ i, random.random(), random.random() ]
s = [ str(_) for _ in x ]
f.write( "\t".join(s) + "\n" )
os.stat("temp_big_file.txt").st_size
import pandas,time
t = time.perf_counter()
df = pandas.read_csv("temp_big_file.txt",sep="\t")
print("duration (s)",time.perf_counter()-t)
t = time.perf_counter()
df.to_pickle("temp_big_file.bin")
print("duration (s)",time.perf_counter()-t)
t = time.perf_counter()
df = pandas.read_pickle("temp_big_file.bin")
print("duration (s)",time.perf_counter()-t)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mix SQLite and DataFrame
Step2: As this file is small (just an example), let's see how it looks like with a DataFrame.
Step3: Then we import it into a SQLite3 database. The following function automatically guesses the table schema.
Step4: We check the database exists
Step5: On Windows, you can use SQLiteSpy to visualize the created table. We use pymysintall to download it.
Step6: We just need to run it (see run_cmd).
Step7: You should be able to see something like (on Windows)
Step9: It is easier to use that tool to extract a sample of the data. Once it is ready, you can execute the SQL query in Python and converts the results into a DataFrame. The following code extracts a random sample from the original sets.
Step10: Memory Dump
Step11: It is slow considering that many datasets contain many more features. But we can speed it up by doing a kind of memory dump with to_pickle.
Step12: And we reload it with read_pickle
|
14,527 | <ASSISTANT_TASK:>
Python Code:
organism = "E. Coli"
treatment = "salt stress"
todays_headline = "Python bioformaticians among top paid professionals in the country"
print todays_headline
print workshop_venue
workshop_venue = "MSU Baroda"
print workshop_venue
print organism + treatment
print organism + " in " + treatment
experiment = organism + " in " + treatment
print experiment
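# Editor's note: string formatting is another readable way to build the same sentence;
# both lines below are equivalent to the concatenation used above.
print "{0} in {1}".format(organism, treatment)
print "%s in %s" % (organism, treatment)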
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here organism, treatment, todays_headline are all variable names
Step2: If you try to print or anyway use variable in which you have not stored any value, you will get an error
Step3: Now lets do something more interesting with variables
Step4: WOW, strings got joined but not in a very readable way!
Step5: Now thats better, we have a better sentence like structure
|
14,528 | <ASSISTANT_TASK:>
Python Code:
from __future__ import division
%pylab inline
import numpy as np
_=np.random.seed(123456)
import numpy as np
from scipy import stats
rv = stats.beta(3,2)
xsamples = rv.rvs(50)
%matplotlib inline
from matplotlib.pylab import subplots
fig,ax = subplots()
fig.set_size_inches(8,4)
_=ax.hist(xsamples,normed=True,color='gray')
ax2 = ax.twinx()
_=ax2.plot(np.linspace(0,1,100),rv.pdf(np.linspace(0,1,100)),lw=3,color='k')
_=ax.set_xlabel('$x$',fontsize=28)
_=ax2.set_ylabel(' $y$',fontsize=28,rotation='horizontal')
fig.tight_layout()
#fig.savefig('fig-statistics/Bootstrap_001.png')
yboot = np.random.choice(xsamples,(100,50))
yboot_mn = yboot.mean()
np.std(yboot.mean(axis=1)) # approx sqrt(1/1250)
fig,ax = subplots()
fig.set_size_inches(8,4)
_=ax.hist(yboot.mean(axis=1),normed=True,color='gray')
_=ax.set_title('Bootstrap std of sample mean %3.3f vs actual %3.3f'%
(np.std(yboot.mean(axis=1)),np.sqrt(1/1250.)))
fig.tight_layout()
#fig.savefig('fig-statistics/Bootstrap_002.png')
import sympy as S
import sympy.stats
for i in range(50): # 50 samples
# load sympy.stats Beta random variables
# into global namespace using exec
execstring = "x%d = S.stats.Beta('x'+str(%d),3,2)"%(i,i)
exec(execstring)
# populate xlist with the sympy.stats random variables
# from above
xlist = [eval('x%d'%(i)) for i in range(50) ]
# compute sample mean
sample_mean = sum(xlist)/len(xlist)
# compute expectation of sample mean
sample_mean_1 = S.stats.E(sample_mean)
# compute 2nd moment of sample mean
sample_mean_2 = S.stats.E(S.expand(sample_mean**2))
# standard deviation of sample mean
# use sympy sqrt function
sigma_smn = S.sqrt(sample_mean_2-sample_mean_1**2) # 1/sqrt(1250)
print sigma_smn
import numpy as np
np.random.seed(123)
from scipy import stats
import numpy as np
p= 0.25 # true head-up probability
x = stats.bernoulli(p).rvs(10)
print x
phat = x.mean()
print phat
print (1-2*phat)**2*(phat)**2/10.
phat_b=np.random.choice(x,(50,10)).mean(1)
print np.var(phat_b*(1-phat_b))
import sympy as S
from sympy.stats import E, Bernoulli
xdata =[Bernoulli(i,p) for i in S.symbols('x:10')]
ph = sum(xdata)/float(len(xdata))
g = ph*(1-ph)
print E(g**2) - E(g)**2
rv = stats.norm(0,2)
xsamples = rv.rvs(45)
# estimate mean and var from xsamples
mn_ = np.mean(xsamples)
std_ = np.std(xsamples)
# bootstrap from assumed normal distribution with
# mn_,std_ as parameters
rvb = stats.norm(mn_,std_) #plug-in distribution
yboot = rvb.rvs(1000)
# MLE-Plugin Variance of the sample mean
print 2*(std_**2)**2/9. # MLE plugin
# Bootstrap variance of the sample mean
print yboot.var()
# True variance of sample mean
print 2*(2**2)**2/9.
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As we have seen, outside of some toy problems, it can be very difficult or
Step2: Because this is simulation data, we already know that the
Step3:
Step4: and the bootstrap estimate is therefore,
Step5: Figure shows the distribution of computed
Step6:
Step7: Programming Tip.
Step8: The maximum likelihood estimator of $p$ is $\hat{p}=\sum X_i/n$,
Step9: Then, plugging this into the delta method approximant above,
Step10: Now, let's try this using the bootstrap estimate of the variance
Step11: This shows that the delta method's estimated variance
Step12: Programming Tip.
Step13: This case is generally representative --- the delta method tends
Step14:
|
14,529 | <ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.SubjectUserName, o.SubjectUserName, o.ServiceName, a.IpAddress
FROM sdTable o
INNER JOIN (
SELECT Hostname,TargetUserName,TargetLogonId,IpAddress
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND NOT TargetUserName LIKE "%$"
) a
ON o.SubjectLogonId = a.TargetLogonId
WHERE LOWER(o.Channel) = "security"
AND o.EventID = 4697
'''
)
df.show(10,False)
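# The query above ties each service installation (Security 4697) back to the network
# logon (4624, LogonType 3) that created the logon session -- the remote service
# creation pattern this dataset demonstrates. As an optional follow-up sketch
# (plain Spark SQL over the same table), count how often each service name appears:
df_summary = spark.sql(
    '''
    SELECT ServiceName, COUNT(*) AS Installs
    FROM sdTable
    WHERE LOWER(Channel) = "security" AND EventID = 4697
    GROUP BY ServiceName
    '''
)
df_summary.show(10,False)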
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Security Dataset
Step2: Analytic I
|
14,530 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import sklearn
X, y = load_data()
assert type(X) == np.ndarray
assert type(y) == np.ndarray
# fit, then predict X
from sklearn.svm import SVR
svr_rbf = SVR(kernel='rbf')
svr_rbf.fit(X, y)
predict = svr_rbf.predict(X)
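# `load_data` is not defined in this snippet; a hypothetical stand-in for local
# experimentation could be (any regression dataset returning NumPy arrays works):
# from sklearn.datasets import make_regression
# def load_data():
#     return make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)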
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,531 | <ASSISTANT_TASK:>
Python Code:
# Import modules
import math
import sympy as sym
import numpy as np
import scipy
import matplotlib.pyplot as plt
import plotly
import plotly.plotly as ply
import plotly.figure_factory as ply_ff
from IPython.display import Math
from IPython.display import display
# Startup plotly
plotly.offline.init_notebook_mode(connected=True)
''' Fix MathJax issue '''
# The polling here is to ensure that plotly.js has already been loaded before
# setting display alignment in order to avoid a race condition.
from IPython.core.display import display, HTML
display(HTML(
'<script>'
'var waitForPlotly = setInterval( function() {'
'if( typeof(window.Plotly) !== "undefined" ){'
'MathJax.Hub.Config({ SVG: { font: "STIX-Web" }, displayAlign: "center" });'
'MathJax.Hub.Queue(["setRenderer", MathJax.Hub, "SVG"]);'
'clearInterval(waitForPlotly);'
'}}, 250 );'
'</script>'
))
# Parameters
x = 2
h = 0.1
# Symbolic computation
sym_x = sym.Symbol('x')
sym_deri_x1 = sym.diff(1 / sym_x, sym_x)
sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()
# Approximation
f = lambda x : 1 / x
deri_x1 = (f(x + h) - f(x)) / h
# Comparison
print('approximate = %f, real value = %f, backward error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )
# Parameters
x = 2
h = 0.1
f = lambda x : 1 / x
# Symbolic computation
sym_x = sym.Symbol('x')
sym_deri_x1 = sym.diff(1 / sym_x, sym_x)
sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()
# Approximation
deri_x1 = (f(x + h) - f(x - h)) / (2 * h)
# Comparison
print('approximate = %f, real value = %f, backward error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )
# Parameters
f = lambda x : math.exp(x)
real_value = 1
h_msg = "$10^{-%d}$"
twp_deri_x1 = lambda x, h : ( f(x + h) - f(x) ) / h
thp_deri_x1 = lambda x, h : ( f(x + h) - f(x - h) ) / (2 * h)
data = [
["h",
"$f'(x) \\approx \\frac{e^{x+h} - e^x}{h}$",
"error",
"$f'(x) \\approx \\frac{e^{x+h} - e^{x-h}}{2h}$",
"error"],
]
for i in range(1,10):
h = pow(10, -i)
twp_deri_x1_value = twp_deri_x1(0, h)
thp_deri_x1_value = thp_deri_x1(0, h)
row = ["", "", "", "", ""]
row[0] = h_msg %i
row[1] = '%.14f' %twp_deri_x1_value
row[2] = '%.14f' %abs(twp_deri_x1_value - real_value)
row[3] = '%.14f' %thp_deri_x1_value
row[4] = '%.14f' %abs(thp_deri_x1_value - real_value)
data.append(row)
table = ply_ff.create_table(data)
plotly.offline.iplot(table, show_link=False)
sym.init_printing(use_latex=True)
x = sym.Symbol('x')
dx = sym.diff(sym.exp(sym.sin(x)), x)
Math('Derivative : %s' %sym.latex(dx) )
# Apply Trapezoid Rule
trapz = scipy.integrate.trapz([np.log(1), np.log(2)], [1, 2])
# Evaluate the error term of Trapezoid Rule
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 2)
trapz_err = abs(expr.subs(sym_x, 1).evalf() / 12)
# Print out results
print('Trapezoid rule : %f and upper bound error : %f' %(trapz, trapz_err) )
# Apply Simpson's Rule
area = scipy.integrate.simps([np.log(1), np.log(1.5), np.log(2)], [1, 1.5, 2])
# Evaluate the error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 4)
simps_err = abs( pow(0.5, 5) / 90 * expr.subs(sym_x, 1).evalf() )
# Print out results
print('Simpson\'s rule : %f and upper bound error : %f' %(area, simps_err) )
# Apply composite Trapezoid Rule
x = np.linspace(1, 2, 5)
y = np.log(x)
trapz = scipy.integrate.trapz(y, x)
# Error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 2)
trapz_err = abs( (2 - 1) * pow(0.25, 2) / 12 * expr.subs(sym_x, 1).evalf() )
print('Trapezoid Rule : %f, error = %f' %(trapz, trapz_err) )
# Apply composite Simpson's Rule
x = np.linspace(1, 2, 9)
y = np.log(x)
area = scipy.integrate.simps(y, x)
# Error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 4)
simps_err = abs( (2 - 1) * pow(0.125, 4) / 180 * expr.subs(sym_x, 1).evalf() )
print('Simpson\'s Rule : %f, error = %f' %(area, simps_err) )
# Parameters
m = 10
h = (1 - 0) / m
f = lambda x : np.sin(x) / x
mids = np.arange(0 + h/2, 1, h)
# Apply composite midpoint rule
area = h * np.sum(f(mids))
# Error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.sin(sym_x) / sym_x, sym_x, 2)
mid_err = abs( (1 - 0) * pow(h, 2) / 24 * expr.subs(sym_x, 1).evalf() )
# Print out
print('Composite Midpoint Rule : %.8f, error = %.8f' %(area, mid_err) )
def romberg(f, a, b, step):
R = np.zeros(step * step).reshape(step, step)
R[0][0] = (b - a) * (f(a) + f(b)) / 2
for j in range(1, step):
h = (b - a) / pow(2, j)
summ = 0
for i in range(1, pow(2, j - 1) + 1):
summ += h * f(a + (2 * i - 1) * h)
R[j][0] = 0.5 * R[j - 1][0] + summ
for k in range(1, j + 1):
R[j][k] = ( pow(4, k) * R[j][k - 1] - R[j - 1][k - 1] ) / ( pow(4, k) - 1 )
return R[step - 1][step - 1]
f = lambda x : np.log(x)
result = romberg(f, 1, 2, 4)
print('Romberg Integration : %f' %(result) )
f = lambda x : np.log(x)
result = scipy.integrate.romberg(f, 1, 2, show=True)
print('Romberg Integration : %f' %(result) )
''' Use Trapezoid Rule '''
def adaptive_quadrature(f, a, b, tol):
return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)
def adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):
c = (a + b) / 2
S = lambda x, y : (y - x) * (f(x) + f(y)) / 2
if abs( S(a, b) - S(a, c) - S(c, b) ) < 3 * tol * (b - a) / (orig_b - orig_a) or deep > 20 :
return S(a, c) + S(c, b)
else:
return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)
''' Use Simpon's Rule '''
def adaptive_quadrature(f, a, b, tol):
return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)
def adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):
c = (a + b) / 2
S = lambda x, y : (y - x) * ( f(x) + 4 * f((x + y) / 2) + f(y) ) / 6
if abs( S(a, b) - S(a, c) - S(c, b) ) < 15 * tol or deep > 20 :
return S(a, c) + S(c, b)
else:
return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)
f = lambda x : 1 + np.sin(np.exp(3 * x))
val = adaptive_quadrature(f, -1, 1, tol=1e-12)
print(val)
poly = scipy.special.legendre(2)
# Find roots of polynomials
comp = scipy.linalg.companion(poly)
roots = scipy.linalg.eig(comp)[0]
f = lambda x : np.exp(-np.power(x, 2) / 2)
quad = scipy.integrate.quadrature(f, -1, 1)
print(quad[0])
# Parametes
a = -1
b = 1
deg = 3
f = lambda x : np.exp( -np.power(x, 2) / 2 )
x, w = scipy.special.p_roots(deg) # Or use numpy.polynomial.legendre.leggauss
quad = np.sum(w * f(x))
print(quad)
# Parametes
a = 1
b = 2
deg = 4
f = lambda t : np.log( ((b - a) * t + b + a) / 2) * (b - a) / 2
x, w = scipy.special.p_roots(deg)
np.sum(w * f(x))
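# Editor's sketch of the "extrapolation for order n formula" section: combining an
# order-n approximation at h and h/2 as (2**n * F(h/2) - F(h)) / (2**n - 1) cancels
# the leading error term. Illustrated with the order-1 two-point formula for exp(x).
f = lambda x : np.exp(x)
two_point = lambda x, h : (f(x + h) - f(x)) / h
h = 0.1
extrapolated = (2 * two_point(0, h / 2) - two_point(0, h)) / (2 - 1)
print('two-point: %.8f, extrapolated: %.8f, true value: 1' % (two_point(0, h), extrapolated))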
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 5.1 Numerical Differentiation
Step2: Three-point centered-difference formula
Step3: Three-point centered-difference formula for second derivative
Step4: Extrapolation for order n formula
Step5: 5.2 Newton-Cotes Formulas For Numerical Integration
Step6: Composite Trapezoid Rule
Step7: Midpoint Rule
Step8: 5.3 Romberg Integration
Step9: Example
Step10: 5.4 Adaptive Quadrature
Step11: Example
Step12: 5.5 Gaussian Quadrature
Step13: Example
Step14: Example
|
14,532 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
%matplotlib inline
import os
from six.moves import urllib
import numpy as np
import pandas as pd
import warnings
from matplotlib import pyplot as plt
import seaborn as sns
from IPython.core.pylabtools import figsize
figsize(11, 9)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
def load_and_preprocess_radon_dataset(state='MN'):
  """Preprocess Radon dataset as done in "Bayesian Data Analysis" book.
  We filter to Minnesota data (919 examples) and preprocess to obtain the
  following features:
  - `log_uranium_ppm`: Log of soil uranium measurements.
  - `county`: Name of county in which the measurement was taken.
  - `floor`: Floor of house (0 for basement, 1 for first floor) on which the
    measurement was taken.
  The target variable is `log_radon`, the log of the Radon measurement in the
  house.
  """
ds = tfds.load('radon', split='train')
radon_data = tfds.as_dataframe(ds)
radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)
df = radon_data[radon_data.state==state.encode()].copy()
# For any missing or invalid activity readings, we'll use a value of `0.1`.
df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)
# Make county names look nice.
df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()
# Remap categories to start from 0 and end at max(category).
county_name = sorted(df.county.unique())
df['county'] = df.county.astype(
pd.api.types.CategoricalDtype(categories=county_name)).cat.codes
county_name = list(map(str.strip, county_name))
df['log_radon'] = df['radon'].apply(np.log)
df['log_uranium_ppm'] = df['Uppm'].apply(np.log)
df = df[['idnum', 'log_radon', 'floor', 'county', 'log_uranium_ppm']]
return df, county_name
radon, county_name = load_and_preprocess_radon_dataset()
# We'll use the following directory to store our preprocessed dataset.
CACHE_DIR = os.path.join(os.sep, 'tmp', 'radon')
# Save processed data. (So we can later read it in R.)
if not tf.gfile.Exists(CACHE_DIR):
tf.gfile.MakeDirs(CACHE_DIR)
with tf.gfile.Open(os.path.join(CACHE_DIR, 'radon.csv'), 'w') as f:
radon.to_csv(f, index=False)
radon.head()
fig, ax = plt.subplots(figsize=(22, 5));
county_freq = radon['county'].value_counts()
county_freq.plot(kind='bar', color='#436bad');
plt.xlabel('County index')
plt.ylabel('Number of radon readings')
plt.title('Number of radon readings per county', fontsize=16)
county_freq = np.array(list(zip(county_freq.index, county_freq.values)))  # We'll use this later.
fig, ax = plt.subplots(ncols=2, figsize=[10, 4]);
radon['log_radon'].plot(kind='density', ax=ax[0]);
ax[0].set_xlabel('log(radon)')
radon['floor'].value_counts().plot(kind='bar', ax=ax[1]);
ax[1].set_xlabel('Floor');
ax[1].set_ylabel('Count');
fig.subplots_adjust(wspace=0.25)
suppressMessages({
library('bayesplot')
library('data.table')
library('dplyr')
library('gfile')
library('ggplot2')
library('lattice')
library('lme4')
library('plyr')
library('rstanarm')
library('tidyverse')
RequireInitGoogle()
})
data = read_csv(gfile::GFile('/tmp/radon/radon.csv'))
head(data)
# https://github.com/stan-dev/example-models/wiki/ARM-Models-Sorted-by-Chapter
radon.model <- lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
summary(radon.model)
qqmath(ranef(radon.model, condVar=TRUE))
write.csv(as.data.frame(ranef(radon.model, condVar = TRUE)), '/tmp/radon/lme4_fit.csv')
fit <- stan_lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
fit
color_scheme_set("red")
ppc_dens_overlay(y = fit$y,
yrep = posterior_predict(fit, draws = 50))
color_scheme_set("brightblue")
ppc_intervals(
y = data$log_radon,
yrep = posterior_predict(fit),
x = data$county,
prob = 0.8
) +
labs(
x = "County",
y = "log radon",
title = "80% posterior predictive intervals \nvs observed log radon",
subtitle = "by county"
) +
panel_bg(fill = "gray95", color = NA) +
grid_lines(color = "white")
# Write the posterior samples (4000 for each variable) to a CSV.
write.csv(tidy(as.matrix(fit)), "/tmp/radon/stan_fit.csv")
with tf.gfile.Open('/tmp/radon/lme4_fit.csv', 'r') as f:
lme4_fit = pd.read_csv(f, index_col=0)
lme4_fit.head()
posterior_random_weights_lme4 = np.array(lme4_fit.condval, dtype=np.float32)
lme4_prior_scale = np.array(lme4_fit.condsd, dtype=np.float32)
print(posterior_random_weights_lme4.shape, lme4_prior_scale.shape)
with tf.Session() as sess:
lme4_dist = tfp.distributions.Independent(
tfp.distributions.Normal(
loc=posterior_random_weights_lme4,
scale=lme4_prior_scale),
reinterpreted_batch_ndims=1)
posterior_random_weights_lme4_final_ = sess.run(lme4_dist.sample(4000))
posterior_random_weights_lme4_final_.shape
with tf.gfile.Open('/tmp/radon/stan_fit.csv', 'r') as f:
samples = pd.read_csv(f, index_col=0)
samples.head()
posterior_random_weights_cols = [
col for col in samples.columns if 'b.log_uranium_ppm.county' in col
]
posterior_random_weights_final_stan = samples[
posterior_random_weights_cols].values
print(posterior_random_weights_final_stan.shape)
# Handy snippet to reset the global graph and global session.
with warnings.catch_warnings():
warnings.simplefilter('ignore')
tf.reset_default_graph()
try:
sess.close()
except:
pass
sess = tf.InteractiveSession()
inv_scale_transform = lambda y: np.log(y) # Not using TF here.
fwd_scale_transform = tf.exp
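# Scale parameters (the prior and likelihood standard deviations) are stored and
# optimized on the unconstrained log scale and mapped back to positive values with exp.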
def _make_weights_prior(num_counties, dtype):
"""Returns a `len(log_uranium_ppm)` batch of univariate Normal."""
raw_prior_scale = tf.get_variable(
name='raw_prior_scale',
initializer=np.array(inv_scale_transform(1.), dtype=dtype))
return tfp.distributions.Independent(
tfp.distributions.Normal(
loc=tf.zeros(num_counties, dtype=dtype),
scale=fwd_scale_transform(raw_prior_scale)),
reinterpreted_batch_ndims=1)
make_weights_prior = tf.make_template(
name_='make_weights_prior', func_=_make_weights_prior)
def _make_log_radon_likelihood(random_effect_weights, floor, county,
log_county_uranium_ppm, init_log_radon_stddev):
raw_likelihood_scale = tf.get_variable(
name='raw_likelihood_scale',
initializer=np.array(
inv_scale_transform(init_log_radon_stddev), dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', initializer=np.array([0., 1.], dtype=dtype))
fixed_effects = fixed_effect_weights[0] + fixed_effect_weights[1] * floor
random_effects = tf.gather(
random_effect_weights * log_county_uranium_ppm,
indices=tf.to_int32(county),
axis=-1)
linear_predictor = fixed_effects + random_effects
return tfp.distributions.Normal(
loc=linear_predictor, scale=fwd_scale_transform(raw_likelihood_scale))
make_log_radon_likelihood = tf.make_template(
name_='make_log_radon_likelihood', func_=_make_log_radon_likelihood)
def joint_log_prob(random_effect_weights, log_radon, floor, county,
log_county_uranium_ppm, dtype):
num_counties = len(log_county_uranium_ppm)
rv_weights = make_weights_prior(num_counties, dtype)
rv_radon = make_log_radon_likelihood(
random_effect_weights,
floor,
county,
log_county_uranium_ppm,
init_log_radon_stddev=radon.log_radon.values.std())
return (rv_weights.log_prob(random_effect_weights)
+ tf.reduce_sum(rv_radon.log_prob(log_radon), axis=-1))
# Specify unnormalized posterior.
dtype = np.float32
log_county_uranium_ppm = radon[
['county', 'log_uranium_ppm']].drop_duplicates().values[:, 1]
log_county_uranium_ppm = log_county_uranium_ppm.astype(dtype)
def unnormalized_posterior_log_prob(random_effect_weights):
return joint_log_prob(
random_effect_weights=random_effect_weights,
log_radon=dtype(radon.log_radon.values),
floor=dtype(radon.floor.values),
county=np.int32(radon.county.values),
log_county_uranium_ppm=log_county_uranium_ppm,
dtype=dtype)
# Set-up E-step.
step_size = tf.get_variable(
'step_size',
initializer=np.array(0.2, dtype=dtype),
trainable=False)
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size,
step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
num_adaptation_steps=None),
state_gradients_are_stopped=True)
init_random_weights = tf.placeholder(dtype, shape=[len(log_county_uranium_ppm)])
posterior_random_weights, kernel_results = tfp.mcmc.sample_chain(
num_results=3,
num_burnin_steps=0,
num_steps_between_results=0,
current_state=init_random_weights,
kernel=hmc)
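# E-step sampler: each call draws 3 HMC states of the random-effect weights,
# warm-started from `init_random_weights` (fed with the previous iteration's
# last sample in the training loop below).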
# Set-up M-step.
loss = -tf.reduce_mean(kernel_results.accepted_results.target_log_prob)
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
learning_rate=0.1,
global_step=global_step,
decay_steps=2,
decay_rate=0.99)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
# Initialize all variables.
init_op = tf.initialize_all_variables()
# Grab variable handles for diagnostic purposes.
with tf.variable_scope('make_weights_prior', reuse=True):
prior_scale = fwd_scale_transform(tf.get_variable(
name='raw_prior_scale', dtype=dtype))
with tf.variable_scope('make_log_radon_likelihood', reuse=True):
likelihood_scale = fwd_scale_transform(tf.get_variable(
name='raw_likelihood_scale', dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', dtype=dtype)
init_op.run()
w_ = np.zeros([len(log_county_uranium_ppm)], dtype=dtype)
%%time
maxiter = int(1500)
num_accepted = 0
num_drawn = 0
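# Stochastic-approximation EM: each iteration runs a short HMC chain over the
# random-effect weights (E-step) and then takes one Adam step on the fixed-effect
# weights and scale parameters (M-step).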
for i in range(maxiter):
[
_,
global_step_,
loss_,
posterior_random_weights_,
kernel_results_,
step_size_,
prior_scale_,
likelihood_scale_,
fixed_effect_weights_,
] = sess.run([
train_op,
global_step,
loss,
posterior_random_weights,
kernel_results,
step_size,
prior_scale,
likelihood_scale,
fixed_effect_weights,
], feed_dict={init_random_weights: w_})
w_ = posterior_random_weights_[-1, :]
num_accepted += kernel_results_.is_accepted.sum()
num_drawn += kernel_results_.is_accepted.size
acceptance_rate = num_accepted / num_drawn
if i % 100 == 0 or i == maxiter - 1:
print('global_step:{:>4} loss:{: 9.3f} acceptance:{:.4f} '
'step_size:{:.4f} prior_scale:{:.4f} likelihood_scale:{:.4f} '
'fixed_effect_weights:{}'.format(
global_step_, loss_.mean(), acceptance_rate, step_size_,
prior_scale_, likelihood_scale_, fixed_effect_weights_))
%%time
posterior_random_weights_final, kernel_results_final = tfp.mcmc.sample_chain(
num_results=int(15e3),
num_burnin_steps=int(1e3),
current_state=init_random_weights,
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size))
[
posterior_random_weights_final_,
kernel_results_final_,
] = sess.run([
posterior_random_weights_final,
kernel_results_final,
], feed_dict={init_random_weights: w_})
print('prior_scale: ', prior_scale_)
print('likelihood_scale: ', likelihood_scale_)
print('fixed_effect_weights: ', fixed_effect_weights_)
print('acceptance rate final: ', kernel_results_final_.is_accepted.mean())
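# Scale each sampled county weight by that county's log(uranium ppm) and order
# the columns by how many radon readings each county contributed.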
x = posterior_random_weights_final_ * log_county_uranium_ppm
I = county_freq[:, 0]
x = x[:, I]
cols = np.array(county_name)[I]
pw = pd.DataFrame(x)
pw.columns = cols
fig, ax = plt.subplots(figsize=(25, 4))
ax = pw.boxplot(rot=80, vert=True);
nrows = 17
ncols = 5
fig, ax = plt.subplots(nrows, ncols, figsize=(18, 21), sharey=True, sharex=True)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
ii = -1
for r in range(nrows):
for c in range(ncols):
ii += 1
idx = county_freq[ii, 0]
sns.kdeplot(
posterior_random_weights_final_[:, idx] * log_county_uranium_ppm[idx],
color='blue',
alpha=.3,
shade=True,
label='TFP',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_final_stan[:, idx] *
log_county_uranium_ppm[idx],
color='red',
alpha=.3,
shade=True,
label='Stan/rstanarm',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_lme4_final_[:, idx] *
log_county_uranium_ppm[idx],
color='#F4B400',
alpha=.7,
shade=False,
label='R/lme4',
ax=ax[r][c])
ax[r][c].vlines(
posterior_random_weights_lme4[idx] * log_county_uranium_ppm[idx],
0,
5,
color='#F4B400',
linestyle='--')
ax[r][c].set_title(county_name[idx] + ' ({})'.format(idx), y=.7)
ax[r][c].set_ylim(0, 5)
ax[r][c].set_xlim(-1., 1.)
ax[r][c].get_yaxis().set_visible(False)
if ii == 2:
ax[r][c].legend(bbox_to_anchor=(1.4, 1.7), fontsize=20, ncol=3)
else:
ax[r][c].legend_.remove()
fig.subplots_adjust(wspace=0.03, hspace=0.1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Mixed-Effect Regression in {TF Probability, R, Stan}
Step3: 2 Hierarchical Linear Model
Step4: 3.1 Know Thy Data
Step5: Conclusions
Step6: 5 HLM In Stan
Step7: Note
Step8: Note
Step9: Retrieve the point estimates and conditional standard deviations for the group random effects from lme4 for visualization later.
Step10: Draw samples for the county weights using the lme4 estimated means and standard deviations.
Step11: We also retrieve the posterior samples of the county weights from the Stan fit.
Step12: This Stan example shows how one would implement LMER in a style closer to TFP, i.e., by directly specifying the probabilistic model.
Step13: 6.1 Specify Model
Step15: The following function constructs our prior, $p(\beta|\sigma_C)$ where $\beta$ denotes the random-effect weights and $\sigma_C$ the standard deviation.
Step16: The following function constructs our likelihood, $p(y|x,\omega,\beta,\sigma_N)$ where $y,x$ denote response and evidence, $\omega,\beta$ denote fixed- and random-effect weights, and $\sigma_N$ the standard deviation.
Step17: Finally we use the prior and likelihood generators to construct the joint log-density.
Step18: 6.2 Training (Stochastic Approximation of Expectation Maximization)
Step19: We now complete the E-step setup by creating an HMC transition kernel.
Step20: We now set-up the M-step. This is essentially the same as an optimization one might do in TF.
Step21: We conclude with some housekeeping tasks. We must tell TF that all variables are initialized. We also create handles to our TF variables so we can print their values at each iteration of the procedure.
Step22: 6.3 Execute
Step23: Looks like after ~1500 steps, our estimates of the parameters have stabilized.
Step24: We now construct a box and whisker diagram of the $\beta_c \log(\text{UraniumPPM}_c)$ random-effect. We'll order the random-effects by decreasing county frequency.
Step25: From this box and whisker diagram, we observe that the variance of the county-level $\log(\text{UraniumPPM})$ random-effect increases as the county is less represented in the dataset. Intutively this makes sense--we should be less certain about the impact of a certain county if we have less evidence for it.
|
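Step 10 above (drawing county-weight samples from the lme4 conditional means and standard deviations) is handled in the row's code with a TFP Normal distribution; a plain-NumPy sketch of the same idea, using made-up placeholder values rather than the real lme4 output, would be:

import numpy as np

cond_means = np.array([0.10, -0.05, 0.02])  # placeholder for the lme4 `condval` column
cond_sds = np.array([0.08, 0.07, 0.09])     # placeholder for the lme4 `condsd` column
rng = np.random.default_rng(0)
county_weight_samples = rng.normal(loc=cond_means, scale=cond_sds,
                                   size=(4000, cond_means.size))  # 4000 draws per county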
14,533 | <ASSISTANT_TASK:>
Python Code:
from astropy.io import fits
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
from astropy.visualization import astropy_mpl_style
plt.style.use(astropy_mpl_style)
class Blob:
"""Class that defines a 'blob' in an image: the contour of a set of pixels
with values above a given threshold.
"""
def __init__(self, x, y):
"""Define a contour by its contour lines (a list of points in the xy
plane), the contour centroid, and its enclosed area.

Parameters
----------
x : list or array_like
    x-values of blob contour.
y : list or array_like
    y-values of blob contour.
"""
self.x = x
self.y = y
self.xc = np.mean(x)
self.yc = np.mean(y)
# Find the area inside the contour
self.area = 0.
n = len(x)
for i in range(0, n):
self.area += 0.5*(y[i]+y[i-1])*(x[i]-x[i-1])
def distance(self, blob):
"""Calculate the distance between the centroid of this blob contour and
another one in the xy plane.

Parameters
----------
blob : Blob
    A second blob.

Returns
-------
dist : float
    Euclidean distance between two blob centroids.
"""
return np.sqrt((self.xc - blob.xc)**2 + (self.yc-blob.yc)**2)
class BlobGroup:
"""A list of blobs that is grouped or associated in some way, i.e., if
their contour centroids are relatively close together.
"""
def __init__(self):
"""Initialize a list of stored blobs and the bounding rectangle which
defines the group.
"""
self.blobs = []
self.xmin = 1e10
self.xmax = -1e10
self.ymin = 1e10
self.ymax = -1e10
def addBlob(self, blob):
"""Add a blob to the group and enlarge the bounding rectangle of the
group.
"""
self.blobs.append(blob)
self.xmin = min(self.xmin, blob.x.min())
self.xmax = max(self.xmax, blob.x.max())
self.ymin = min(self.ymin, blob.y.min())
self.ymax = max(self.ymax, blob.y.max())
self.cov = None
def getBoundingBox(self):
"""Get the bounding rectangle of the group."""
return (self.xmin, self.xmax, self.ymin, self.ymax)
def getSquareBoundingBox(self):
"""Get the bounding rectangle, redefined to give it a square aspect
ratio.
"""
xmin, xmax, ymin, ymax = (self.xmin, self.xmax, self.ymin, self.ymax)
xL = np.abs(xmax - xmin)
yL = np.abs(ymax - ymin)
if xL > yL:
ymin -= 0.5*(xL-yL)
ymax += 0.5*(xL-yL)
else:
xmin -= 0.5*(yL-xL)
xmax += 0.5*(yL-xL)
return (xmin, xmax, ymin, ymax)
def getSubImage(self, image):
"""Given an image, extract the section of the image corresponding to
the bounding box of the blob group.
"""
ny,nx = image.shape
x0,x1,y0,y1 = self.getBoundingBox()
# Account for all the weird row/column magic in the image table...
i0,i1 = [ny - int(t) for t in (y1,y0)]
j0,j1 = [int(t) for t in (x0,x1)]
# Add a pixel buffer around the bounds, and check the ranges
buf = 1
i0 = 0 if i0-buf < 0 else i0-buf
i1 = ny-1 if i1 > ny-1 else i1+buf
j0 = 0 if j0-buf < 0 else j0-buf
j1 = nx-1 if j1 > nx-1 else j1+buf
return image[i0:i1, j0:j1]
def getRawMoment(self, image, p, q):
r"""Calculate the image moment given by
M_{pq} = \sum_x \sum_y x^p y^q I(x,y),
where I(x,y) is the image intensity at location x,y.
"""
nx,ny = image.shape
Mpq = 0.
if p == 0 and q == 0:
Mpq = np.sum(image)
else:
for i in range(0,nx):
x = 0.5 + i
for j in range(0,ny):
y = 0.5 + j
Mpq += x**p * y**q * image[i,j]
return Mpq
def getCovariance(self, image):
"""Get the raw moments of the image region inside the bounding box
defined by this blob group and calculate the image covariance
matrix.
"""
if self.cov is None:
subImage = self.getSubImage(image).transpose()
M00 = self.getRawMoment(subImage, 0, 0)
M10 = self.getRawMoment(subImage, 1, 0)
M01 = self.getRawMoment(subImage, 0, 1)
M11 = self.getRawMoment(subImage, 1, 1)
M20 = self.getRawMoment(subImage, 2, 0)
M02 = self.getRawMoment(subImage, 0, 2)
xbar = M10/M00
ybar = M01/M00
self.cov = np.vstack([[M20/M00 - xbar*xbar, M11/M00 - xbar*ybar],
[M11/M00 - xbar*ybar, M02/M00 - ybar*ybar]])
return self.cov
def getPrincipalMoments(self, image):
"""Return the maximum and minimum eigenvalues of the covariance matrix,
as well as the angle theta between the maximum eigenvector and the
x-axis.
"""
cov = self.getCovariance(image)
u20 = cov[0,0]
u11 = cov[0,1]
u02 = cov[1,1]
theta = 0.5 * np.arctan2(2*u11, u20-u02)
l1 = 0.5*(u20+u02) + 0.5*np.sqrt(4*u11**2 + (u20-u02)**2)
l2 = 0.5*(u20+u02) - 0.5*np.sqrt(4*u11**2 + (u20-u02)**2)
return l1, l2, theta
def findBlobs(image, threshold, minArea=2.):
"""Pass through an image and find a set of blobs/contours above a set
threshold value. The minArea parameter is used to exclude blobs with an
area below this value.
"""
blobs = []
ny, nx = image.shape
# Find contours using the Marching Squares algorithm in the scikit package.
contours = measure.find_contours(image, threshold)
for contour in contours:
x = contour[:,1]
y = ny - contour[:,0]
blob = Blob(x, y)
if blob.area >= minArea:
blobs.append(blob)
return blobs
def groupBlobs(blobs, maxDist):
"""Given a list of blobs, group them by distance between the centroids of
any two blobs. If the centroids are more distant than maxDist, create a
new blob group.
"""
n = len(blobs)
groups = []
if n >= 1:
# Single-pass clustering algorithm: make the first blob the nucleus of
# a blob group. Then loop through each blob and either add it to
# this group (depending on the distance measure) or make it the
# nucleus of a new blob group
bg = BlobGroup()
bg.addBlob(blobs[0])
groups.append(bg)
for i in range(1, n):
bi = blobs[i]
isGrouped = False
for group in groups:
# Calculate distance measure for a blob and a blob group:
# blob just has to be < maxDist from any other blob in the group
for bj in group.blobs:
if bi.distance(bj) < maxDist:
group.addBlob(bi)
isGrouped = True
break
if not isGrouped:
bg = BlobGroup()
bg.addBlob(bi)
groups.append(bg)
return groups
hdus = fits.open('/global/project/projectdirs/desi/spectro/redux/daily/preproc/20191208/00031136/preproc-z3-00031136.fits')
for hdu in hdus:
print(hdu.header['EXTNAME'])
img = hdus['IMAGE'].data
mask = hdus['MASK'].data
readnoise = hdus['READNOISE'].data
plt.subplots(1,1, figsize=(12,9), tight_layout=True)
plt.imshow(img, cmap='gray', vmin=0, vmax=2000)
plt.colorbar()
plt.subplots(1,1, figsize=(12,9), tight_layout=True)
plt.imshow(mask, cmap='gray', vmin=0, vmax=1)
plt.colorbar()
blobs = findBlobs(mask, threshold=0.5, minArea=2)
groups = groupBlobs(blobs, maxDist=30.)
plt.subplots(1,1, figsize=(12,9), tight_layout=True)
plt.imshow(mask, cmap='gray', vmin=0, vmax=1)
for blob in blobs:
plt.plot(blob.x, blob.y, linewidth=2, color='#00dd00')
plt.colorbar()
plt.subplots(1,1, figsize=(12,9), tight_layout=True)
plt.imshow(mask, cmap='gray', vmin=0, vmax=1.)
for i, group in enumerate(groups):
if len(group.blobs) > 5:
for blob in group.blobs:
plt.plot(blob.x, blob.y, linewidth=2, color='#00dd00')
plt.colorbar()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Blob Class
Step14: BlobGroup Class
Step17: Find and Group Blobs
Step18: Run on Preproc Data
Step19: Find and Group Blobs
Step20: Plot Largest Blob Groups
|
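As a quick self-check of the blob utilities in this row (assuming the Blob, findBlobs, and groupBlobs definitions above are in scope), they can be exercised on a small synthetic mask instead of the preprocessed CCD frame; the toy mask below is invented purely for illustration:

import numpy as np

toy_mask = np.zeros((100, 100))
toy_mask[10:15, 10:15] = 1   # one small defect
toy_mask[60:70, 40:45] = 1   # a second, well-separated defect
toy_blobs = findBlobs(toy_mask, threshold=0.5, minArea=2)
toy_groups = groupBlobs(toy_blobs, maxDist=30.)
print(len(toy_blobs), len(toy_groups))  # expect two blobs in two separate groups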
14,534 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-2', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
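# Example of how a value could be entered (hypothetical, not an actual model record):
# DOC.set_value("Primitive equations")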
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
14,535 | <ASSISTANT_TASK:>
Python Code:
%%bash
if [ ! -d ./FATS ]; then
git clone https://github.com/isadoranun/FATS ./FATS
fi
cd ./FATS;
git pull origin master;
%%bash
cd ./FATS;
git log --name-status HEAD^..HEAD;
%%bash
cd ./FATS;
cat requirements.txt;
%%bash
python --version
%%bash
uname -srvmoio
%%bash
pylint --version
%%bash
pip freeze | grep caniusepython3
%%bash
caniusepython3 --projects FATS
%%bash
sloccount FATS/FATS
%%bash
flake8 --version
%%bash
pylint --py3k ./FATS/FATS/
%%bash
cd ./FATS;
coverage erase
coverage run --source=FATS -m py.test
coverage report
%%bash
flake8 FATS/FATS --count
import sys
import time as tmod
import warnings
import numpy as np
warnings.simplefilter("ignore")
sys.path.insert(0, "./FATS/")
import FATS
#We open the ligth curve in two different bands
lc_B = FATS.ReadLC_MACHO('lc/lc_1.3444.614.B.txt')
lc_R = FATS.ReadLC_MACHO('lc/lc_1.3444.614.R.txt')
#We import the data
[mag, time, error] = lc_B.ReadLC()
[mag2, time2, error2] = lc_R.ReadLC()
#We preprocess the data
preproccesed_data = FATS.Preprocess_LC(mag, time, error)
[mag, time, error] = preproccesed_data.Preprocess()
preproccesed_data = FATS.Preprocess_LC(mag2, time2, error2)
[mag2, time2, error2] = preproccesed_data.Preprocess()
#We synchronize the data
if len(mag) != len(mag2):
[aligned_mag, aligned_mag2, aligned_time, aligned_error, aligned_error2] = \
FATS.Align_LC(time, time2, mag, mag2, error, error2)
lc = np.array([mag, time, error, mag2, aligned_mag, aligned_mag2, aligned_time, aligned_error, aligned_error2])
EXCLUDE = [
'Freq1_harmonics_amplitude_0','Freq1_harmonics_amplitude_1',
'Freq1_harmonics_amplitude_2','Freq1_harmonics_amplitude_3',
'Freq2_harmonics_amplitude_0','Freq2_harmonics_amplitude_1',
'Freq2_harmonics_amplitude_2','Freq2_harmonics_amplitude_3',
'Freq3_harmonics_amplitude_0','Freq3_harmonics_amplitude_1',
'Freq3_harmonics_amplitude_2','Freq3_harmonics_amplitude_3',
'Freq1_harmonics_amplitude_0','Freq1_harmonics_rel_phase_0',
'Freq1_harmonics_rel_phase_1','Freq1_harmonics_rel_phase_2',
'Freq1_harmonics_rel_phase_3','Freq2_harmonics_rel_phase_0',
'Freq2_harmonics_rel_phase_1','Freq2_harmonics_rel_phase_2',
'Freq2_harmonics_rel_phase_3','Freq3_harmonics_rel_phase_0',
'Freq3_harmonics_rel_phase_1','Freq3_harmonics_rel_phase_2',
'Freq3_harmonics_rel_phase_3', "Period_fit", "Psi_eta", "Psi_CS"]
iterations = 1000
times_pls = []
fs = FATS.FeatureSpace(
Data='all', excludeList=EXCLUDE)
for _ in range(iterations):
start = tmod.time()
fs.calculateFeature(lc)
times_pls.append(tmod.time() - start)
times = []
fs = FATS.FeatureSpace(
Data='all', excludeList=EXCLUDE + ["PeriodLS"])
for _ in range(iterations):
start = tmod.time()
fs.calculateFeature(lc)
times.append(tmod.time() - start)
msg = """
Total iterations: {iterations}
With PeriodLS:
- Total: {total_pls}
- Minimun: {min_pls}
- Maximun: {max_pls}
- Mean: {mean_pls}
- Std: {std_pls}
Without PeriodLS:
- Total: {total}
- Minimun: {min}
- Maximun: {max}
- Mean: {mean}
- Std: {std}
""".format(
iterations=iterations,
total_pls=np.sum(times_pls), min_pls=np.min(times_pls),
max_pls=np.max(times_pls), mean_pls=np.mean(times_pls),
std_pls=np.std(times_pls),
total=np.sum(times), min=np.min(times),
max=np.max(times), mean=np.mean(times),
std=np.std(times))
print(msg)
import sys
import time as tmod
import warnings
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import pandas as pd
warnings.simplefilter("ignore")
import FATS
iterations = 1000
lc_size = 1000
random = np.random.RandomState(42)
results = {
"StetsonK": np.empty(iterations),
"StetsonJ": np.empty(iterations),
"AndersonDarling": np.empty(iterations)}
for it in range(iterations):
fs = FATS.FeatureSpace(featureList=list(results.keys()))
    # a simple time array from 0 to 100 with steps of 0.1
    time = np.arange(0, 100, 100./lc_size)
# create 1000 magnitudes with mu 0 and std 1
mags = random.normal(size=lc_size)
# create 1000 magnitudes with difference <= 0.1% than mags
mags2 = mags * random.uniform(0, 0.01, mags.size)
# create two errors for the magnitudes equivalent to the 0.001%
# of the magnitudes
errors = random.normal(scale=0.00001, size=lc_size)
errors2 = random.normal(scale=0.00001, size=lc_size)
lc = np.array([
mags, # magnitude
time, # time
errors, # error
mags, # magnitude2
mags, # aligned_magnitude
mags, # aligned_magnitude2
time, # aligned_time
errors, # aligned_error
errors # aligned_error2
])
fs.calculateFeature(lc)
for k, v in fs.result("dict").items():
results[k][it] = v
df = pd.DataFrame(results).describe()
print(df)
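# Illustrative sanity check (not in the original notebook): StetsonK is a ratio of
# mean absolute to RMS normalized residuals, so for pure Gaussian noise it should
# sit near sqrt(2/pi) ~= 0.798 regardless of the (tiny) errors used above.
print("expected StetsonK for Gaussian noise: %.3f" % np.sqrt(2.0 / np.pi))
print("measured mean StetsonK: %.3f" % df.loc["mean", "StetsonK"])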
import datetime
datetime.datetime.now().isoformat()
%%bash
git commit -am "test ruuned";
git pull origin master;
git push origin master;
git push github master;
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A.2. Requirements
Step2: A.3. Python Version
Step3: A.4. uname -srvmoio
Step4: A.5. Pylint Version
Step5: A.6. caniusepython3 version
Step6: A.7. Sloccount
Step7: A.8. flake8
Step8: B. Fats Status
Step9: B.2. Unit-Testing and Coverage
Step10: B.3. Code Style (with flake8)
Step12: C. Performance of the FATS Lomb-Scargle Method
Step13: D. Features expected values
|
14,536 | <ASSISTANT_TASK:>
Python Code:
import sys
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
try:
import matplotlib
if matplotlib.__version__ < "1.4.3":
!pip install --upgrade matplotlib
except:
!pip install --user matplotlib
from docplex.cp.model import CpoModel
from sys import stdout
from collections import namedtuple
NB_HOUSES = 5
MAX_AMOUNT_OF_PERIODS = 318
HOUSES = range(1, NB_HOUSES + 1)
period_domain = (0, MAX_AMOUNT_OF_PERIODS)
Task = (namedtuple("Task", ["name", "duration"]))
TASKS = {Task("masonry", 35),
Task("carpentry", 15),
Task("plumbing", 40),
Task("ceiling", 15),
Task("roofing", 5),
Task("painting", 10),
Task("windows", 5),
Task("facade", 10),
Task("garden", 5),
Task("moving", 5),
}
TaskPrecedence = (namedtuple("TaskPrecedence", ["beforeTask", "afterTask"]))
TASK_PRECEDENCES = {TaskPrecedence("masonry", "carpentry"),
TaskPrecedence("masonry", "plumbing"),
TaskPrecedence("masonry", "ceiling"),
TaskPrecedence("carpentry", "roofing"),
TaskPrecedence("ceiling", "painting"),
TaskPrecedence("roofing", "windows"),
TaskPrecedence("roofing", "facade"),
TaskPrecedence("plumbing", "facade"),
TaskPrecedence("roofing", "garden"),
TaskPrecedence("plumbing", "garden"),
TaskPrecedence("windows", "moving"),
TaskPrecedence("facade", "moving"),
TaskPrecedence("garden", "moving"),
TaskPrecedence("painting", "moving"),
}
WORKERS = {"Joe", "Jack", "Jim"}
Skill = (namedtuple("Skill", ["worker", "task", "level"]))
SKILLS = {Skill("Joe", "masonry", 9),
Skill("Joe", "carpentry", 7),
Skill("Joe", "ceiling", 5),
Skill("Joe", "roofing", 6),
Skill("Joe", "windows", 8),
Skill("Joe", "facade", 5),
Skill("Joe", "garden", 5),
Skill("Joe", "moving", 6),
Skill("Jack", "masonry", 5),
Skill("Jack", "plumbing", 7),
Skill("Jack", "ceiling", 8),
Skill("Jack", "roofing", 7),
Skill("Jack", "painting", 9),
Skill("Jack", "facade", 5),
Skill("Jack", "garden", 5),
Skill("Jim", "carpentry", 5),
Skill("Jim", "painting", 6),
Skill("Jim", "windows", 5),
Skill("Jim", "garden", 9),
Skill("Jim", "moving", 8)
}
def find_tasks(name):
return next(t for t in TASKS if t.name == name)
def find_skills(worker, task):
return next(s for s in SKILLS if (s.worker == worker) and (s.task == task))
def find_max_level_skill(task):
st = [s for s in SKILLS if s.task == task]
return next(sk for sk in st if sk.level == max([s.level for s in st]))
mdl = CpoModel(name="HouseBuilding")
tasks = {} # dict of interval variable for each house and task
for house in HOUSES:
for task in TASKS:
tasks[(house, task)] = mdl.interval_var(start=period_domain,
end=period_domain,
size=task.duration,
name="house {} task {}".format(house, task))
wtasks = {} # dict of interval variable for each house and skill
for house in HOUSES:
for skill in SKILLS:
iv = mdl.interval_var(name='H' + str(house) + '-' + skill.task + '(' + skill.worker + ')')
iv.set_optional()
wtasks[(house, skill)] = iv
for h in HOUSES:
for p in TASK_PRECEDENCES:
mdl.add(mdl.end_before_start(tasks[(h, find_tasks(p.beforeTask))], tasks[(h, find_tasks(p.afterTask))]))
for h in HOUSES:
for t in TASKS:
mdl.add(mdl.alternative(tasks[(h, t)], [wtasks[(h, s)] for s in SKILLS if (s.task == t.name)], 1))
for w in WORKERS:
mdl.add(mdl.no_overlap([wtasks[(h, s)] for h in HOUSES for s in SKILLS if s.worker == w]))
obj = mdl.sum([s.level * mdl.presence_of(wtasks[(h, s)]) for s in SKILLS for h in HOUSES])
mdl.add(mdl.maximize(obj))
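# Illustrative variation (not part of the original example): the same modeling API
# could instead minimize the overall makespan. `end_of` and `max` are assumed to be
# available as docplex.cp modeler functions, as in the library's scheduling samples.
makespan = mdl.max([mdl.end_of(tasks[(h, t)]) for h in HOUSES for t in TASKS])
# mdl.add(mdl.minimize(makespan))  # swap in for the skill-maximization objective above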
# Solve the model
print("\nSolving model....")
msol = mdl.solve(TimeLimit=10)
print("Solve status: " + msol.get_solve_status())
if msol.is_solution():
stdout.write("Solve time: " + str(msol.get_solve_time()) + "\n")
# Sort tasks in increasing begin order
ltasks = []
for hs in HOUSES:
for tsk in TASKS:
(beg, end, dur) = msol[tasks[(hs, tsk)]]
ltasks.append((hs, tsk, beg, end, dur))
ltasks = sorted(ltasks, key = lambda x : x[2])
# Print solution
print("\nList of tasks in increasing start order:")
for tsk in ltasks:
print("From " + str(tsk[2]) + " to " + str(tsk[3]) + ", " + tsk[1].name + " in house " + str(tsk[0]))
else:
stdout.write("No solution found\n")
POP_UP_GRAPHIC=False
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
if not POP_UP_GRAPHIC:
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
def compact_name(name,n): return name[:n]
if msol and visu.is_visu_enabled():
workers_colors = {}
workers_colors["Joe"] = 'lightblue'
workers_colors["Jack"] = 'violet'
workers_colors["Jim"] = 'lightgreen'
visu.timeline('Solution per houses', 0, MAX_AMOUNT_OF_PERIODS)
for h in HOUSES:
visu.sequence(name="house " + str(h))
for s in SKILLS:
wt = msol.get_var_solution(wtasks[(h,s)])
if wt.is_present():
color = workers_colors[s.worker]
wtname = compact_name(s.task,2)
visu.interval(wt, color, wtname)
visu.show()
def compact_house_task(name):
loc, task = name[1:].split('-', 1)
return task[0].upper() + loc
if msol and visu.is_visu_enabled():
visu.timeline('Solution per workers', 0, MAX_AMOUNT_OF_PERIODS)
for w in WORKERS:
visu.sequence(name=w)
for h in HOUSES:
for s in SKILLS:
if s.worker == w:
wt = msol.get_var_solution(wtasks[(h,s)])
if wt.is_present():
ml = find_max_level_skill(s.task).level
if s.level == ml:
color = 'lightgreen'
else:
color = 'salmon'
wtname = compact_house_task(wt.get_name())
visu.interval(wt, color, wtname)
visu.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization.
Step 2
Step2: Now, we need to import all required modeling functions that are provided by the <i>docplex.cp</i> package
Step3: Step 3
Step4: All tasks must start and end between 0 and the max amount of periods
Step5: For each task type in the house building project, the following table shows the duration of the task in days along with the tasks that must be finished before the task can start. A worker can only work on one task at a time; each task, once started, may not be interrupted.
Step6: The tasks precedences
Step7: There are three workers with varying skill levels in regard to the ten tasks. If a worker has a skill level of zero for a task, he may not be assigned to the task.
Step8: Workers Name and level for each of there skill
Step9: Utility functions
Step10: find_skills
Step11: find_max_level_skill
Step12: Step 4
Step13: Define the decision variables
Step14: <h5><i><font color=blue>Concept
Step15: Express the business constraints
Step16: <h5>Alternative workers</h5>
Step17: <h5>No overlap constraint</h5>
Step18: Express the objective
Step19: Solve the model
Step20: Step 5
Step21: Import graphical tools
Step22: Draw solution
Step23: The purpose of this function is to compact the names of the different tasks with the aim of making the graphical display readable. </p>
Step24: Green-like color when task is using the most skilled worker
|
14,537 | <ASSISTANT_TASK:>
Python Code:
from collections import OrderedDict # For recording the model specification
import pandas as pd # For file input/output
import numpy as np # For vectorized math operations
import statsmodels.tools.numdiff as numdiff # For numeric hessian
import scipy.linalg # For matrix inversion
import pylogit as pl # For choice model estimation
from pylogit import nested_logit as nl # For nested logit convenience funcs
# Load the raw swiss metro data
# Note the .dat files are tab delimited text files
swissmetro_wide = pd.read_table("../data/swissmetro.dat", sep='\t')
# Select obervations whose choice is known (i.e. CHOICE != 0)
# **AND** whose PURPOSE is either 1 or 3
include_criteria = (swissmetro_wide.PURPOSE.isin([1, 3]) &
(swissmetro_wide.CHOICE != 0))
# Use ".copy()" so that later on, we avoid performing operations
# on a view of a dataframe as opposed to on an actual dataframe
clean_sm_wide = swissmetro_wide.loc[include_criteria].copy()
# Look at how many observations we have after removing unwanted
# observations
final_num_obs = clean_sm_wide.shape[0]
num_obs_statement = "The cleaned number of observations is {:,.0f}."
print (num_obs_statement.format(final_num_obs))
# Create a custom id column that ignores the fact that this is a
# panel/repeated-observations dataset, and start the "custom_id" from 1
clean_sm_wide["custom_id"] = np.arange(clean_sm_wide.shape[0], dtype=int) + 1
# Look at the columns of the swissmetro data
clean_sm_wide.columns
# Create the list of individual specific variables
ind_variables = clean_sm_wide.columns.tolist()[:15]
# Specify the variables that vary across individuals **AND**
# across some or all alternatives
alt_varying_variables = {u'travel_time': dict([(1, 'TRAIN_TT'),
(2, 'SM_TT'),
(3, 'CAR_TT')]),
u'travel_cost': dict([(1, 'TRAIN_CO'),
(2, 'SM_CO'),
(3, 'CAR_CO')]),
u'headway': dict([(1, 'TRAIN_HE'),
(2, 'SM_HE')]),
u'seat_configuration': dict([(2, "SM_SEATS")])}
# Specify the availability variables
availability_variables = dict(zip(range(1, 4), ['TRAIN_AV', 'SM_AV', 'CAR_AV']))
# Determine the columns that will denote the
# new column of alternative ids, and the columns
# that denote the custom observation ids and the
# choice column
new_alt_id = "mode_id"
obs_id_column = "custom_id"
choice_column = "CHOICE"
# Perform the desired conversion
long_swiss_metro = pl.convert_wide_to_long(clean_sm_wide,
ind_variables,
alt_varying_variables,
availability_variables,
obs_id_column,
choice_column,
new_alt_id_name=new_alt_id)
# Look at the first 9 rows of the long-format dataframe
long_swiss_metro.head(9).T
# Scale both the travel time and travel cost by 100
long_swiss_metro["travel_time_hundredth"] = (long_swiss_metro["travel_time"] /
100.0)
# Figure out which rows correspond to train or swiss metro
# alternatives for individuals with GA passes. These individuals face no
# marginal costs for a trip
train_pass_train_alt = ((long_swiss_metro["GA"] == 1) *
(long_swiss_metro["mode_id"].isin([1, 2]))).astype(int)
# Note that the (train_pass_train_alt == 0) term accounts for the
# fact that those with a GA pass have no marginal cost for the trip
long_swiss_metro["travel_cost_hundredth"] = (long_swiss_metro["travel_cost"] *
(train_pass_train_alt == 0) /
100.0)
# Specify the nesting values
nest_membership = OrderedDict()
nest_membership["Future Modes"] = [2]
nest_membership["Existing Modes"] = [1, 3]
# Create the model's specification dictionary and variable names dictionary
# NOTE: - Keys should be variables within the long format dataframe.
# The sole exception to this is the "intercept" key.
# - For the specification dictionary, the values should be lists
# or lists of lists. Within a list, or within the inner-most
# list should be the alternative ID's of the alternative whose
# utility specification the explanatory variable is entering.
example_specification = OrderedDict()
example_names = OrderedDict()
# Note that 1 is the id for the Train and 3 is the id for the Car.
# The next two lines are placing alternative specific constants in
# the utility equations for the Train and for the Car. The order
# in which these variables are placed is chosen so the summary
# dataframe which is returned will match that shown in the HTML
# file of the python biogeme example.
example_specification["intercept"] = [3, 1]
example_names["intercept"] = ['ASC Car', 'ASC Train']
# Note that the names used below are simply for consistency with
# the coefficient names given in the Python Biogeme example.
# example_specification["travel_cost_hundredth"] = [[1, 2, 3]]
# example_names["travel_cost_hundredth"] = ['B_COST']
example_specification["travel_cost_hundredth"] = [[1, 2, 3]]
example_names["travel_cost_hundredth"] = ['B_COST']
example_specification["travel_time_hundredth"] = [[1, 2, 3]]
example_names["travel_time_hundredth"] = ['B_TIME']
# Define a function that calculates the "logit" transformation of values
# between 0.0 and 1.0.
def logit(x):
    """
    Parameters
    ----------
    x : int, float, or 1D ndarray.
        If an array, all elements should be ints or floats. All
        elements should be between zero and one, exclusive of 1.0.

    Returns
    -------
    The logit of x: `np.log(x / (1.0 - x))`.
    """
return np.log(x/(1.0 - x))
# Provide the module with the needed input arguments to create
# an instance of the MNL model class
example_nested = pl.create_choice_model(data=long_swiss_metro,
alt_id_col=new_alt_id,
obs_id_col=obs_id_column,
choice_col=choice_column,
specification=example_specification,
model_type="Nested Logit",
names=example_names,
nest_spec=nest_membership)
# Specify the initial nesting parameter values
# Note: This should be in terms of the reparameterized values used
# by PyLogit.
# Note: The '40' corresponds to scale parameter that is numerically
# indistinguishable from 1.0
# Note: 2.05 is the scale parameter that is estimated by PythonBiogeme
# so we invert it, then take the logit of this inverse to get the
# corresponding starting value to be used by PyLogit.
# Note the first value corresponds to the first nest in 'nest_spec'
# and the second value corresponds to the second nest in 'nest_spec'.
init_nests = np.array([40, logit(2.05**-1)])
# Specify the initial index coefficients used by PythonBiogeme
init_coefs = np.array([-0.167, -0.512, -0.899, -0.857])
# Create a single array of the initial values
init_values = np.concatenate((init_nests, init_coefs), axis=0)
# Start the model estimation from the pythonbiogeme initial values
# Note that the first value, in the initial values, is constrained
# to remain constant through the estimation process. This is because
# the first nest in nest_spec is a 'degenerate' nest with only one
# alternative, and the nest parameter of degenerate nests is not
# identified.
example_nested.fit_mle(init_values,
constrained_pos=[0])
# Look at the estimated coefficients and goodness-of-fit statistics
example_nested.get_statsmodels_summary()
# Note that the Mu (i.e the scale parameter) estimated by python biogeme is
# 1.0 / nest_coefficient where
# nest_coefficient = 1.0 / (1.0 + exp[-1 * estimated_nest_param])
pylogit_mu = 1.0 + np.exp(-1 * example_nested.params["Existing Modes Nest Param"])
print "PyLogit's estimated Mu is: {:,.4f}".format(pylogit_mu)
# Create objects for all of the necessary arguments that are
# needed to compute the log-likelihood of the nested logit model
# given the data used in this example
nested_design = example_nested.design
mapping_res = example_nested.get_mappings_for_fit()
choice_array = long_swiss_metro["CHOICE"].values
# Create a 'convenience' function that simply returns the log-likelihood
# given a vector of coefficients
def convenient_log_likelihood(all_coefs):
log_likelihood = nl.convenient_nested_log_likelihood(all_coefs,
nested_design,
mapping_res["rows_to_obs"],
mapping_res["rows_to_nests"],
choice_array)
return log_likelihood
# Calculate the numeric hessian
numeric_hess = numdiff.approx_hess(example_nested.params.values,
convenient_log_likelihood)
# Account for the fact that the first param is constrained
numeric_hess[0, :] = 0
numeric_hess[:, 0] = 0
numeric_hess[0, 0] = -1
# Calculate the asymptotic covariance with the numeric hessian
numeric_cov = -1 * scipy.linalg.inv(numeric_hess)
# Get the numeric standard errors
numeric_std_errs = pd.Series(np.sqrt(np.diag(numeric_cov)),
index=example_nested.params.index)
# Make sure the Future Modes Nest param has a standard error of np.nan
numeric_std_errs.loc["Future Modes Nest Param"] = np.nan
# Order the numeric standard errors according to the Python Biogeme
# output
numeric_std_errs = pd.concat([numeric_std_errs[example_nested.params.index[2:]],
numeric_std_errs[example_nested.params.index[:2]]],
axis=0)
# Display the numeric standard errors
numeric_std_errs
# Approximate the gradient using numeric differentiation
numeric_grad = numdiff.approx_fprime(example_nested.params.values,
convenient_log_likelihood)
pd.DataFrame([numeric_grad,
example_nested.gradient.values],
index=["Numeric Differentiation", "Analytic"],
columns=example_nested.params.index).T
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Load the Swissmetro Dataset
Step2: 2. Clean the dataset
Step3: 3. Create an id column that ignores the repeat observations per individual
Step4: 4. Convert the data from 'wide' format to 'long' format
Step5: 4b. Actually perform the conversion from wide to long formats
Step6: 5. Create the variables used in the Python Biogeme Nested Logit Model Example
Step7: 6. Specify and Estimate the Python Biogeme Nested Logit Model Example
Step9: 6b. Estimate the model
Step10: Also, note that the functionality of using parameter constraints is restricted to the Mixed Logit and Nested Logit models at the moment. Moreover, this functionality is only relevant when using optimization methods that make use of gradient information. Gradient-free estimation methods such as 'powell's' method or 'nelder-mead' will not make use of the constrained_pos keyword argument.
Step11: Compare with PythonBiogeme
Step12: Summary
Step13: Python Biogeme Output
|
14,538 | <ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
# silence all Anaconda warnings
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
%matplotlib inline
import seaborn as sns
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = (6,4)
xx = np.linspace(0,1,50)
plt.plot(xx, [2 * x * (1-x) for x in xx], label='gini')
plt.plot(xx, [4 * x * (1-x) for x in xx], label='2*gini')
plt.plot(xx, [-x * np.log2(x) - (1-x) * np.log2(1 - x) for x in xx], label='entropy')
plt.plot(xx, [1 - max(x, 1-x) for x in xx], label='missclass')
plt.plot(xx, [2 - 2 * max(x, 1-x) for x in xx], label='2*missclass')
plt.xlabel('p+')
plt.ylabel('criterion')
plt.title('Splitting criteria as functions of p+ (binary classification)')
plt.legend();
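# Illustrative extra (not from the original notebook): entropy, Gini impurity and
# the information gain of a hypothetical split, computed by hand.
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
def gini(p):
    return 1.0 - np.sum(np.asarray(p, dtype=float) ** 2)
parent = [9. / 20, 11. / 20]                        # 9 "+" and 11 "-" objects
left, right = [8. / 13, 5. / 13], [1. / 7, 6. / 7]  # a made-up 13 / 7 split
info_gain = entropy(parent) - (13. / 20) * entropy(left) - (7. / 20) * entropy(right)
print('entropy = %.3f, gini = %.3f, information gain = %.3f'
      % (entropy(parent), gini(parent), info_gain))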
# the first class
np.random.seed(7)
train_data = np.random.normal(size=(100, 2))
train_labels = np.zeros(100)
# add the second class
train_data = np.r_[train_data, np.random.normal(size=(100, 2), loc=2)]
train_labels = np.r_[train_labels, np.ones(100)]
def get_grid(data, eps=0.01):
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1
y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1
return np.meshgrid(np.arange(x_min, x_max, eps),
np.arange(y_min, y_max, eps))
plt.rcParams['figure.figsize'] = (10,8)
plt.scatter(train_data[:, 0], train_data[:, 1], c=train_labels, s=100,
cmap='autumn', edgecolors='black', linewidth=1.5)
plt.plot(range(-2,5), range(4,-3,-1));
from sklearn.tree import DecisionTreeClassifier
# the min_samples_leaf parameter sets the minimum number of samples
# a node must contain for it to be split further
clf_tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=17)
# train the tree
clf_tree.fit(train_data, train_labels)
# a bit of code to display the separating surface
xx, yy = get_grid(train_data)
predicted = clf_tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, predicted, cmap='autumn')
plt.scatter(train_data[:, 0], train_data[:, 1], c=train_labels, s=100,
cmap='autumn', edgecolors='black', linewidth=1.5);
# use the .dot format to visualize the tree
from sklearn.tree import export_graphviz
export_graphviz(clf_tree, feature_names=['x1', 'x2'],
out_file='../img/small_tree.dot', filled=True)
!dot -Tpng ../img/small_tree.dot -o ../img/small_tree.png
!rm ../img/small_tree.dot
data = pd.DataFrame({'Возраст': [17,64,18,20,38,49,55,25,29,31,33],
'Невозврат кредита': [1,0,1,0,1,0,0,1,1,0,1]})
data
data.sort_values('Возраст')
age_tree = DecisionTreeClassifier(random_state=17)
age_tree.fit(data['Возраст'].values.reshape(-1, 1), data['Невозврат кредита'].values)
export_graphviz(age_tree, feature_names=['Возраст'],
out_file='../img/age_tree.dot', filled=True)
!dot -Tpng ../img/age_tree.dot -o ../img/age_tree.png
data2 = pd.DataFrame({'Возраст': [17,64,18,20,38,49,55,25,29,31,33],
'Зарплата': [25,80,22,36,37,59,74,70,33,102,88],
'Невозврат кредита': [1,0,1,0,1,0,0,1,1,0,1]})
data2
data2.sort_values('Возраст')
data2.sort_values('Зарплата')
age_sal_tree = DecisionTreeClassifier(random_state=17)
age_sal_tree.fit(data2[['Возраст', 'Зарплата']].values, data2['Невозврат кредита'].values);
export_graphviz(age_sal_tree, feature_names=['Возраст', 'Зарплата'],
out_file='../img/age_sal_tree.dot', filled=True)
!dot -Tpng ../img/age_sal_tree.dot -o ../img/age_sal_tree.png
n_train = 150
n_test = 1000
noise = 0.1
def f(x):
x = x.ravel()
return np.exp(-x ** 2) + 1.5 * np.exp(-(x - 2) ** 2)
def generate(n_samples, noise):
X = np.random.rand(n_samples) * 10 - 5
X = np.sort(X).ravel()
y = np.exp(-X ** 2) + 1.5 * np.exp(-(X - 2) ** 2) + \
np.random.normal(0.0, noise, n_samples)
X = X.reshape((n_samples, 1))
return X, y
X_train, y_train = generate(n_samples=n_train, noise=noise)
X_test, y_test = generate(n_samples=n_test, noise=noise)
from sklearn.tree import DecisionTreeRegressor
reg_tree = DecisionTreeRegressor(max_depth=5, random_state=17)
reg_tree.fit(X_train, y_train)
reg_tree_pred = reg_tree.predict(X_test)
plt.figure(figsize=(10, 6))
plt.plot(X_test, f(X_test), "b")
plt.scatter(X_train, y_train, c="b", s=20)
plt.plot(X_test, reg_tree_pred, "g", lw=2)
plt.xlim([-5, 5])
plt.title("Decision tree regressor, MSE = %.2f" % np.sum((y_test - reg_tree_pred) ** 2))
plt.show()
df = pd.read_csv('../data/telecom_churn.csv')
df['International plan'] = pd.factorize(df['International plan'])[0]
df['Voice mail plan'] = pd.factorize(df['Voice mail plan'])[0]
df['Churn'] = df['Churn'].astype('int')
states = df['State']
y = df['Churn']
df.drop(['State', 'Churn'], axis=1, inplace=True)
df.head()
from sklearn.model_selection import train_test_split, StratifiedKFold
X_train, X_holdout, y_train, y_holdout = train_test_split(df.values, y, test_size=0.3,
random_state=17)
from sklearn.neighbors import KNeighborsClassifier
tree = DecisionTreeClassifier(max_depth=5, random_state=17)
knn = KNeighborsClassifier(n_neighbors=10)
%%time
tree.fit(X_train, y_train)
%%time
knn.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
tree_pred = tree.predict(X_holdout)
accuracy_score(y_holdout, tree_pred)
knn_pred = knn.predict(X_holdout)
accuracy_score(y_holdout, knn_pred)
from sklearn.model_selection import GridSearchCV, cross_val_score
tree_params = {'max_depth': range(1,11),
'max_features': range(4,19)}
tree_grid = GridSearchCV(tree, tree_params,
cv=5, n_jobs=-1,
verbose=True)
tree_grid.fit(X_train, y_train)
tree_grid.best_params_
tree_grid.best_score_
accuracy_score(y_holdout, tree_grid.predict(X_holdout))
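# Illustrative extra: GridSearchCV keeps per-combination scores in cv_results_
# (sklearn >= 0.18 is assumed); a quick look at the best few combinations.
cv_results = pd.DataFrame(tree_grid.cv_results_)
print(cv_results[['param_max_depth', 'param_max_features', 'mean_test_score']]
      .sort_values('mean_test_score', ascending=False).head())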
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
knn_pipe = Pipeline([('scaler', StandardScaler()), ('knn', KNeighborsClassifier(n_jobs=-1))])
knn_params = {'knn__n_neighbors': range(1, 10)}
knn_grid = GridSearchCV(knn_pipe, knn_params,
cv=5, n_jobs=-1,
verbose=True)
knn_grid.fit(X_train, y_train)
knn_grid.best_params_, knn_grid.best_score_
accuracy_score(y_holdout, knn_grid.predict(X_holdout))
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=17)
print(np.mean(cross_val_score(forest, X_train, y_train, cv=5)))
forest_params = {'max_depth': range(1,11),
'max_features': range(4,19)}
forest_grid = GridSearchCV(forest, forest_params,
cv=5, n_jobs=-1,
verbose=True)
forest_grid.fit(X_train, y_train)
forest_grid.best_params_, forest_grid.best_score_
accuracy_score(y_holdout, forest_grid.predict(X_holdout))
export_graphviz(tree_grid.best_estimator_, feature_names=df.columns,
out_file='../img/churn_tree.dot', filled=True)
!dot -Tpng ../img/churn_tree.dot -o ../img/churn_tree.png
from sklearn.datasets import load_digits
data = load_digits()
X, y = data.data, data.target
X[0,:].reshape([8,8])
f, axes = plt.subplots(1, 4, sharey=True, figsize=(16,6))
for i in range(4):
axes[i].imshow(X[i,:].reshape([8,8]));
np.bincount(y)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.3,
random_state=17)
tree = DecisionTreeClassifier(max_depth=5, random_state=17)
knn = KNeighborsClassifier(n_neighbors=10)
%%time
tree.fit(X_train, y_train)
%%time
knn.fit(X_train, y_train)
tree_pred = tree.predict(X_holdout)
knn_pred = knn.predict(X_holdout)
accuracy_score(y_holdout, knn_pred), accuracy_score(y_holdout, tree_pred)
tree_params = {'max_depth': [1, 2, 3, 5, 10, 20, 25, 30, 40, 50, 64],
'max_features': [1, 2, 3, 5, 10, 20 ,30, 50, 64]}
tree_grid = GridSearchCV(tree, tree_params,
cv=5, n_jobs=-1,
verbose=True)
tree_grid.fit(X_train, y_train)
tree_grid.best_params_, tree_grid.best_score_
accuracy_score(y_holdout, tree_grid.predict(X_holdout))
np.mean(cross_val_score(KNeighborsClassifier(n_neighbors=1), X_train, y_train, cv=5))
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
accuracy_score(y_holdout, knn.predict(X_holdout))
np.mean(cross_val_score(RandomForestClassifier(random_state=17), X_train, y_train, cv=5))
rf = RandomForestClassifier(random_state=17, n_jobs=-1).fit(X_train, y_train)
accuracy_score(y_holdout, rf.predict(X_holdout))
def form_linearly_separable_data(n=500, x1_min=0, x1_max=30, x2_min=0, x2_max=30):
data, target = [], []
for i in range(n):
x1, x2 = np.random.randint(x1_min, x1_max), np.random.randint(x2_min, x2_max)
if np.abs(x1 - x2) > 0.5:
data.append([x1, x2])
target.append(np.sign(x1 - x2))
return np.array(data), np.array(target)
X, y = form_linearly_separable_data()
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='autumn', edgecolors='black');
tree = DecisionTreeClassifier(random_state=17).fit(X, y)
xx, yy = get_grid(X, eps=.05)
predicted = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, predicted, cmap='autumn')
plt.scatter(X[:, 0], X[:, 1], c=y, s=100,
cmap='autumn', edgecolors='black', linewidth=1.5)
plt.title('Easy task. Decision tree complexifies everything');
export_graphviz(tree, feature_names=['x1', 'x2'],
out_file='../img/deep_toy_tree.dot', filled=True)
!dot -Tpng ../img/deep_toy_tree.dot -o ../img/deep_toy_tree.png
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
xx, yy = get_grid(X, eps=.05)
predicted = knn.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, predicted, cmap='autumn')
plt.scatter(X[:, 0], X[:, 1], c=y, s=100,
cmap='autumn', edgecolors='black', linewidth=1.5);
plt.title('Easy task, kNN. Not bad');
def form_noisy_data(n_obj=1000, n_feat=100, random_seed=17):
    np.random.seed(random_seed)
    y = np.random.choice([-1, 1], size=n_obj)
    # the first feature is proportional to the target
    x1 = 0.3 * y
    # the remaining features are noise
x_other = np.random.random(size=[n_obj, n_feat - 1])
return np.hstack([x1.reshape([n_obj, 1]), x_other]), y
X, y = form_noisy_data()
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.3,
random_state=17)
from sklearn.model_selection import cross_val_score
cv_scores, holdout_scores = [], []
n_neighb = [1, 2, 3, 5] + list(range(50, 550, 50))
for k in n_neighb:
knn = KNeighborsClassifier(n_neighbors=k)
cv_scores.append(np.mean(cross_val_score(knn, X_train, y_train, cv=5)))
knn.fit(X_train, y_train)
holdout_scores.append(accuracy_score(y_holdout, knn.predict(X_holdout)))
plt.plot(n_neighb, cv_scores, label='CV')
plt.plot(n_neighb, holdout_scores, label='holdout')
plt.title('Easy task. kNN fails')
plt.legend();
tree = DecisionTreeClassifier(random_state=17, max_depth=1)
tree_cv_score = np.mean(cross_val_score(tree, X_train, y_train, cv=5))
tree.fit(X_train, y_train)
tree_holdout_score = accuracy_score(y_holdout, tree.predict(X_holdout))
print('Decision tree. CV: {}, holdout: {}'.format(tree_cv_score, tree_holdout_score))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: An example
Step2: Let's write a helper function that will return a grid for nicer visualizations later on.
Step3: Let's plot the data. Informally, the classification task here is to build a "good" boundary that separates the two classes (the red points from the yellow ones). Exaggerating a bit, machine learning in this case comes down to choosing a good separating boundary. A straight line may be too simple a boundary, while some intricate curve winding around every single red point would be too complex and would lead to many mistakes on new examples drawn from the same distribution as the training set. Intuition suggests that a smooth boundary separating the two classes, or at least a plain straight line (a hyperplane in the $n$-dimensional case), will work well on new data.
Step4: Let's try to separate these two classes by training a decision tree. We will use the max_depth parameter, which limits the depth of the tree, and visualize the resulting class-separation boundary.
Step5: And what does the trained tree itself look like? We see that the tree "slices" the space into 7 rectangles (the tree has 7 leaves). In each such rectangle the tree's prediction is constant, determined by the prevailing class among the objects that fall into it.
Step6: <img src='../img/small_tree.png'>
Step7: Let's sort it by age in ascending order.
Step8: Let's train a decision tree on this data (with no depth limit) and take a look at it.
Step9: We see that the tree used 5 threshold values against which age is compared
Step10: <img src='../img/age_tree.png'>
Step11: If we sort by age, the target class ("Невозврат кредита", loan default) switches (from 1 to 0 or back) 5 times, while sorting by salary gives 7 switches. How will the tree choose features now? Let's see.
Step12: <img src='../img/age_sal_tree.png'>
Step13: We see that the decision tree approximates the dependency in the data with a piecewise-constant function.
Step14: Let's set aside 70% of the sample (X_train, y_train) for training, keeping 30% as a hold-out set (X_holdout, y_holdout). The hold-out set takes no part in tuning the models' parameters; at the very end, after that tuning, we will use it to assess the quality of the resulting model.
Step15: Let's train 2 models – a decision tree and kNN; we don't yet know which parameters are good, so we pick them at random
Step16: We will assess prediction quality with a simple metric – accuracy, the share of correct answers
Step17: Let's make predictions for the hold-out set. We see that the nearest-neighbors method did much better. But so far we picked the parameters at random.
Step18: Now let's tune the tree's parameters with cross-validation. We will tune the maximum depth and the maximum number of features used at each split. The gist of how GridSearchCV works
Step19: The best parameter combination and the corresponding mean accuracy on cross-validation
Step20: Now let's try to tune the number of neighbors in the kNN algorithm.
Step21: We see that in this example the tree performed better than the nearest-neighbors method. Moreover, the tree does very well on this problem, and even a random forest (which for now we picture simply as a bunch of trees that somehow work much better together than a single tree) achieves only slightly higher accuracy here (both on cross-validation and on the hold-out set) while taking much longer to train.
Step22: Let's draw the resulting tree. Since it is not quite a toy one (its maximum depth is 6), the picture is no longer small, but you can "walk" through the tree if you open the image separately.
Step23: <img src='../img/churn_tree.png'>
Step24: Loading the data.
Step25: Here each picture is represented by an 8 x 8 matrix (the white-color intensity of each pixel). This matrix is then "unrolled" into a vector of length 64, which gives the feature description of an object.
Step26: Let's draw a few handwritten digits; we see that they are recognizable.
Step27: Let's look at the class balance in the sample; we see roughly equal numbers of zeros, ones, ..., nines.
Step28: Let's set aside 70% of the sample (X_train, y_train) for training, keeping 30% as a hold-out set (X_holdout, y_holdout). The hold-out set takes no part in tuning the models' parameters; at the very end, after that tuning, we will use it to assess the quality of the resulting model.
Step29: Let's train a decision tree and kNN, again picking the parameters at random for now.
Step30: Let's make predictions for the hold-out set. We see that the nearest-neighbors method did much better. But so far we picked the parameters at random.
Step31: Now, just as before, let's tune the models' parameters with cross-validation, keeping in mind that there are more features now than in the previous task – 64 of them.
Step32: The best parameter combination and the corresponding mean accuracy on cross-validation
Step33: This is no longer 66%, but it is not 97% either. The nearest-neighbors method works better on this dataset. With a single nearest neighbor, almost 99% of guesses are correct on cross-validation.
Step34: Let's train a random forest on the same data; on most datasets it works better than the nearest-neighbors method. But here we have an exception.
Step35: You would be right to object that we did not tune the RandomForestClassifier parameters here, but even with tuning the accuracy does not reach 98%, as it does for the one-nearest-neighbor method.
Step36: However, the decision tree builds an overly complicated boundary and itself turns out to be deep. Besides, imagine how poorly the tree will generalize to the space outside the $30 \times 30$ square that frames the training set.
Step37: Quite an elaborate construction, even though the solution (a good separating surface) is simply the straight line $x_1 = x_2$.
Step38: <img src='../img/deep_toy_tree.png'>
Step39: A hard case for the nearest-neighbors method
Step40: As usual, we will look at accuracy on cross-validation and on the hold-out set. Let's plot curves showing how these quantities depend on the n_neighbors parameter of the nearest-neighbors method. Such curves are called validation curves.
Step41: We see that the nearest-neighbors method with the Euclidean metric fails at the task even when the number of neighbors is varied over a wide range. By contrast, the decision tree easily "discovers" the hidden dependency in the data under any limit on the maximum depth.
|
14,539 | <ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape = [None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, shape = [None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
    """
    Initializes weight parameters to build a neural network with tensorflow. The shapes are:
    W1 : [4, 4, 3, 8]
    W2 : [2, 2, 8, 16]
    Returns:
    parameters -- a dictionary of tensors containing W1, W2
    """
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable("W1", [4,4,3,8], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable("W2", [2,2,8,16], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "W2"
                  the shapes are given in initialize_parameters
    Returns:
    Z3 -- the output of the last LINEAR unit
    """
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides = [1,1,1,1], padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, sride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize = [1,8,8,1], strides = [1,8,8,1], padding = 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1,W2, strides = [1,1,1,1], padding = 'SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize = [1,4,4,1], strides = [1,4,4,1], padding = 'SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (not not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
    """
    Computes the cost
    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3
    Returns:
    cost - Tensor of the cost function
    """
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
    """
    Implements a three-layer ConvNet in Tensorflow:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
    Arguments:
    X_train -- training set, of shape (None, 64, 64, 3)
    Y_train -- training set labels, of shape (None, n_y = 6)
    X_test -- test set, of shape (None, 64, 64, 3)
    Y_test -- test set labels, of shape (None, n_y = 6)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs
    Returns:
    train_accuracy -- real number, accuracy on the train set (X_train)
    test_accuracy -- real number, testing accuracy on the test set (X_test)
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost], feed_dict={X : minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
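# Note (hypothetical sketch, not from the original notebook): to classify this custom
# image one would scale it (my_image / 255.), reshape it to (1, 64, 64, 3) and run it
# through forward_propagation with the learned weights inside an open tf.Session;
# that step is omitted here because `parameters` above are tf.Variables that were
# tied to the session closed inside model().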
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
Step5: 1.1 - Create placeholders
Step7: Expected Output
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output
|
14,540 | <ASSISTANT_TASK:>
Python Code:
import sympy as sym
sym.init_printing()
x, y = sym.symbols('x y')
expr = 3*x**2 + sym.log(x**2 + y**2 + 1)
expr
expr.subs({x: 17, y: 42}).evalf()
% timeit expr.subs({x: 17, y: 42}).evalf()
import math
f = lambda x, y: 3*x**2 + math.log(x**2 + y**2 + 1)
%timeit f(17, 42)
g = sym.lambdify([x, y], expr, modules=['math'])
g(17, 42)
%timeit g(17, 42)
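# Illustrative peek under the hood (assumes lambdastr is available in this SymPy
# version): lambdify first renders the expression as a lambda source string.
from sympy.utilities.lambdify import lambdastr
lambdastr([x, y], expr)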
import numpy as np
xarr = np.linspace(17,18,5)
h = sym.lambdify([x, y], expr)
out = h(xarr, 42)
out
z = z1, z2, z3 = sym.symbols('z:3')
expr2 = x*y*(z1+z2+z3)
func2 = sym.lambdify([x, y, z], expr2)
func2(1,2, (3,4,5))
# Vector arguments can be done as tuples when using odeint... (see video/example)
# How to efficiently deal with matrices without preconverting?
# Or just save as M, C, etc... What about pars? Third argument. Can it be dict or must it be tuple?
# How to efficiently save,
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: lambdify constructs string representation of python code and uses python eval to compile
|
14,541 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.sparse
%load_ext cython
p = 0.01
Nc, Na = 10000, 200
c = np.ones(Nc)
a = np.ones(Na)
K = np.random.random((Nc, Na)) < p
%timeit K.dot(a)
%timeit c.dot(K)
Ksp = scipy.sparse.csr_matrix(K)
%timeit scipy.sparse.csr_matrix(K)
np.all(Ksp.dot(a) == K.dot(a))
%timeit Ksp.dot(a)
np.all(Ksp.transpose(copy=False).dot(c) == c.dot(K))
%timeit Ksp.transpose(copy=False).dot(c)
csp = scipy.sparse.csr_matrix(c)
%timeit csp.dot(Ksp)
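# Illustrative: the CSR layout that the MKL wrapper below hands to mkl_dcsrmv --
# one value/column-index pair per stored non-zero plus a row-pointer array.
print("stored non-zeros: %d of %d entries" % (Ksp.nnz, Nc * Na))
print("data[:5]=%s indices[:5]=%s indptr[:5]=%s"
      % (Ksp.data[:5], Ksp.indices[:5], Ksp.indptr[:5]))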
# inspired by http://stackoverflow.com/questions/17158893/does-scipy-support-multithreading-for-sparse-matrix-multiplication-when-using-mk
# and https://github.com/afedynitch/MCEq/blob/master/MCEq/kernels.py
from ctypes import POINTER,c_void_p,c_int,c_char,c_double,byref,cdll
def SpMV_viaMKL(A, x, trans=False):
    """
    Wrapper to Intel's Sparse Matrix-Vector multiplication routine.
    Handles rectangular matrices.
    """
mkl = cdll.LoadLibrary("libmkl_rt.so")
mkl.mkl_set_num_threads(byref(c_int(4)))
SpMV = mkl.mkl_dcsrmv
(m, k) = A.shape
data = A.data.ctypes.data_as(POINTER(c_double))
pb = A.indptr[:-1].ctypes.data_as(POINTER(c_int))
pe = A.indptr[1:].ctypes.data_as(POINTER(c_int))
indices = A.indices.ctypes.data_as(POINTER(c_int))
# Allocate output, using same conventions as input
insize = m if trans else k
outsize = k if trans else m
y = np.empty(outsize, dtype=np.double, order='F')
if x.size != insize:
raise Exception("x must have n entries. x.size is %d, n is %d" % (x.size, outsize))
# Check input
if x.dtype.type is not np.double:
x = x.astype(np.double, copy=True)
np_x = x.ctypes.data_as(POINTER(c_double))
np_y = y.ctypes.data_as(POINTER(c_double))
# now call MKL. This returns the answer in np_y, which links to y
alpha = c_double(1.0)
beta = c_double(0.0)
npmatd = np.chararray(6)
npmatd[0] = 'G'
npmatd[3] = 'C'
matdescra = npmatd.ctypes.data_as(POINTER(c_char))
SpMV(byref(c_char("T" if trans else "N")), byref(c_int(m)), byref(c_int(k)), byref(alpha),
matdescra, data, indices, pb, pe, np_x, byref(beta), np_y )
return y
Kfloat = K.astype(np.float)
Kfloatsp = scipy.sparse.csr_matrix(Kfloat)
np.all(SpMV_viaMKL(Kfloatsp, a) == Ksp.dot(a))
%timeit SpMV_viaMKL(Kfloatsp, a)
np.all(SpMV_viaMKL(Kfloatsp, c, True) == c.dot(K))
%timeit SpMV_viaMKL(Kfloatsp, c, True)
%prun [SpMV_viaMKL(Kfloatsp, c, True) for i in range(1000)]
%%cython -l mkl_core -l mkl_intel_lp64
cimport numpy as np
import numpy as np
cdef extern from "mkl_types.h":
    ctypedef int MKL_INT  # assumed 32-bit MKL_INT (LP64 interface linked above)
cdef extern from "mkl.h" nogil:
double cblas_dasum (MKL_INT n, double *x, MKL_INT incx);
def cythonSpMV_viaMKL(np.ndarray[np.double_t] x):
    """
    Wrapper to Intel's Sparse Matrix-Vector multiplication routine.
    Handles rectangular matrices.
    """
#cdef MKL_INT n = x.shape[0]
#cdef MKL_INT incx = 1
return 2#cblas_dasum(n, &x[0], incx)
%timeit cythonSpMV_viaMKL(c)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Dense matrix vector multiplication
Step3: Sparse matrix vector multiplication
Step6: Sparse matrix vector multiplication using MKL
|
14,542 | <ASSISTANT_TASK:>
Python Code:
!ls -l corpus
import os
import numpy as np
import sys
import nltk
import unicodedata
from collections import Counter, namedtuple
import pickle
import numpy as np
from copy import deepcopy
%matplotlib inline
def find_text_files(basedir):
filepaths = []
for root, dirs, files in os.walk(basedir):
for file in files:
if file.endswith(".txt"):
filepaths.append(os.path.join(root, file))
return filepaths
PUNCTUATION_TRANSLATE_TABLE = {i: None \
for i in range(sys.maxunicode) \
if unicodedata.category(unichr(i)).startswith('P') and unichr(i) not in ['.', '\'']}
def fix_case(document):
words = document.split()
capitalize_counter = Counter()
lower_counter = Counter()
for idx, word in enumerate(words):
lower_word = word.lower()
if word == word.capitalize():
if idx > 0 and words[idx - 1] not in ['.', '?', '!']:
capitalize_counter[lower_word] += 1
else:
lower_counter[lower_word] += 1
for idx, word in enumerate(words):
lower_word = word.lower()
if lower_counter[lower_word] == 0 \
or float(capitalize_counter[lower_word]) / lower_counter[lower_word] > 0.75:
words[idx] = lower_word.capitalize()
else:
words[idx] = lower_word
return ' '.join(words)
def remove_punkt(document):
return document.translate(PUNCTUATION_TRANSLATE_TABLE).replace('.', ' . ')
def preprocessing(document):
document = fix_case(document)
document = remove_punkt(document)
# a long filter chain could be placed here
return document
def title_sentence(sentence):
words = sentence.split()
words[0] = words[0][0].upper() + words[0][1:]
return ' '.join(words)
def uppercase_start(document):
sentences = map(lambda sentence: sentence.strip(), document.split('.'))
sentences = [sentence for sentence in sentences if sentence != '']
return '. '.join(map(title_sentence, sentences)) + '.'
def glue_single_quote(document):
return document.replace(' \'', '\'')
def postprocessing(document):
document = uppercase_start(document)
document = glue_single_quote(document)
return document
import warnings
warnings.filterwarnings('ignore')
ngram_length = 3
text_length = 200
def read_data(path):
corpus = ''
for docpath in find_text_files(path):
with open(docpath) as doc:
doc = doc.read().decode('utf-8')
corpus += preprocessing(doc)
return corpus
def learn(corpus, ngram_length):
tokens = nltk.word_tokenize(corpus)
content_model = nltk.model.ngram.NgramModel(ngram_length, tokens)
return content_model
def generate(content_model):
# text generation without seed to get the seed
starting_words = content_model.generate(100)[-(ngram_length - 1):]
# generate text starting with random words
content = content_model.generate(text_length, starting_words)
return content
corpus = read_data('corpus')
content_model = learn(corpus, ngram_length)
content = generate(content_model)
print postprocessing(' '.join(content).encode('utf-8'))
warnings.filterwarnings('always')
from itertools import izip
def build_ngrams(text, n):
input_list = text.split()
return izip(*[input_list[i:] for i in range(n)])
list(build_ngrams('hello sad cruel cold world', 2))
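# The same helper with n=3 yields the trigrams that the generator below consumes.
list(build_ngrams('hello sad cruel cold world', 3))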
class NGramDistribution(object):
def __init__(self, ngrams):
self.distribution = {}
for long_gram in ngrams:
short_gram = long_gram[0:-1]
last_word = long_gram[-1]
if short_gram not in self.distribution:
self.distribution[short_gram] = {'total': 0, 'counter': Counter()}
self.distribution[short_gram]['total'] += ngrams[long_gram]
self.distribution[short_gram]['counter'].update({last_word: ngrams[long_gram]})
@property
def counter(self):
counter_pairs = [(key, self.distribution[key]['total']) \
for key in self.distribution]
return Counter(dict(counter_pairs))
from itertools import dropwhile
def remove_rare_ngrams(counter):
lower_bound = 1
for key, count in dropwhile(lambda key_count: \
key_count[1] > lower_bound, counter.most_common()):
del counter[key]
return counter
def remove_splited_sentences(counter):
for key in counter.keys():
if key[-1] == '.':
del counter[key]
return counter
def simple_stats_filter(counter):
counter = remove_rare_ngrams(counter)
counter = remove_splited_sentences(counter)
# some others filters
# ...
return counter
from datetime import datetime
class Index(object):
    def __init__(self, depth):
        self.depth = depth
        self.ngram = Counter()
        self.normalize_document = lambda doc: doc
        self.stats_filter = lambda ngram: ngram
        # Cache for the computed distribution; filled lazily by the `dist` property.
        self.__dist = None
def __reset(self):
self.__dist = None
def add_document(self, document):
normalized_document = self.normalize_document(document)
doc_counter = build_ngrams(normalized_document, self.depth + 1)
self.ngram.update(doc_counter)
self.__reset()
@property
def dist(self):
if self.__dist is not None:
return self.__dist
self.__dist = {}
current_counter = self.stats_filter(self.ngram)
for depth in reversed(range(1, self.depth + 1)):
ngram_dist = NGramDistribution(current_counter)
self.__dist[depth] = ngram_dist.distribution
current_counter = ngram_dist.counter
return self.__dist
import bisect
class MarkovChain(object):
def __init__(self, dist):
self.dist = dist
cumsum = np.cumsum([ngram['total'] for ngram in dist.values()])
self.__segments = dict(zip(cumsum, dist.keys()))
self.__sorted_keys = sorted(self.__segments.keys())
self.state = self.__start_sentence()
def __start_sentence(self):
rnd = np.random.randint(0, self.__sorted_keys[-1])
position = bisect.bisect_right(self.__sorted_keys, rnd)
return self.__segments[self.__sorted_keys[position]]
@property
    def word(self):
        # When the current state ends a sentence, flush it and restart the
        # chain from a fresh random bigram on the next call.
        if self.state[-1] == '.':
            sentence_end = ' '.join(self.state)
            self.state = self.__start_sentence()
            return sentence_end
        drop_word = self.state[0]
        next_word = '.'
try:
next_word = np.random.choice(\
self.dist[self.state]['counter'].keys(),
p = map(lambda cnt: \
float(cnt) / self.dist[self.state]['total'],
self.dist[self.state]['counter'].values()))
except KeyError:
pass
self.state = (self.state[1], next_word)
return drop_word
def generate(self, length):
for num in xrange(length):
yield self.word
index = Index(2)
index.normalize_document = preprocessing
index.stats_filter = simple_stats_filter
for docpath in find_text_files('corpus'):
with open(docpath) as doc:
index.add_document(doc.read().decode('utf-8'))
dist = index.dist[2]
print len(index.dist[2])
with open('distribution.dat', 'w') as fh:
pickle.dump(index.dist, fh)
!ls -lh distribution.dat
restored_dist = None
with open('distribution.dat') as fh:
restored_dist = pickle.load(fh)
len(restored_dist[2])
generator = MarkovChain(dist)
content = generator.generate(11000)
print postprocessing(' '.join(content))
index = Index(2)
index.normalize_document = preprocessing
index.stats_filter = simple_stats_filter
for docpath in find_text_files('russian'):
with open(docpath) as doc:
index.add_document(doc.read().decode('utf-8'))
generator = MarkovChain(index.dist[2])
content = generator.generate(250)
print postprocessing(' '.join(content))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: Import the required libraries. nltk stands out here: it is used mainly to demonstrate what can be expected from an ngram model.
    Step2: Reuse the seminar code for removing punctuation, but keep the "." character, since we still need it.
    Step3: Do the text postprocessing right away.
    Step4: Run the nltk Markov-chain generator on our corpus, trained on trigrams, and see what to expect from it, timing the individual parts of the process along the way.
    Step5: The text turned out surprisingly coherent. The ngram model made it resemble Jim Morrison's lyrics: any two adjacent words sit together nicely, but the overall meaning is somewhere beyond human comprehension. Which is, in principle, what was expected.
    Step6: Per the assignment, we need to store a cascade of ngrams: by 1 word, then by 2 words. That is, we must be able to derive an (n-1)-gram from an ngram. We also need the distribution of ngram continuations. Let's extract this functionality (deriving lower-order ngrams) into a class.
    Step7: Ignore unpopular ngrams.
    Step8: np.random.choice breaks when the probability vector does not sum to 1. If we choose a leading bigram to start a sentence, there is a huge number of candidates (checked with an empty stats_filter, i.e. on all bigrams), and because of floating-point inaccuracy the probabilities sum to something slightly different from 1, which breaks the function (see the sketch after this list).
    Step9: Serialization
    Step10: individuals better at fork
    Step11: PoC. The goal is to show handling of unicode, not high-quality generation.
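    A minimal sketch of the workaround hinted at in Step8: renormalize the probability vector before passing it to np.random.choice, so that floating-point drift in the sum cannot trigger the "probabilities do not sum to 1" error. The continuation counts below are a made-up example, not data from the corpus.
    import numpy as np
    counts = {'the': 7, 'a': 2, 'cold': 1}  # hypothetical continuation counts for one bigram
    words = list(counts.keys())
    probs = np.array([counts[w] for w in words], dtype=float)
    probs /= probs.sum()  # renormalize so the probabilities sum to exactly 1
    next_word = np.random.choice(words, p=probs)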
|
14,543 | <ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import random
from collections import Counter
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
import time
def subsample_words(words, threshold):
# This will be the probability to keep each word
keep_probs = np.random.uniform(0.0, 1.0, len(words))
total_words = len(words)
# Counting the frequency of each word
words_freqs = Counter(words)
words_freqs = {word: count/total_words for word, count in words_freqs.items()}
# Placeholder to keep the train words
keep_words = []
for idx, word in enumerate(words):
discard_prob = 1.0 - np.sqrt(threshold / words_freqs[word])
if keep_probs[idx] >= discard_prob:
keep_words.append(word)
return keep_words
## Your code here
train_words = subsample_words(int_words, threshold=1e-5)
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
r = np.random.randint(1, window_size + 1)
low_idx = max(idx - r, 0)
    high_idx = min(idx + r + 1, len(words))  # slice end may equal len(words), so the last word is not dropped
wnd = set(words[low_idx:idx] + words[idx+1:high_idx])
return list(wnd)
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels=labels,
inputs=embed, num_sampled=n_sampled, num_classes=n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
    Step3: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words (a sketch of these lookup tables follows this list).
Step4: Subsampling
Step5: Making batches
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
Step8: Embedding
Step9: Negative sampling
Step10: Validation
Step11: Training
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
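    Referring back to Step3: utils.create_lookup_tables is not shown in this notebook, so the following is only a minimal sketch of what such a helper might look like, assuming it orders the vocabulary by descending frequency exactly as described. It is an illustration, not the course's actual utils implementation.
    from collections import Counter
    def create_lookup_tables_sketch(words):
        # Sort the vocabulary from most to least common.
        word_counts = Counter(words)
        sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
        # The most frequent word gets id 0, the next most frequent gets 1, and so on.
        int_to_vocab = {idx: word for idx, word in enumerate(sorted_vocab)}
        vocab_to_int = {word: idx for idx, word in int_to_vocab.items()}
        return vocab_to_int, int_to_vocab
    # Toy usage: 'the' is the most frequent token, so vocab_to_int['the'] == 0.
    vocab_to_int, int_to_vocab = create_lookup_tables_sketch("the quick the lazy the dog".split())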
|
14,544 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
if 'google.colab' in sys.modules:
!pip install --upgrade pip
!pip install -U tfx tensorflow-model-analysis
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tfx
print('TF version: {}'.format(tf.__version__))
print('TFMA version: {}'.format(tfma.__version__))
print('TFX version: {}'.format(tfx.__version__))
PIPELINE_NAME="my_pipeline"
import os
# Set this project directory to your new tfx pipeline project.
PROJECT_DIR=os.path.join(os.path.expanduser("~"), "imported", PIPELINE_NAME)
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=penguin
%cd {PROJECT_DIR}
import sys
!{sys.executable} -m models.features_test
!tfx pipeline create --engine=local --pipeline_path=local_runner.py
# Update and run the pipeline.
!tfx pipeline update --engine=local --pipeline_path=local_runner.py \
&& tfx run create --engine=local --pipeline_name={PIPELINE_NAME}
import tensorflow as tf
import tfx
from ml_metadata import errors
from ml_metadata.proto import metadata_store_pb2
from tfx.types import artifact_utils
# TODO(b/171447278): Move these functions into TFX library.
def get_latest_executions(store, pipeline_name, component_id = None):
  """Fetch all pipeline runs."""
if component_id is None: # Find entire pipeline runs.
run_contexts = [
c for c in store.get_contexts_by_type('run')
if c.properties['pipeline_name'].string_value == pipeline_name
]
else: # Find specific component runs.
run_contexts = [
c for c in store.get_contexts_by_type('component_run')
if c.properties['pipeline_name'].string_value == pipeline_name and
c.properties['component_id'].string_value == component_id
]
if not run_contexts:
return []
# Pick the latest run context.
latest_context = max(run_contexts,
key=lambda c: c.last_update_time_since_epoch)
return store.get_executions_by_context(latest_context.id)
def get_latest_artifacts(store, pipeline_name, component_id = None):
  """Fetch all artifacts from latest pipeline execution."""
executions = get_latest_executions(store, pipeline_name, component_id)
# Fetch all artifacts produced from the given executions.
execution_ids = [e.id for e in executions]
events = store.get_events_by_execution_ids(execution_ids)
artifact_ids = [
event.artifact_id for event in events
if event.type == metadata_store_pb2.Event.OUTPUT
]
return store.get_artifacts_by_id(artifact_ids)
def find_latest_artifacts_by_type(store, artifacts, artifact_type):
  """Get the latest artifacts of a specified type."""
# Get type information from MLMD
try:
artifact_type = store.get_artifact_type(artifact_type)
except errors.NotFoundError:
return []
# Filter artifacts with type.
  filtered_artifacts = [artifact for artifact in artifacts
                        if artifact.type_id == artifact_type.id]
# Convert MLMD artifact data into TFX Artifact instances.
return [artifact_utils.deserialize_artifact(artifact_type, artifact)
for artifact in filtered_artifacts]
from tfx.orchestration.experimental.interactive import visualizations
def visualize_artifacts(artifacts):
  """Visualizes artifacts using standard visualization modules."""
for artifact in artifacts:
visualization = visualizations.get_registry().get_visualization(
artifact.type_name)
if visualization:
visualization.display(artifact)
from tfx.orchestration.experimental.interactive import standard_visualizations
standard_visualizations.register_standard_visualizations()
import pprint
from tfx.orchestration import metadata
from tfx.types import artifact_utils
from tfx.types import standard_artifacts
def preview_examples(artifacts):
  """Preview a few records from Examples artifacts."""
pp = pprint.PrettyPrinter()
for artifact in artifacts:
print("==== Examples artifact:{}({})".format(artifact.name, artifact.uri))
for split in artifact_utils.decode_split_names(artifact.split_names):
print("==== Reading from split:{}".format(split))
split_uri = artifact_utils.get_split_uri([artifact], split)
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(split_uri, name)
for name in os.listdir(split_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames,
compression_type="GZIP")
# Iterate over the first 2 records and decode them.
for tfrecord in dataset.take(2):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
import local_runner
metadata_connection_config = metadata.sqlite_metadata_connection_config(
local_runner.METADATA_PATH)
with metadata.Metadata(metadata_connection_config) as metadata_handler:
  # Search all artifacts from the previous pipeline run.
artifacts = get_latest_artifacts(metadata_handler.store, PIPELINE_NAME)
# Find artifacts of Examples type.
examples_artifacts = find_latest_artifacts_by_type(
metadata_handler.store, artifacts,
standard_artifacts.Examples.TYPE_NAME)
# Find artifacts generated from StatisticsGen.
stats_artifacts = find_latest_artifacts_by_type(
metadata_handler.store, artifacts,
standard_artifacts.ExampleStatistics.TYPE_NAME)
# Find artifacts generated from SchemaGen.
schema_artifacts = find_latest_artifacts_by_type(
metadata_handler.store, artifacts,
standard_artifacts.Schema.TYPE_NAME)
# Find artifacts generated from ExampleValidator.
anomalies_artifacts = find_latest_artifacts_by_type(
metadata_handler.store, artifacts,
standard_artifacts.ExampleAnomalies.TYPE_NAME)
preview_examples(examples_artifacts)
visualize_artifacts(stats_artifacts)
visualize_artifacts(schema_artifacts)
visualize_artifacts(anomalies_artifacts)
!tfx pipeline update --engine=local --pipeline_path=local_runner.py \
&& tfx run create --engine=local --pipeline_name={PIPELINE_NAME}
with metadata.Metadata(metadata_connection_config) as metadata_handler:
    # Search all artifacts from the previous run of Transform component.
artifacts = get_latest_artifacts(metadata_handler.store,
PIPELINE_NAME, "Transform")
# Find artifacts of Examples type.
transformed_examples_artifacts = find_latest_artifacts_by_type(
metadata_handler.store, artifacts,
standard_artifacts.Examples.TYPE_NAME)
preview_examples(transformed_examples_artifacts)
!tfx pipeline update --engine=local --pipeline_path=local_runner.py \
&& tfx run create --engine=local --pipeline_name={PIPELINE_NAME}
# Update and run the pipeline.
!tfx pipeline update --engine=local --pipeline_path=local_runner.py \
&& tfx run create --engine=local --pipeline_name={PIPELINE_NAME}
# Install TFMA notebook extension.
!jupyter labextension install tensorflow_model_analysis@{tfma.__version__}
with metadata.Metadata(metadata_connection_config) as metadata_handler:
  # Search all artifacts from the previous pipeline run.
artifacts = get_latest_artifacts(metadata_handler.store, PIPELINE_NAME)
model_evaluation_artifacts = find_latest_artifacts_by_type(
metadata_handler.store, artifacts,
standard_artifacts.ModelEvaluation.TYPE_NAME)
if model_evaluation_artifacts:
tfma_result = tfma.load_eval_result(model_evaluation_artifacts[0].uri)
tfma.view.render_slicing_metrics(tfma_result)
# Update and run the pipeline.
!tfx pipeline update --engine=local --pipeline_path=local_runner.py \
&& tfx run create --engine=local --pipeline_name={PIPELINE_NAME}
!pip install --upgrade -q kfp
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold
# Move skaffold binary into your path
!mv skaffold /home/jupyter/.local/bin/
ENDPOINT='' # Enter your ENDPOINT here.
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/penguin/
!tfx pipeline create \
--engine=kubeflow \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
!tfx run create --engine=kubeflow --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a TFX pipeline for your data with Penguin template
Step2: Install required package
Step3: Let's check the versions of TFX.
Step4: We are ready to create a pipeline.
Step5: Copy template files.
Step6: Change the working directory context in this notebook to the project directory.
Step7: NOTE
Step8: Create a TFX pipeline in local environment.
Step9: pipeline create command registers your pipeline defined in local_runner.py
Step 2. Ingest YOUR data to the pipeline.
Step15: You should see "Component ExampleValidator is finished." if the pipeline ran successfully.
Step16: Now we can read metadata of output artifacts from MLMD.
Step17: Now we can examine outputs from each component.
Step18: By default, TFX ExampleGen divides examples into two splits, train and
Step19: These statistics are supplied to SchemaGen to construct a schema of data
Step20: This schema is automatically inferred from the output of StatisticsGen.
Step21: If any anomalies were found, you may review your data that all examples
Step 3. (Optional) Feature engineering with Transform component.
Step22: If the pipeline ran successfully, you should see "Component Transform is
Step23: Step 4. Train your model with Trainer component.
Step24: When this execution runs successfully, you have now created and run your first
Step 5. (Optional) Evaluate the model with Evaluator and publish with pusher.
Step25: Examine output of Evaluator
Step26: If installation is completed, please reload your browser to make the
Step27: Adds Pusher component to the pipeline.
Step28: You should be able to find your new model at SERVING_MODEL_DIR.
Step 6. (Optional) Deploy your pipeline to Kubeflow Pipelines on GCP.
Step29: You need to move skaffold binary to the place where your shell can find it.
Step30: You also need a Kubeflow Pipelines cluster to run the pipeline. Please
Step31: To run our code in a Kubeflow Pipelines cluster, we need to pack our code into
Step32: Set data location.
Step33: Update the data location stored at DATA_PATH in kubeflow_runner.py.
Step34: Now start an execution run with the newly created pipeline using the
|
14,545 | <ASSISTANT_TASK:>
Python Code:
from scipy import stats
import numpy as np
np.random.seed(42)
x = np.random.normal(0, 1, 1000)
y = np.random.normal(0, 1, 1000)
alpha = 0.01
s, p = stats.ks_2samp(x, y)
result = (p <= alpha)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,546 | <ASSISTANT_TASK:>
Python Code:
import numpy
import pandas
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
data = pandas.read_csv('nesarc_pds.csv', low_memory=False)
# S2AQ8A - HOW OFTEN DRANK ANY ALCOHOL IN LAST 12 MONTHS (99 - Unknown)
# S2AQ8B - NUMBER OF DRINKS OF ANY ALCOHOL USUALLY CONSUMED ON DAYS WHEN DRANK ALCOHOL IN LAST 12 MONTHS (99 - Unknown)
# S2AQ3 - DRANK AT LEAST 1 ALCOHOLIC DRINK IN LAST 12 MONTHS
#setting variables you will be working with to numeric
data['S2AQ8A'] = data['S2AQ8A'].convert_objects(convert_numeric=True)
data['S2AQ8B'] = data['S2AQ8B'].convert_objects(convert_numeric=True)
data['S2AQ3'] = data['S2AQ3'].convert_objects(convert_numeric=True)
#subset data to adults aged 19 to 34 who have drunk alcohol in the past 12 months
subset=data[(data['AGE']>=19) & (data['AGE']<=34) & (data['S2AQ3']==1)]
subset['S2AQ8A']=subset['S2AQ8A'].replace(99, numpy.nan)
subset['S3BD4Q2DR']=subset['S3BD4Q2DR'].replace(99, numpy.nan)
alcohol_usage_map = {
1: 365,
2: 330,
3: 182,
4: 104,
5: 52,
6: 30,
7: 12,
8: 9,
9: 5,
10: 2,
}
subset['ALCO_FREQMO'] = subset['S2AQ8A'].map(alcohol_usage_map)
#converting new variable ALCO_FREQMO to numeric
subset['ALCO_FREQMO'] = subset['ALCO_FREQMO'].convert_objects(convert_numeric=True)
subset['ALCO_NUM_EST'] = subset['ALCO_FREQMO'] * subset['S2AQ8B']
ct1 = subset.groupby('ALCO_NUM_EST').size()
subset_race = subset[['ALCO_NUM_EST', 'ETHRACE2A']].dropna()
# using ols function for calculating the F-statistic and associated p value
model1 = smf.ols(formula='ALCO_NUM_EST ~ C(ETHRACE2A)', data=subset_race)
results1 = model1.fit()
print (results1.summary())
print ('means for ALCO_NUM_EST by race')
m2= subset_race.groupby('ETHRACE2A').mean()
print (m2)
print ('standard dev for ALCO_NUM_EST by race')
sd2 = subset_race.groupby('ETHRACE2A').std()
print (sd2)
mc1 = multi.MultiComparison(subset_race['ALCO_NUM_EST'], subset_race['ETHRACE2A'])
res1 = mc1.tukeyhsd()
print(res1.summary())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: Then the OLS regression (ANOVA F-test) is run
    Step2: And as Prob (F-statistic) is less than 0.05, I can reject the null hypothesis (the same check can be done programmatically, see the snippet after this list).
Step3: Tukey's HSD post hoc test
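    A small optional snippet for the check described in Step2: reading the F-statistic and its p-value directly from the fitted statsmodels results object instead of off the printed summary. It assumes results1 from the code above has already been fitted.
    alpha = 0.05
    print ('F = %.3f, p = %.6f' % (results1.fvalue, results1.f_pvalue))
    if results1.f_pvalue < alpha:
        print ('Reject the null hypothesis: mean ALCO_NUM_EST differs across ETHRACE2A groups.')
    else:
        print ('Fail to reject the null hypothesis.')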
|
14,547 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
14,548 | <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
channel = 'MEG 1332' # include only this channel in analysis
include = [channel]
picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,
exclude='bads')
event_id = 1
reject = dict(grad=4000e-13, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition1 = epochs1.get_data() # as 3D matrix
event_id = 2
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition2 = epochs2.get_data() # as 3D matrix
condition1 = condition1[:, 0, :] # take only one channel to get a 2D array
condition2 = condition2[:, 0, :] # take only one channel to get a 2D array
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([condition1, condition2], n_permutations=1000,
threshold=threshold, tail=1, n_jobs=1)
times = epochs1.times
plt.close('all')
plt.subplot(211)
plt.title('Channel : ' + channel)
plt.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0),
label="ERF Contrast (Event 1 - Event 2)")
plt.ylabel("MEG (T / m)")
plt.legend()
plt.subplot(212)
for i_c, c in enumerate(clusters):
c = c[0]
if cluster_p_values[i_c] <= 0.05:
h = plt.axvspan(times[c.start], times[c.stop - 1],
color='r', alpha=0.3)
else:
plt.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3),
alpha=0.3)
hf = plt.plot(times, T_obs, 'g')
plt.legend((h, ), ('cluster p-value < 0.05', ))
plt.xlabel("time (ms)")
plt.ylabel("f-values")
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Compute statistic
Step4: Plot
|
14,549 | <ASSISTANT_TASK:>
Python Code:
import logging
reload(logging)
log_fmt = '%(asctime)-9s %(levelname)-8s: %(message)s'
logging.basicConfig(format=log_fmt)
# Change to info once the notebook runs ok
logging.getLogger().setLevel(logging.INFO)
%pylab inline
import copy
import os
from time import sleep
from subprocess import Popen
import pandas as pd
# Support to access the remote target
import devlib
from env import TestEnv
# Support for trace events analysis
from trace import Trace
# Suport for FTrace events parsing and visualization
import trappy
# Setup a target configuration
my_target_conf = {
# Target platform and board
"platform" : 'android',
# Add target support
"board" : 'hikey',
# Device ID
#"device" : "00b1346f0878ccb1",
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
}
my_tests_conf = {
# Folder where all the results will be collected
"results_dir" : "Android_Antutu",
# Platform configurations to test
"confs" : [
{
"tag" : "antutu",
"flags" : "ftrace", # Enable FTrace events
"sched_features" : "ENERGY_AWARE", # enable EAS
},
],
}
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
# the provided test configuration (my_test_conf)
te = TestEnv(target_conf=my_target_conf, test_conf=my_tests_conf)
target = te.target
def set_performance():
target.cpufreq.set_all_governors('performance')
def set_powersave():
target.cpufreq.set_all_governors('powersave')
def set_interactive():
target.cpufreq.set_all_governors('interactive')
def set_sched():
target.cpufreq.set_all_governors('sched')
def set_ondemand():
target.cpufreq.set_all_governors('ondemand')
for cpu in target.list_online_cpus():
tunables = target.cpufreq.get_governor_tunables(cpu)
target.cpufreq.set_governor_tunables(
cpu,
'ondemand',
**{'sampling_rate' : tunables['sampling_rate_min']}
)
# CPUFreq configurations to test
confs = {
'performance' : {
'label' : 'prf',
'set' : set_performance,
},
# 'powersave' : {
# 'label' : 'pws',
# 'set' : set_powersave,
# },
'interactive' : {
'label' : 'int',
'set' : set_interactive,
},
'sched' : {
'label' : 'sch',
'set' : set_sched,
},
# 'ondemand' : {
# 'label' : 'odm',
# 'set' : set_ondemand,
# }
}
# The set of results for each comparison test
results = {}
def check_packages(pkgname):
try:
output = target.execute('pm list packages -f | grep -i {}'.format(pkgname))
except Exception:
raise RuntimeError('Package: [{}] not availabe on target'.format(pkgname))
# Check for specified PKG name being available on target
#adb -s 0123456789 shell "am kill-all"
#adb -s 0123456789 shell "am start -W -n com.antutu.ABenchMark/.ABenchMarkStart"
#adb shell "am force-stop com.antutu.ABenchMark"
#check_packages('com.futuremark.pcmark.android.benchmark')
check_packages('com.antutu.ABenchMark')
def pcmark_run(exp_dir):
# Unlock device screen (assume no password required)
target.execute('input keyevent 82')
# Start PCMark on the target device
# target.execute('monkey -p com.futuremark.pcmark.android.benchmark -c android.intent.category.LAUNCHER 1')
target.execute('am start -W -n com.antutu.ABenchMark/.ABenchMarkStart')
# Wait few seconds to make sure the app is loaded
sleep(5)
# Flush entire log
target.clear_logcat()
# Run performance workload (assume screen is vertical)
target.execute('input tap 512 200')
# Wait for completion (7 minutes in total) and collect log
log_file = os.path.join(exp_dir, 'log.txt')
# Wait 5 minutes
sleep(300)
# Start collecting the log
with open(log_file, 'w') as log:
logcat = Popen(['adb logcat', 'com.antutu.ABenchMark/.ABenchMarkStart', '*:S'],
stdout=log,
shell=True)
    # Wait about another 100 seconds for the benchmark to complete
sleep(100)
# Terminate logcat
logcat.kill()
# Get scores from logcat
score_file = os.path.join(exp_dir, 'score.txt')
os.popen('grep -o "PCMA_.*_SCORE .*" {} | sed "s/ = / /g" | sort -u > {}'.format(log_file, score_file))
# Close application
target.execute('am force-stop com.antutu.ABenchMark')
return score_file
def antutu_run(exp_dir):
!wa run antutu.yaml -f -d $exp_dir
score_file = exp_dir+"/results.csv"
print score_file
import csv
from collections import defaultdict
def experiment(governor, exp_dir):
os.system('mkdir -p {}'.format(exp_dir));
logging.info('------------------------')
logging.info('Run workload using %s governor', governor)
confs[governor]['set']()
### Run the benchmark ###
#score_file = pcmark_run(exp_dir)
score_file = antutu_run(exp_dir)
# Save the score as a dictionary
scores = dict()
#with open(score_file, 'r') as f:
# lines = f.readlines()
# for l in lines:
# info = l.split()
# scores.update({info[0] : float(info[1])})
inFile = open('/home/lubaoquan/tools/lisa/lisa/results/Android_PCMark/'+governor+'/results.csv', 'r')
inLine = csv.reader(inFile)
next(inLine, None)
collectValue = defaultdict(list)
for row in inLine:
item = row[3]
value = row[4]
# collectValue[item].append(float(value))
# for item, value in collectValue.iteritems():
if item == 'execution_time':
continue
print item, value
scores.update({item : float(value)})
# return all the experiment data
return {
'dir' : exp_dir,
'scores' : scores,
}
# Run the benchmark in all the configured governors
for governor in confs:
test_dir = os.path.join(te.res_dir, governor)
res = experiment(governor, test_dir)
results[governor] = copy.deepcopy(res)
# Create results DataFrame
data = {}
for governor in confs:
data[governor] = {}
for score_name, score in results[governor]['scores'].iteritems():
data[governor][score_name] = score
#df = pd.DataFrame.from_dict(data)
#df
#data['performance']['CPU']=12405
#data['interactive']['CPU']=11000
#data['performance']['GPU']=2434
#data['interactive']['GPU']=2100
#data['performance']['UX']=12939
#data['interactive']['UX']=11100
#data['performance']['RAM']=4358
#data['interactive']['RAM']=4100
df = pd.DataFrame.from_dict(data)
df
df.plot(kind='bar', rot=45, figsize=(16,8),
title='Antutu CPU scores vs SchedFreq governors');
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test Environment set up
Step2: Support Functions
Step3: Run Antutu and collect scores
Step4: After running the benchmark for the specified governors we can show the scores
|
14,550 | <ASSISTANT_TASK:>
Python Code:
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
products
products['sentiment']
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products['perfect']
products['contains_perfect'] = products['perfect'].apply(lambda x: 1 if x >= 1 else 0)
products['contains_perfect'].sum()
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
sentiment
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1. / (1 + np.exp(-scores))
# return predictions
return predictions
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
# print coefficients
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# print 'predictions', predictions
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
# print 'errors', errors
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[:, j])
# print 'derivative', derivative
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
            coefficients[j] += step_size * derivative  # update only the j-th coefficient
# print 'coefficients', coefficients
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
class_predictions = graphlab.SArray(scores).apply(lambda x: 1 if x> 0 else -1)
print class_predictions
unique, counts = np.unique(class_predictions, return_counts=True)
print unique, counts
def class_predictions(score):
return 1 if score > 0 else -1
f = np.vectorize(class_predictions)
predictions = f(scores)
print predictions
unique, counts = np.unique(predictions, return_counts=True)
print unique, counts
# One possible completion: count disagreements between the vectorized class
# predictions computed above and the true sentiment labels.
num_mistakes = np.sum(predictions != sentiment) # YOUR CODE HERE
accuracy = 1. - float(num_mistakes) / len(products) # YOUR CODE HERE
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load review dataset
Step2: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step3: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step4: Note
Step5: Now, we will perform 2 simple data transformations
Step6: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Step7: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step8: Now, write some code to compute the number of product reviews that contain the word perfect.
Step9: Quiz Question. How many reviews contain the word perfect?
Step10: Convert SFrame to NumPy array
Step11: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step12: Let us convert the data into NumPy arrays.
Step13: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
Step14: Quiz Question
Step15: Estimating conditional probability with link function
    Step16: Aside. How the link function works with matrix algebra (worked equations for this and the per-coefficient derivative follow this list)
Step17: Compute derivative of log likelihood with respect to a single coefficient
Step18: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
Step19: Checkpoint
Step20: Taking gradient steps
Step21: Now, let us run the logistic regression solver.
Step22: Quiz question
Step23: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above
Step24: Quiz question
Step25: Quiz question
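    Worked equations for Steps 16-18 above, written to match the predict_probability, feature_derivative and compute_log_likelihood code in this notebook; the notation (h(x_i) for the feature vector of review i, w for the coefficient vector) is introduced here only for illustration.
    % Link (sigmoid) function, applied row-wise via the product of the feature matrix with w:
    P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^\top h(\mathbf{x}_i))}
    % Derivative of the log likelihood with respect to a single coefficient w_j
    % (errors = indicator - predictions, dotted with feature column j):
    \frac{\partial \ell(\mathbf{w})}{\partial w_j} = \sum_{i=1}^{N} h_j(\mathbf{x}_i)\,\bigl(\mathbf{1}[y_i = +1] - P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w})\bigr)
    % Log likelihood used to monitor convergence:
    \ell(\mathbf{w}) = \sum_{i=1}^{N} \Bigl((\mathbf{1}[y_i = +1] - 1)\,\mathbf{w}^\top h(\mathbf{x}_i) - \ln\bigl(1 + \exp(-\mathbf{w}^\top h(\mathbf{x}_i))\bigr)\Bigr)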
|
14,551 | <ASSISTANT_TASK:>
Python Code:
import jax.numpy as jnp
from jax import custom_jvp
@custom_jvp
def f(x, y):
return jnp.sin(x) * y
@f.defjvp
def f_jvp(primals, tangents):
x, y = primals
x_dot, y_dot = tangents
primal_out = f(x, y)
tangent_out = jnp.cos(x) * x_dot * y + jnp.sin(x) * y_dot
return primal_out, tangent_out
from jax import jvp, grad
print(f(2., 3.))
y, y_dot = jvp(f, (2., 3.), (1., 0.))
print(y)
print(y_dot)
print(grad(f)(2., 3.))
# Equivalent alternative using the defjvps convenience wrapper
@custom_jvp
def f(x, y):
return jnp.sin(x) * y
f.defjvps(lambda x_dot, primal_out, x, y: jnp.cos(x) * x_dot * y,
lambda y_dot, primal_out, x, y: jnp.sin(x) * y_dot)
print(f(2., 3.))
y, y_dot = jvp(f, (2., 3.), (1., 0.))
print(y)
print(y_dot)
print(grad(f)(2., 3.))
from jax import custom_vjp
@custom_vjp
def f(x, y):
return jnp.sin(x) * y
def f_fwd(x, y):
# Returns primal output and residuals to be used in backward pass by f_bwd.
return f(x, y), (jnp.cos(x), jnp.sin(x), y)
def f_bwd(res, g):
cos_x, sin_x, y = res # Gets residuals computed in f_fwd
return (cos_x * g * y, sin_x * g)
f.defvjp(f_fwd, f_bwd)
print(grad(f)(2., 3.))
import jax.numpy as jnp
def log1pexp(x):
return jnp.log(1. + jnp.exp(x))
log1pexp(3.)
from jax import jit, grad, vmap
print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
print(grad(log1pexp)(100.))
from jax import make_jaxpr
make_jaxpr(grad(log1pexp))(100.)
from jax import custom_jvp
@custom_jvp
def log1pexp(x):
return jnp.log(1. + jnp.exp(x))
@log1pexp.defjvp
def log1pexp_jvp(primals, tangents):
x, = primals
x_dot, = tangents
ans = log1pexp(x)
ans_dot = (1 - 1/(1 + jnp.exp(x))) * x_dot
return ans, ans_dot
print(grad(log1pexp)(100.))
print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
@custom_jvp
def log1pexp(x):
return jnp.log(1. + jnp.exp(x))
log1pexp.defjvps(lambda t, ans, x: (1 - 1/(1 + jnp.exp(x))) * t)
print(grad(log1pexp)(100.))
print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
def f(x):
return x / (1 + jnp.sqrt(x))
print(grad(f)(0.))
@custom_jvp
def f(x):
return x / (1 + jnp.sqrt(x))
@f.defjvp
def f_jvp(primals, tangents):
x, = primals
x_dot, = tangents
ans = f(x)
ans_dot = ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * x_dot
return ans, ans_dot
print(grad(f)(0.))
@custom_jvp
def f(x):
return x / (1 + jnp.sqrt(x))
f.defjvps(lambda t, ans, x: ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * t)
print(grad(f)(0.))
from functools import partial
from jax import custom_vjp
@custom_vjp
def clip_gradient(lo, hi, x):
return x # identity function
def clip_gradient_fwd(lo, hi, x):
return x, (lo, hi) # save bounds as residuals
def clip_gradient_bwd(res, g):
lo, hi = res
return (None, None, jnp.clip(g, lo, hi)) # use None to indicate zero cotangents for lo and hi
clip_gradient.defvjp(clip_gradient_fwd, clip_gradient_bwd)
import matplotlib.pyplot as plt
from jax import vmap
t = jnp.linspace(0, 10, 1000)
plt.plot(jnp.sin(t))
plt.plot(vmap(grad(jnp.sin))(t))
def clip_sin(x):
x = clip_gradient(-0.75, 0.75, x)
return jnp.sin(x)
plt.plot(clip_sin(t))
plt.plot(vmap(grad(clip_sin))(t))
from jax.lax import while_loop
def fixed_point(f, a, x_guess):
def cond_fun(carry):
x_prev, x = carry
return jnp.abs(x_prev - x) > 1e-6
def body_fun(carry):
_, x = carry
return x, f(a, x)
_, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))
return x_star
def newton_sqrt(a):
update = lambda a, x: 0.5 * (x + a / x)
return fixed_point(update, a, a)
print(newton_sqrt(2.))
print(jit(vmap(newton_sqrt))(jnp.array([1., 2., 3., 4.])))
from jax import vjp
@partial(custom_vjp, nondiff_argnums=(0,))
def fixed_point(f, a, x_guess):
def cond_fun(carry):
x_prev, x = carry
return jnp.abs(x_prev - x) > 1e-6
def body_fun(carry):
_, x = carry
return x, f(a, x)
_, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))
return x_star
def fixed_point_fwd(f, a, x_init):
x_star = fixed_point(f, a, x_init)
return x_star, (a, x_star)
def fixed_point_rev(f, res, x_star_bar):
a, x_star = res
_, vjp_a = vjp(lambda a: f(a, x_star), a)
a_bar, = vjp_a(fixed_point(partial(rev_iter, f),
(a, x_star, x_star_bar),
x_star_bar))
return a_bar, jnp.zeros_like(x_star)
def rev_iter(f, packed, u):
a, x_star, x_star_bar = packed
_, vjp_x = vjp(lambda x: f(a, x), x_star)
return x_star_bar + vjp_x(u)[0]
fixed_point.defvjp(fixed_point_fwd, fixed_point_rev)
print(newton_sqrt(2.))
print(grad(newton_sqrt)(2.))
print(grad(grad(newton_sqrt))(2.))
print(grad(jnp.sqrt)(2.))
print(grad(grad(jnp.sqrt))(2.))
from jax import custom_jvp
import jax.numpy as jnp
# f :: a -> b
@custom_jvp
def f(x):
return jnp.sin(x)
# f_jvp :: (a, T a) -> (b, T b)
def f_jvp(primals, tangents):
x, = primals
t, = tangents
return f(x), jnp.cos(x) * t
f.defjvp(f_jvp)
from jax import jvp
print(f(3.))
y, y_dot = jvp(f, (3.,), (1.,))
print(y)
print(y_dot)
from jax import grad
print(grad(f)(3.))
print(grad(grad(f))(3.))
@custom_jvp
def f(x, y):
return x ** 2 * y
@f.defjvp
def f_jvp(primals, tangents):
x, y = primals
x_dot, y_dot = tangents
primal_out = f(x, y)
tangent_out = 2 * x * y * x_dot + x ** 2 * y_dot
return primal_out, tangent_out
print(grad(f)(2., 3.))
@custom_jvp
def f(x):
return jnp.sin(x)
f.defjvps(lambda t, ans, x: jnp.cos(x) * t)
print(grad(f)(3.))
@custom_jvp
def f(x, y):
return x ** 2 * y
f.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,
lambda y_dot, primal_out, x, y: x ** 2 * y_dot)
print(grad(f)(2., 3.))
print(grad(f, 0)(2., 3.)) # same as above
print(grad(f, 1)(2., 3.))
@custom_jvp
def f(x, y):
return x ** 2 * y
f.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,
None)
print(grad(f)(2., 3.))
print(grad(f, 0)(2., 3.)) # same as above
print(grad(f, 1)(2., 3.))
@custom_jvp
def f(x):
print('called f!') # a harmless side-effect
return jnp.sin(x)
@f.defjvp
def f_jvp(primals, tangents):
print('called f_jvp!') # a harmless side-effect
x, = primals
t, = tangents
return f(x), jnp.cos(x) * t
from jax import vmap, jit
print(f(3.))
print(vmap(f)(jnp.arange(3.)))
print(jit(f)(3.))
y, y_dot = jvp(f, (3.,), (1.,))
print(y_dot)
print(grad(f)(3.))
grad(grad(f))(3.)
@custom_jvp
def f(x):
if x > 0:
return jnp.sin(x)
else:
return jnp.cos(x)
@f.defjvp
def f_jvp(primals, tangents):
x, = primals
x_dot, = tangents
ans = f(x)
if x > 0:
return ans, 2 * x_dot
else:
return ans, 3 * x_dot
print(grad(f)(1.))
print(grad(f)(-1.))
from jax import custom_vjp
import jax.numpy as jnp
# f :: a -> b
@custom_vjp
def f(x):
return jnp.sin(x)
# f_fwd :: a -> (b, c)
def f_fwd(x):
return f(x), jnp.cos(x)
# f_bwd :: (c, CT b) -> CT a
def f_bwd(cos_x, y_bar):
return (cos_x * y_bar,)
f.defvjp(f_fwd, f_bwd)
from jax import grad
print(f(3.))
print(grad(f)(3.))
from jax import custom_vjp
@custom_vjp
def f(x, y):
return jnp.sin(x) * y
def f_fwd(x, y):
return f(x, y), (jnp.cos(x), jnp.sin(x), y)
def f_bwd(res, g):
cos_x, sin_x, y = res
return (cos_x * g * y, -sin_x * g)
f.defvjp(f_fwd, f_bwd)
print(grad(f)(2., 3.))
@custom_vjp
def f(x):
print("called f!")
return jnp.sin(x)
def f_fwd(x):
print("called f_fwd!")
return f(x), jnp.cos(x)
def f_bwd(cos_x, y_bar):
print("called f_bwd!")
return (cos_x * y_bar,)
f.defvjp(f_fwd, f_bwd)
print(f(3.))
print(grad(f)(3.))
from jax import vjp
y, f_vjp = vjp(f, 3.)
print(y)
print(f_vjp(1.))
from jax import jvp
try:
jvp(f, (3.,), (1.,))
except TypeError as e:
print('ERROR! {}'.format(e))
import pdb
@custom_vjp
def debug(x):
return x # acts like identity
def debug_fwd(x):
return x, x
def debug_bwd(x, g):
import pdb; pdb.set_trace()
return g
debug.defvjp(debug_fwd, debug_bwd)
def foo(x):
y = x ** 2
y = debug(y) # insert pdb in corresponding backward pass step
return jnp.sin(y)
from collections import namedtuple
Point = namedtuple("Point", ["x", "y"])
@custom_jvp
def f(pt):
x, y = pt.x, pt.y
return {'a': x ** 2,
'b': (jnp.sin(x), jnp.cos(y))}
@f.defjvp
def f_jvp(primals, tangents):
pt, = primals
pt_dot, = tangents
ans = f(pt)
ans_dot = {'a': 2 * pt.x * pt_dot.x,
'b': (jnp.cos(pt.x) * pt_dot.x, -jnp.sin(pt.y) * pt_dot.y)}
return ans, ans_dot
def fun(pt):
dct = f(pt)
return dct['a'] + dct['b'][0]
pt = Point(1., 2.)
print(f(pt))
print(grad(fun)(pt))
@custom_vjp
def f(pt):
x, y = pt.x, pt.y
return {'a': x ** 2,
'b': (jnp.sin(x), jnp.cos(y))}
def f_fwd(pt):
return f(pt), pt
def f_bwd(pt, g):
a_bar, (b0_bar, b1_bar) = g['a'], g['b']
x_bar = 2 * pt.x * a_bar + jnp.cos(pt.x) * b0_bar
y_bar = -jnp.sin(pt.y) * b1_bar
return (Point(x_bar, y_bar),)
f.defvjp(f_fwd, f_bwd)
def fun(pt):
dct = f(pt)
return dct['a'] + dct['b'][0]
pt = Point(1., 2.)
print(f(pt))
print(grad(fun)(pt))
from functools import partial
@partial(custom_jvp, nondiff_argnums=(0,))
def app(f, x):
return f(x)
@app.defjvp
def app_jvp(f, primals, tangents):
x, = primals
x_dot, = tangents
return f(x), 2. * x_dot
print(app(lambda x: x ** 3, 3.))
print(grad(app, 1)(lambda x: x ** 3, 3.))
@partial(custom_jvp, nondiff_argnums=(0, 2))
def app2(f, x, g):
return f(g(x))
@app2.defjvp
def app2_jvp(f, g, primals, tangents):
x, = primals
x_dot, = tangents
return f(g(x)), 3. * x_dot
print(app2(lambda x: x ** 3, 3., lambda y: 5 * y))
print(grad(app2, 1)(lambda x: x ** 3, 3., lambda y: 5 * y))
@partial(custom_vjp, nondiff_argnums=(0,))
def app(f, x):
return f(x)
def app_fwd(f, x):
return f(x), x
def app_bwd(f, x, g):
return (5 * g,)
app.defvjp(app_fwd, app_bwd)
print(app(lambda x: x ** 2, 4.))
print(grad(app, 1)(lambda x: x ** 2, 4.))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Custom VJPs with jax.custom_vjp
Step2: Example problems
Step3: Since it's written in terms of jax.numpy, it's JAX-transformable
Step4: But there's a numerical stability problem lurking here
Step5: That doesn't seem right! After all, the derivative of $x \mapsto \log (1 + e^x)$ is $x \mapsto \frac{e^x}{1 + e^x}$, and so for large values of $x$ we'd expect the value to be about 1.
Step6: Stepping through how the jaxpr would be evaluated, we can see that the last line would involve multiplying values that floating point math will round to 0 and $\infty$, respectively, which is never a good idea. That is, we're effectively evaluating lambda x
Step7: Here's a defjvps convenience wrapper to express the same thing
Step8: Enforcing a differentiation convention
Step9: As a mathematical function on $\mathbb{R}$ (the full real line), $f$ is not differentiable at zero (because the limit defining the derivative doesn't exist from the left). Correspondingly, autodiff produces a nan value
Step10: But mathematically if we think of $f$ as a function on $\mathbb{R}_+$ then it is differentiable at 0 [Rudin's Principles of Mathematical Analysis Definition 5.1, or Tao's Analysis I 3rd ed. Definition 10.1.1 and Example 10.1.6]. Alternatively, we might say as a convention we want to consider the directional derivative from the right. So there is a sensible value for the Python function grad(f) to return at 0.0, namely 1.0. By default, JAX's machinery for differentiation assumes all functions are defined over $\mathbb{R}$ and thus doesn't produce 1.0 here.
Step11: Here's the convenience wrapper version
Step12: Gradient clipping
Step13: Python debugging
Step14: This is an iterative procedure for numerically solving the equation $x = f(a, x)$ for $x$, by iterating $x_{t+1} = f(a, x_t)$ until $x_{t+1}$ is sufficiently close to $x_t$. The result $x^$ depends on the parameters $a$, and so we can think of there being a function $a \mapsto x^(a)$ that is implicitly defined by equation $x = f(a, x)$.
Step15: We can vmap or jit the function as well
Step16: We can't apply reverse-mode automatic differentiation because of the while_loop, but it turns out we wouldn't want to anyway
Step17: We can check our answers by differentiating jnp.sqrt, which uses a totally different implementation
Step18: A limitation to this approach is that the argument f can't close over any values involved in differentiation. That is, you might notice that we kept the parameter a explicit in the argument list of fixed_point. For this use case, consider using the low-level primitive lax.custom_root, which allows for deriviatives in closed-over variables with custom root-finding functions.
Step19: In words, we start with a primal function f that takes inputs of type a and produces outputs of type b. We associate with it a JVP rule function f_jvp that takes a pair of inputs representing the primal inputs of type a and the corresponding tangent inputs of type T a, and produces a pair of outputs representing the primal outputs of type b and tangent outputs of type T b. The tangent outputs should be a linear function of the tangent inputs.
Step20: For automatic transposition to work, the JVP rule's output tangents must be linear as a function of the input tangents. Otherwise a transposition error is raised.
Step21: The defjvps convenience wrapper lets us define a JVP for each argument separately, and the results are computed separately then summed
Step22: Here's a defjvps example with multiple arguments
Step23: As a shorthand, with defjvps you can pass a None value to indicate that the JVP for a particular argument is zero
Step24: Calling a jax.custom_jvp function with keyword arguments, or writing a jax.custom_jvp function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library inspect.signature mechanism.
Step25: The custom JVP rule is invoked during differentiation, whether forward or reverse
Step26: Notice that f_jvp calls f to compute the primal outputs. In the context of higher-order differentiation, each application of a differentiation transform will use the custom JVP rule if and only if the rule calls the original f to compute the primal outputs. (This represents a kind of fundamental tradeoff, where we can't make use of intermediate values from the evaluation of f in our rule and also have the rule apply in all orders of higher-order differentiation.)
Step27: You can use Python control flow with jax.custom_jvp
Step28: Use jax.custom_vjp to define custom reverse-mode-only rules
Step29: In words, we again start with a primal function f that takes inputs of type a and produces outputs of type b. We associate with it two functions, f_fwd and f_bwd, which describe how to perform the forward- and backward-passes of reverse-mode autodiff, respectively.
Step30: Calling a jax.custom_vjp function with keyword arguments, or writing a jax.custom_vjp function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library inspect.signature mechanism.
Step31: Forward-mode autodiff cannot be used on the jax.custom_vjp function and will raise an error
Step32: If you want to use both forward- and reverse-mode, use jax.custom_jvp instead.
Step33: Here's a contrived example using nondiff_argnums with jax.custom_jvp
Step34: And an analogous contrived example with jax.custom_vjp
Step35: Handling non-differentiable arguments
Step36: Notice the gotcha here
Step37: jax.custom_vjp with nondiff_argnums
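As an aside that is not part of the notebook above: when writing rules like these, jax.test_util.check_grads can compare a custom rule against finite differences. A minimal sketch, reusing only names already imported above (the function g is a throwaway stand-in):
# Hedged sketch: numerically validate a custom JVP against finite differences.
from jax.test_util import check_grads
g = custom_jvp(jnp.sin)
g.defjvps(lambda t, ans, x: jnp.cos(x) * t)
check_grads(g, (3.,), order=1, modes=["fwd", "rev"])  # raises if the rule disagrees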
|
14,552 | <ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
from matplotlib import gridspec
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
b.set_value('period', component='binary', value=0.0897780065*u.d)
b.set_value('teff', component='primary', value=13247*u.K)
b.set_value('teff', component='secondary', value=3650*u.K)
b.set_value('requiv', component='primary', value=0.0160*u.solRad)
b.set_value('requiv', component='secondary', value=0.1669*u.solRad)
b.flip_constraint('mass@primary', solve_for='sma@binary')
b.set_value('mass', component='primary', value=0.4477*u.solMass)
b.flip_constraint('mass@secondary', solve_for='q')
b.set_value('mass', component='secondary', value=0.1501*u.solMass)
period = b.get_value('period', component='binary')
times=phoebe.linspace(-0.1*period, 0.6*period, 501)
b.add_dataset('lc', times=times, dataset='u', passband="LSST:u")
b.add_dataset('lc', times=times, dataset='g', passband="LSST:g")
b.add_dataset('lc', times=times, dataset='r', passband="LSST:r")
b.add_dataset('lc', times=times, dataset='i', passband="LSST:i")
b.set_value_all('atm', component='primary', value='blackbody')
b.set_value_all('ld_mode', component='primary', value='manual')
b.set_value_all('ld_func', component='primary', value='quadratic')
b.set_value('ld_coeffs', component='primary', dataset='u', value=[0.2665,0.2544])
b.set_value('ld_coeffs', component='primary', dataset='g', value=[0.1421,0.3693])
b.set_value('ld_coeffs', component='primary', dataset='r', value=[0.1225,0.3086])
b.set_value('ld_coeffs', component='primary', dataset='i', value=[0.1063,0.2584])
b.set_value_all('ld_mode_bol@primary','manual')
b.set_value_all('ld_func_bol@primary','quadratic')
b.set_value('ld_coeffs_bol', component='primary', value=[0.1421,0.3693])
b.set_value_all('atm', component='secondary', value='phoenix')
b.set_value('abun', component='secondary', value=-1.55)
b.set_value('incl', component='binary', value=90.0*u.deg)
b.set_value_all('ntriangles', value=10000)
b.set_value_all('intens_weighting', value='photon')
b.set_value('Rv', value=2.5)
b.set_value('Av', value=0.0)
b.run_compute(model='noext',overwrite=True)
b.set_value('Av',2.0)
b.run_compute(model='ext',overwrite=True)
uextmags=-2.5*np.log10(b['value@fluxes@u@ext@model'])
unoextmags=-2.5*np.log10(b['value@fluxes@u@noext@model'])
uextmags_norm=uextmags-uextmags.min()+1
unoextmags_norm=unoextmags-unoextmags.min()+1
uresid=uextmags_norm-unoextmags_norm
gextmags=-2.5*np.log10(b['value@fluxes@g@ext@model'])
gnoextmags=-2.5*np.log10(b['value@fluxes@g@noext@model'])
gextmags_norm=gextmags-gextmags.min()+1
gnoextmags_norm=gnoextmags-gnoextmags.min()+1
gresid=gextmags_norm-gnoextmags_norm
rextmags=-2.5*np.log10(b['value@fluxes@r@ext@model'])
rnoextmags=-2.5*np.log10(b['value@fluxes@r@noext@model'])
rextmags_norm=rextmags-rextmags.min()+1
rnoextmags_norm=rnoextmags-rnoextmags.min()+1
rresid=rextmags_norm-rnoextmags_norm
iextmags=-2.5*np.log10(b['value@fluxes@i@ext@model'])
inoextmags=-2.5*np.log10(b['value@fluxes@i@noext@model'])
iextmags_norm=iextmags-iextmags.min()+1
inoextmags_norm=inoextmags-inoextmags.min()+1
iresid=iextmags_norm-inoextmags_norm
fig=plt.figure(figsize=(12,12))
gs=gridspec.GridSpec(4,2,height_ratios=[4,1,4,1],width_ratios=[1,1])
ax=plt.subplot(gs[0,0])
ax.plot(b['value@times@u@noext@model']/b['period@orbit'].quantity,unoextmags_norm,color='k',linestyle="--")
ax.plot(b['value@times@u@ext@model']/b['period@orbit'].quantity,uextmags_norm,color='k',linestyle="-")
ax.set_ylabel('Magnitude')
ax.set_xticklabels([])
ax.set_ylim([6.2,0.95])
ax.set_title('(a) LSST u')
ax2=plt.subplot(gs[0,1])
ax2.plot(b['value@times@g@noext@model']/b['period@orbit'].quantity,gnoextmags_norm,color='k',linestyle="--")
ax2.plot(b['value@times@g@ext@model']/b['period@orbit'].quantity,gextmags_norm,color='k',linestyle="-")
ax2.set_ylabel('Magnitude')
ax2.set_xticklabels([])
ax2.set_ylim([3.2,0.95])
ax2.set_title('(b) LSST g')
ax_1=plt.subplot(gs[1,0])
ax_1.plot(b['value@times@u@noext@model']/b['period@orbit'].quantity,uresid,color='k',linestyle='-')
ax_1.set_ylabel(r'$\Delta m$')
ax_1.set_xlabel('Phase')
ax_1.set_ylim([0.05,-0.3])
ax_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax2_1=plt.subplot(gs[1,1])
ax2_1.plot(b['value@times@g@noext@model']/b['period@orbit'].quantity,gresid,color='k',linestyle='-')
ax2_1.set_ylabel(r'$\Delta m$')
ax2_1.set_xlabel('Phase')
ax2_1.set_ylim([0.05,-0.3])
ax2_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax3=plt.subplot(gs[2,0])
ax3.plot(b['value@times@r@noext@model']/b['period@orbit'].quantity,rnoextmags_norm,color='k',linestyle="--")
ax3.plot(b['value@times@r@ext@model']/b['period@orbit'].quantity,rextmags_norm,color='k',linestyle="-")
ax3.set_ylabel('Magnitude')
ax3.set_xticklabels([])
ax3.set_ylim([2.0,0.95])
ax3.set_title('(c) LSST r')
ax4=plt.subplot(gs[2,1])
ax4.plot(b['value@times@i@noext@model']/b['period@orbit'].quantity,inoextmags_norm,color='k',linestyle="--")
ax4.plot(b['value@times@i@ext@model']/b['period@orbit'].quantity,iextmags_norm,color='k',linestyle="-")
ax4.set_ylabel('Magnitude')
ax4.set_xticklabels([])
ax4.set_ylim([1.6,0.95])
ax4.set_title('(d) LSST i')
ax3_1=plt.subplot(gs[3,0])
ax3_1.plot(b['value@times@r@noext@model']/b['period@orbit'].quantity,rresid,color='k',linestyle='-')
ax3_1.set_ylabel(r'$\Delta m$')
ax3_1.set_xlabel('Phase')
ax3_1.set_ylim([0.01,-0.03])
ax3_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax4_1=plt.subplot(gs[3,1])
ax4_1.plot(b['value@times@i@noext@model']/b['period@orbit'].quantity,iresid,color='k',linestyle='-')
ax4_1.set_ylabel(r'$\Delta m$')
ax4_1.set_xlabel('Phase')
ax4_1.set_ylim([0.01,-0.03])
ax4_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax_1.axhspan(-0.0075,0.0075,color='lightgray')
ax2_1.axhspan(-0.005,0.005,color='lightgray')
ax3_1.axhspan(-0.005,0.005,color='lightgray')
ax4_1.axhspan(-0.005,0.005,color='lightgray')
plt.tight_layout()
fig.canvas.draw()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adopt system parameters from Rebassa-Mansergas+ 2019.
Step3: Now we'll create datasets for LSST u,g,r, and i bands.
Step4: And set options for the atmospheres and limb-darkening.
Step5: We'll set the inclination to 90 degrees and set some compute options.
Step6: For comparison, we'll first compute a model with zero extinction.
Step7: And then a second model with extinction.
Step8: Finally we'll convert the output fluxes to magnitudes and format the figure.
|
14,553 | <ASSISTANT_TASK:>
Python Code:
import tensorflow.compat.v1 as tf
import numpy as np
import shutil
print(tf.__version__)
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def get_train():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
def get_test():
return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = feature_cols, model_dir = OUTDIR)
model.train(input_fn = get_train(), steps = 100);
def print_rmse(model, name, input_fn):
metrics = model.evaluate(input_fn = input_fn, steps = 1)
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', get_valid())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> 1. Refactor the input </h2>
Step2: <h2> 2. Refactor the way features are created. </h2>
Step3: <h2> Create and train the model </h2>
Step4: <h3> Evaluate model </h3>
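A natural extra step, not shown above, is to report the error on the held-out test split as well; this reuses the helpers already defined and assumes taxi-test.csv sits next to the other CSV files.
print_rmse(model, 'test', get_test())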
|
14,554 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import samplics
from samplics.sampling import SampleSize
# target coverage rates
expected_coverage = {
"Dakar": 0.849,
"Ziguinchor": 0.809,
"Diourbel": 0.682,
"Saint-Louis": 0.806,
"Tambacounda": 0.470,
"Kaolack": 0.797,
"Thies": 0.834,
"Louga": 0.678,
"Fatick": 0.766,
"Kolda": 0.637,
"Matam": 0.687,
"Kaffrine": 0.766,
"Kedougou": 0.336,
"Sedhiou": 0.742,
}
# Declare the sample size calculation parameters
sen_vaccine_wald = SampleSize(
parameter="proportion", method="wald", stratification=True
)
# calculate the sample size
sen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07)
# show the calculated sample size
print("\nCalculated sample sizes by stratum:")
sen_vaccine_wald.samp_size
sen_vaccine_wald_size = sen_vaccine_wald.to_dataframe()
sen_vaccine_wald_size
sen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07, deff=1.401 ** 2)
sen_vaccine_wald.to_dataframe()
# Target coverage rates
expected_deff = {
"Dakar": 1.100 ** 2,
"Ziguinchor": 1.100 ** 2,
"Diourbel": 1.346 ** 2,
"Saint-Louis": 1.484 ** 2,
"Tambacounda": 1.366 ** 2,
"Kaolack": 1.360 ** 2,
"Thies": 1.109 ** 2,
"Louga": 1.902 ** 2,
"Fatick": 1.100 ** 2,
"Kolda": 1.217 ** 2,
"Matam": 1.403 ** 2,
"Kaffrine": 1.256 ** 2,
"Kedougou": 2.280 ** 2,
"Sedhiou": 1.335 ** 2,
}
# Calculate sample sizes using deff at the stratum level
sen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07, deff=expected_deff)
# Convert sample sizes to a dataframe
sen_vaccine_wald.to_dataframe()
# Calculate sample sizes with a resp_rate of 94.2%
sen_vaccine_wald.calculate(
target=expected_coverage, half_ci=0.07, deff=expected_deff, resp_rate=0.942
)
# Convert sample sizes to a dataframe
sen_vaccine_wald.to_dataframe(
col_names=["region", "vaccine_coverage", "precision", "number_12_23_months"]
)
sen_vaccine_fleiss = SampleSize(
parameter="proportion", method="fleiss", stratification=True
)
sen_vaccine_fleiss.calculate(
target=expected_coverage, half_ci=0.07, deff=expected_deff, resp_rate=0.942
)
sen_vaccine_sample = sen_vaccine_fleiss.to_dataframe(
col_names=["region", "vaccine_coverage", "precision", "number_12_23_months"]
)
sen_vaccine_sample
sen_vaccine_sample["number_households"] = round(
sen_vaccine_sample["number_12_23_months"] / 0.052, 0
)
sen_vaccine_sample
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first step is to create an object using the SampleSize class with the parameter of interest, the sample size calculation method, and the stratification status. In this example, we want to calculate sample sizes for proportions, using the Wald method for a stratified design. This is achieved with the following snippet of code.
Step2: SampleSize calculates the sample sizes and stores them in the samp_size attribute, which is a Python dictionary. If a dataframe is better suited for the use case, the method to_dataframe() can be used to create a pandas dataframe.
Step3: The sample size calculation above assumes that the design effect (DEFF) is equal to 1. A design effect of 1 corresponds to a sampling design with a variance equivalent to a simple random sample of the same size. In the context of complex sampling designs, DEFF is often different from 1; stage sampling and unequal weights usually push the design effect above 1. The 2017 Senegal DHS indicated a design effect equal to 1.963 (1.401^2) for basic vaccination. Hence, to calculate the sample size, we will use the design effect provided by the DHS.
Step4: Since the sample design is stratified, the sample size calculation will be more precise if DEFF is specified at the stratum level, which is available from the 2017 Senegal DHS report. Some regions have a design effect below 1. To be conservative, we will use 1.21 as the minimum design effect in the sample size calculation.
Step5: The sample size calculation above does not account for attrition of the sample due to non-response. In the 2017 Senegal DHS, the overall household and women response rate was about 94.2%.
Step6: Fleiss method
Step7: At this point, we have the number of children aged 12-23 months needed to achieve the desired precision, given the expected proportions, using the Wald or Fleiss calculation methods.
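For intuition only (this is the textbook Wald formula, not a call into samplics internals, so small differences from the library output are possible): a single stratum's sample size is roughly n = deff * z^2 * p * (1 - p) / e^2, inflated by the response rate.
import math
z, p, e, deff, resp = 1.96, 0.849, 0.07, 1.100**2, 0.942
print(math.ceil(deff * z**2 * p * (1 - p) / e**2 / resp))  # rough check against the Dakar row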
|
14,555 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
s=
APRIL--this is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
stop_words='the is'
s=s.splitlines()
y=[]
for i in s:
c=i.split()
y.append(c)
y
z=[]
for j in range(len(y)):
z=z+y[j]
b=' '.join(z)
u=list(filter(punctuation_split, b))
v=''.join(u)
if isinstance(stop_words, str)== True:
stop_words=stop_words.split()
for i in range(len(stop_words)):
v=v.replace(' '+stop_words[i],'')
v=v.replace(' ','')
else:
for i in range(len(stop_words)):
v=v.replace(stop_words[i],'')
v=v.replace(' ','')
v=v.lower()
u
def punctuation_split(x):
if x == "'" or x == '`' or x == '~' or x == '!' or x == '@' or x == '#' or x == '$' or x == '%' or x == '^' or x == '&' or x == '*' or x == '(' or x == ')' or x == '-' or x == '_' or x == '=' or x == '+' or x == '[' or x == ']' or x == '{' or x == '}' or x == '|' or x == '\\' or x == '"' or x == ':' or x == ';' or x == '<' or x == '>' or x == ',' or x == '.' or x == '?' or x == '/':
return False
return True
u=list(filter(punctuation_split, b))
''.join(u)
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\\'):
Split a string into a list of words, removing punctuation and stop words.
s=s.replace('-',' ')
s=s.replace('--',' ')
s=s.splitlines() #Collaborated with Kevin Phung
y=[]
for i in s:
c=i.split()
y.append(c)
z=[]
for j in range(len(y)):
z=z+y[j]
b=' '.join(z)
u=list(filter(punctuation_split, b))
v=''.join(u)
if stop_words==None:
v=v.replace(' ','')
elif isinstance(stop_words, str)== True:
stop_words=stop_words.split()
for i in range(len(stop_words)):
v=v.replace(' '+stop_words[i]+' ',' ')
else:
for i in range(len(stop_words)):
v=v.replace(' '+stop_words[i],'')
v=v.replace(' ','')
v=v.lower()
return(v.split())
wasteland =
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
tokenize(wasteland, stop_words='is the of and')
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland =
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
tokenize(wasteland, stop_words='is the of and')
tokenize('this and the this from and a a a')
def count_words(data):
Return a word count dictionary from the list of words in data.
word_dictionary={}
for i in data:
if i not in word_dictionary:
word_dictionary[i]=1
else:
word_dictionary[i]=word_dictionary[i]+1
return word_dictionary
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
sorted
def sort_word_counts(wc):
Return a list of 2-tuples of (word, count), sorted by count descending.
x=sorted(wc, key=wc.get, reverse=True)
y=sorted(wc.values(), reverse=True)
return list(zip(x,y))
sort_word_counts(count_words(tokenize('this and a the this this and a a a')))
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
nnn=open('mobydick_chapter1.txt')
mobypenis=nnn.read()
swc=sort_word_counts(count_words(tokenize(mobypenis, 'the of and a to in is it that as')))
swc
assert swc[0]==('i',43)
assert len(swc)==848
ff=np.array(swc)
dd=ff[range(50),0]
dd
cc=ff[range(50),1]
cc
plt.figure(figsize=(10,10))
plt.scatter(cc, range(50))
plt.yticks(range(50), dd)
plt.title('Most Common Words in Moby Dick First Chapter')
plt.xlabel('Number of times word appears')
plt.tight_layout()
ff
assert True # use this for grading the dotplot
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Word counting
Step7: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
Step9: Write a function sort_word_counts that returns a list of sorted word counts
Step10: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt
Step11: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
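An easy cross-check for count_words and sort_word_counts (not required by the exercise) is collections.Counter, whose most_common output should broadly agree with swc for the same token list.
from collections import Counter
Counter(tokenize('this and a the this this and a a a')).most_common()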
|
14,556 | <ASSISTANT_TASK:>
Python Code:
import AngularCatalog_class as ac
import ImageMask_class as imclass
from astropy.io import fits
from astropy.io import ascii
import numpy as np
import numpy.random as rand
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
mask_from_ranges = imclass.ImageMask.from_ranges([0, 1], [0, 0.5])
#Generate the randoms
ra, dec, completeness = mask_from_ranges.generate_random_sample(1e4)
#Plot
fig=plt.figure()
ax=fig.add_subplot(111)
ax.set_xlabel("RA (deg)")
ax.set_ylabel("Dec (deg)")
ax.scatter(ra, dec)
#Make the mask array
mask_array = np.identity(4)
print mask_array
#Make the ImageMask
mask_from_array = imclass.ImageMask.from_array(mask_array, [0,1], [0,1])
%%capture
## ^ Use to suppress lengthy output
#Generate randoms
ra, dec, completeness = mask_from_array.generate_random_sample(1e4)
#Plot the randoms
fig=plt.figure()
ax=fig.add_subplot(111)
ax.set_xlabel("RA (deg)")
ax.set_ylabel("Dec (deg)")
ax.scatter(ra, dec)
#Make the new array mask
mask_array2 = np.identity(4)
mask_array2[0,0] = 0.2
mask_array2[0, 3] = 0.2
print mask_array2
#Make the new mask
mask_from_array2 = imclass.ImageMask.from_array(mask_array2, [0,1], [0,1])
%%capture
## ^ Use to suppress lengthy output
#Generate randoms
ra2, dec2, completeness = mask_from_array2.generate_random_sample(1e4)
#Plot the randoms
fig=plt.figure()
ax=fig.add_subplot(111)
ax.set_xlabel("RA (deg)")
ax.set_ylabel("Dec (deg)")
ax.scatter(ra2, dec2)
#Make the mask
weight_file = 'hlsp_candels_hst_wfc3_gs-tot-sect33_f160w_v1.0_wht.fits'
mask_from_fits = imclass.ImageMask.from_FITS_weight_file(weight_file)
%%capture
## ^ Use to suppress lengthy output
#Generate randoms
ra, dec, completeness = mask_from_fits.generate_random_sample(1e5)
#Plot the randoms
fig=plt.figure()
fig.set_size_inches(7,7)
ax=fig.add_subplot(111)
ax.set_xlabel("RA (deg)")
ax.set_ylabel("Dec (deg)")
ax.scatter(ra, dec)
#Make the RAs and Decs
ras = rand.normal(loc=0.5, scale=0.2, size=int(1e3))
decs = rand.normal(loc=0, scale=0.2, size=int(1e3))
plt.scatter(ras, decs)
#Make the mask that we'll be using
immask = imclass.ImageMask.from_ranges([0.1, .9], [-0.4, 0.4])
#Make the catalog
cat = ac.AngularCatalog(ras, decs, image_mask=immask)
#Generate some randoms to show the mask area
cat.generate_random_sample(number_to_make=2e4)
#Plot both the randoms and all the data (not just what's within the mask)
cat.scatterplot_points(sample="both", masked_data=False)
cat.scatterplot_points(sample="both", masked_data=True)
#Create an AngularCatalog with an ImageMask from a weight file
weight_file = 'hlsp_candels_hst_wfc3_gs-tot-sect33_f160w_v1.0_wht.fits'
data_file = "example_data.dat"
data = ascii.read(data_file)
#Only use the first 1000 points (it's random points, so it doesn't matter which 1000) to make
#an AngularCatalog (so we can see the randoms too on the plot)
cat_wt = ac.AngularCatalog(data['ra'][0:1000], data['dec'][0:1000], weight_file = weight_file)
cat_wt.generate_random_sample(number_to_make=1e4)
cat_wt.scatterplot_points(sample="both", masked_data=True)
#Make the AngularCatalog with an existing image mask
immask = imclass.ImageMask.from_ranges([0.1, .9], [-0.4, 0.4])
rand_cat_1 = ac.AngularCatalog.random_catalog(1e3, image_mask = immask)
rand_cat_1.scatterplot_points(sample="data")
#Make the AngularCatalog over a rectangular area
rand_cat_1 = ac.AngularCatalog.random_catalog(1e3, ra_range=[0, 0.5], dec_range=[0, 0.5])
rand_cat_1.scatterplot_points(sample="data")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ways to create an ImageMask
Step2: To see what the mask looks like, we generate some random points and plot them.
Step3: Simple enough. Note that if you manually change the completenesses in the ImageMask._mask, it will behave like from_array, which is to say "not the way you expect" (this is on the list of things to be fixed). See the next section.
Step4: The main thing to note here is that the binning isn't even. The mask also has a different orientation from the orientation of the array. The origin is in the lower left, at the minimum RA and Dec. To see this, we'll use a slightly different array to mask.
Step5: This clearly shows that the origin is in the lower left and also illustrates how variable completeness would be implemented in this version of Py2PAC. Again, this should be fixed so the bins are square (or at least rectangular) in future versions.
Step6: Ways to create an AngularCatalog
Step7: Now we need to make this into an AngularCatalog with some image mask. The options are to pass an already existing ImageMask instance or to give the constructor the location of a weight file from which to construct the mask.
Step8: The first plot shows all the data and the second shows just the data within the mask area (just to confirm that the mask is working).
Step9: AngularCatalogs with randomly generated points
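A minimal end-to-end sketch, reusing only calls already demonstrated above (ImageMask.from_ranges and generate_random_sample): draw randoms over a small rectangular footprint and plot them.
sketch_mask = imclass.ImageMask.from_ranges([0, 0.25], [0, 0.25])
sk_ra, sk_dec, sk_comp = sketch_mask.generate_random_sample(1e3)
plt.scatter(sk_ra, sk_dec, s=1)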
|
14,557 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from synthetic import mackey_glass
import matplotlib.pyplot as plt
import theano
import theano.tensor as T
import numpy
floatX = theano.config.floatX
class SimpleRNN(object):
def __init__(self, input_dim, recurrent_dim):
w_xh = numpy.random.normal(0, .01, (input_dim, recurrent_dim))
w_hh = numpy.random.normal(0, .02, (recurrent_dim, recurrent_dim))
self.w_xh = theano.shared(numpy.asarray(w_xh, dtype=floatX), name='w_xh')
self.w_hh = theano.shared(numpy.asarray(w_hh, dtype=floatX), name='w_hh')
self.b_h = theano.shared(numpy.zeros((recurrent_dim,), dtype=floatX), name='b_h')
self.parameters = [self.w_xh, self.w_hh, self.b_h]
def _step(self, input_t, previous):
return T.tanh(T.dot(previous, self.w_hh) + input_t)
def __call__(self, x):
x_w_xh = T.dot(x, self.w_xh) + self.b_h
result, updates = theano.scan(self._step,
sequences=[x_w_xh],
outputs_info=[T.zeros_like(self.b_h)])
return result
data = numpy.asarray(mackey_glass(2000)[0], dtype=floatX)
plt.plot(data)
plt.show()
data_train = data[:1500]
data_val = data[1500:]
w_ho_np = numpy.random.normal(0, .01, (15, 1))
w_ho = theano.shared(numpy.asarray(w_ho_np, dtype=floatX), name='w_ho')
b_o = theano.shared(numpy.zeros((1,), dtype=floatX), name='b_o')
x = T.matrix('x')
my_rnn = SimpleRNN(1, 15)
hidden = my_rnn(x)
prediction = T.dot(hidden, w_ho) + b_o
parameters = my_rnn.parameters + [w_ho, b_o]
l2 = sum((p**2).sum() for p in parameters)
mse = T.mean((prediction[:-1] - x[1:])**2)
cost = mse + .0001 * l2
gradient = T.grad(cost, wrt=parameters)
lr = .3
updates = [(par, par - lr * gra) for par, gra in zip(parameters, gradient)]
update_model = theano.function([x], cost, updates=updates)
get_cost = theano.function([x], mse)
predict = theano.function([x], prediction)
get_hidden = theano.function([x], hidden)
get_gradient = theano.function([x], gradient)
for i in range(1001):
mse_train = update_model(data_train)
if i % 100 == 0:
mse_val = get_cost(data_val)
print 'Epoch {}: train mse: {} validation mse: {}'.format(i, mse_train, mse_val)
predict = theano.function([x], prediction)
prediction_np = predict(data)
plt.plot(data[1:], label='data')
plt.plot(prediction_np, label='prediction')
plt.legend()
plt.show()
def vector_to_params(v):
return_list = []
offset = 0
# note the global variable here
for par in parameters:
par_size = numpy.product(par.get_value().shape)
return_list.append(v[offset:offset+par_size].reshape(par.get_value().shape))
offset += par_size
return return_list
def set_params(values):
for parameter, value in zip(parameters, values):
parameter.set_value(numpy.asarray(value, dtype=floatX))
def f_obj(x):
values = vector_to_params(x)
set_params(values)
return get_cost(data_train)
def f_prime(x):
values = vector_to_params(x)
set_params(values)
grad = get_gradient(data_train)
return numpy.asarray(numpy.concatenate([var.flatten() for var in grad]), dtype='float64')
from scipy.optimize import fmin_bfgs
x0 = numpy.asarray(numpy.concatenate([p.get_value().flatten() for p in parameters]), dtype='float64')
result = fmin_bfgs(f_obj, x0, f_prime)
print 'train mse: {} validation mse: {}'.format(get_cost(data_train), get_cost(data_val))
x_t = T.vector()
h_p = T.vector()
preactivation = T.dot(x_t, my_rnn.w_xh) + my_rnn.b_h
h_t = my_rnn._step(preactivation, h_p)
o_t = T.dot(h_t, w_ho) + b_o
single_step = theano.function([x_t, h_p], [o_t, h_t])
def generate(single_step, x_t, h_p, n_steps):
output = numpy.zeros((n_steps, 1))
for output_t in output:
x_t, h_p = single_step(x_t, h_p)
output_t[:] = x_t
return output
output = predict(data_train)
hidden = get_hidden(data_train)
output = generate(single_step, output[-1], hidden[-1], n_steps=200)
plt.plot(output)
plt.plot(data_val[:200])
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We now define a class that uses scan to initialize an RNN and apply it to a sequence of data vectors. The constructor initializes the shared variables after which the instance can be called on a symbolic variable to construct an RNN graph. Note that this class only handles the computation of the hidden layer activations. We'll define a set of output weights later.
Step2: For visualization purposes and to keep the optimization time managable, we will train the RNN on a short synthetic chaotic time series. Let's first have a look at the data
Step3: To train an RNN model on this sequences, we need to generate a theano graph that computes the cost and its gradient. In this case, the task will be to predict the next time step and the error objective will be the mean squared error (MSE). We also need to define shared variables for the output weights. Finally, we also add a regularization term to the cost.
Step4: We now compile the function that will update the parameters of the model using gradient descent.
Step5: We can now train the network by supplying this function with our data and calling it repeatedly.
Step6: Since we're only looking at a very small toy problem here, the model probably already memorized the train data quite well. Let's find out by plotting the predictions of the network
Step7: Small scale optimizations of this type often benefit from more advanced second order methods. The following block defines some functions that allow you to experiment with off-the-shelf optimization routines. In this case we used BFGS.
Step8: Generating sequences
|
14,558 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import badfish as bf
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('data/airquality.csv', index_col=0)
mf = bf.MissFrame(df)
dir(mf)
df.isnull().sum()
mf.counts()
mf.counts(where = ['Ozone'],how = 'any',columns=['Solar.R','Wind','Temp'])
mf.counts(where=['Ozone','Temp'], how='any', columns=['Solar.R','Wind','Temp'])
mf.counts(where = ['Ozone','Temp'],how = 'all',columns=['Solar.R','Wind','Temp'])
mf.plot(kind='pattern', norm = False, threshold=0.0)
mf.pattern(columns = ['Ozone', 'Temp', 'Solar.R'], norm = False, threshold=0.0)
mf.corr(columns = ['Ozone', 'Temp','Wind'])
mf.corr()['Ozone']
mf.frequency_item_set?
itemsets, rules = mf.frequency_item_set(columns = ['Ozone','Temp','Wind'], support=0.01, confidence=0.0)
itemsets
rules
mf.cohort(group = ['Ozone'])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to convert the Pandas dataframe to Badfish's missframe.
Step2: A MissFrame converts your data to a boolean matrix where a missing cell indicates a True value while a filled cell is given a False value.
Step3: Lets quickly use Pandas isnull().sum() function to check how many missing values are present in the different columns.
Step4: All MissFrame methods contain the same structure of arguments.
Step5: Now let's make our query a tad more complicated.
Step6: Okay, so we've got 8 missing cells of Temp, 2 of Wind and Solar each when Ozone goes missing.
Step7: The how = 'any' or how = 'all' controls how the columns are used.
Step8: The pattern plot below gives a nice understanding of the amount of data missing with different combinations of samples. Blue tiles indicate the presence of data whereas red tiles indicate missing data.
Step9: A tabular function to show which columns seem to go missing together reports these correlations of missing data-
Step10: Or perhaps let's look at only the correlations of missing data of other columns with Ozone
Step11: One of the well known datamining techniques is Association Rule Algorithm. Priori to the association rule generation, frequent itemsets are generated based on the item-item relations from the large data set according to a certain support.
|
14,559 | <ASSISTANT_TASK:>
Python Code:
def _correlate(series: pd.Series, correlation_value: int, seed: int = 0):
Generates a correlated random variables from a given series.
# https://stats.stackexchange.com/questions/38856/how-to-generate-correlated-random-numbers-given-means-variances-and-degree-of
np.random.seed(seed)
value_error_term = 1 - correlation_value**2
error_terms = np.random.normal(0, value_error_term**0.5, len(series))
return series * correlation_value + error_terms
np.random.seed(18)
data = pd.DataFrame(np.random.normal(0, 1, (10000, 6)))
data[0] = (data[0] >= 0.0).astype(int)
data['constant'] = 1
data['var1'] = data[0]
data['var2'] = data[1]
data['var3'] = data[2]
data['collinear_var2a'] = data['var2']
data['collinear_var2b'] = _correlate(data['var2'], correlation_value=0.99)
data['random1'] = data[3]
data['random2'] = data[4]
data['random3'] = data[5]
data['target'] = (
data['var1'] * 0.1 +
data['var2'] * 5.0 +
data['var3'] * -0.5 +
(np.random.rand(len(data))-0.5) # Adding Noise
)
_ = data.plot.scatter('var2', 'collinear_var2b')
_ = data.plot.scatter('var2', 'collinear_var2a')
inference_data = data_preparation.InferenceData(
initial_data=data[[
'constant',
'collinear_var2a', 'collinear_var2b',
'var1', 'var2', 'var3',
'random1', 'random2', 'random3',
'target'
]],
target_column='target')
inference_data.data
naive_model = models.InferenceRidge(alpha=100)
naive_model.fit(inference_data, raise_on_data_error=False)
naive_model.get_results()[['effect']]
inference_data.address_low_variance(threshold=0, drop=True)
inference_data.address_collinearity_with_vif(vif_method='sequential',
vif_threshold=10,
drop=True)
less_naive_model = models.InferenceRidge(alpha=100)
less_naive_model.fit(inference_data, raise_on_data_error=False)
less_naive_model.get_results()[['effect']]
less_naive_model.fit_bootstrap(50, n_jobs=1, verbose=False)
less_naive_model.get_results()
less_naive_model.permutation_test(50, n_jobs=1, verbose=False)
less_naive_model.get_results()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulate some data
Step2: These are the collinear variables introduced and their relationship with var2.
Step3: Modelling
Step4: Fitting a model with no data preparation.
Step5: Recall that our equation for y is
Step6: Addressing Collinearity with Variance Inflation Factor (VIF)
Step7: Collinearity is always tricky to address, the options usually are
Step8: Recall that our equation for y is
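For reference, the collinearity that address_collinearity_with_vif targets can also be inspected directly with statsmodels (an optional extra dependency, shown only as a hedged cross-check); very large VIFs are expected for var2 and its near-copies.
from statsmodels.stats.outliers_influence import variance_inflation_factor
X = data[['constant', 'var1', 'var2', 'var3', 'collinear_var2a', 'collinear_var2b']].values
print([variance_inflation_factor(X, i) for i in range(X.shape[1])])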
|
14,560 | <ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
print(b.get_parameter(qualifier='ecc'))
print(b.get_parameter(qualifier='ecosw', context='component'))
print(b.get_parameter(qualifier='esinw', context='component'))
print(b.get_parameter(qualifier='ecosw', context='constraint'))
print(b.get_parameter(qualifier='esinw', context='constraint'))
b.add_dataset('mesh', times=np.linspace(0,1,11), columns=['volume'])
b.set_value('ecc', 0.2)
b.run_compute()
print(b['volume@primary@model'])
afig, mplfig = b['mesh01'].plot(x='times', y='volume', ylim=(4.18, 4.20), show=True)
b.remove_dataset('mesh01')
b.add_dataset('rv', times=np.linspace(0,1,51))
b.run_compute()
afig, mplfig = b['rv@model'].plot(show=True)
b.remove_dataset('rv01')
b.add_dataset('lc', times=np.linspace(0,1,51))
b.run_compute()
afig, mplfig = b['lc@model'].plot(show=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Relevant Parameters
Step3: Relevant Constraints
Step4: Influence on Meshes (volume conservation)
Step5: Influence on Radial Velocities
Step6: Influence on Light Curves (fluxes)
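As a small sanity check that is not a PHOEBE call: ecosw and esinw are just the eccentricity projected through the argument of periastron, so with ecc=0.2 they can be reproduced with numpy, assuming per0 is exposed under that qualifier in degrees.
per0 = b.get_value('per0', component='binary', context='component')
print(0.2 * np.cos(np.radians(per0)), 0.2 * np.sin(np.radians(per0)))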
|
14,561 | <ASSISTANT_TASK:>
Python Code:
debug_flag = False
import datetime
import glob
import logging
import lxml
import os
import six
import xml
import xmltodict
import zipfile
# paper identifier
paper_identifier = "BostonGlobe"
archive_identifier = "BG_20171002210239_00001"
# source
source_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data"
source_paper_path = "{}/{}".format( source_paper_folder, paper_identifier )
# uncompressed
uncompressed_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/uncompressed"
uncompressed_paper_path = "{}/{}".format( uncompressed_paper_folder, paper_identifier )
# make sure an identifier is set before you make a path here.
if ( ( archive_identifier is not None ) and ( archive_identifier != "" ) ):
# identifier is set.
source_archive_file = "{}.zip".format( archive_identifier )
source_archive_path = "{}/{}".format( source_paper_path, source_archive_file )
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, archive_identifier )
#-- END check to see if archive_identifier present. --#
%pwd
# current working folder
current_working_folder = "/home/jonathanmorgan/work/django/research/work/phd_work/data/article_loading/proquest_hnp/{}".format( paper_identifier )
current_datetime = datetime.datetime.now()
current_date_string = current_datetime.strftime( "%Y-%m-%d-%H-%M-%S" )
logging_file_name = "{}/research-data_load-{}-{}.log.txt".format( current_working_folder, paper_identifier, current_date_string )
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
filename = logging_file_name,
filemode = 'w' # set to 'a' if you want to append, rather than overwrite each time.
)
# init django
django_init_folder = "/home/jonathanmorgan/work/django/research/work/phd_work"
django_init_path = "django_init.py"
if( ( django_init_folder is not None ) and ( django_init_folder != "" ) ):
# add folder to front of path.
django_init_path = "{}/{}".format( django_init_folder, django_init_path )
#-- END check to see if django_init folder. --#
%run $django_init_path
# context_text imports
from context_text.article_coding.article_coding import ArticleCoder
from context_text.article_coding.article_coding import ArticleCoding
from context_text.article_coding.open_calais_v2.open_calais_v2_article_coder import OpenCalaisV2ArticleCoder
from context_text.collectors.newsbank.newspapers.GRPB import GRPB
from context_text.collectors.newsbank.newspapers.DTNB import DTNB
from context_text.models import Article
from context_text.models import Article_Subject
from context_text.models import Newspaper
from context_text.shared.context_text_base import ContextTextBase
# context_text_proquest_hnp
from context_text_proquest_hnp.models import Proquest_HNP_Object_Type
from context_text_proquest_hnp.proquest_hnp_newspaper_helper import ProquestHNPNewspaperHelper
# python_utilities
from python_utilities.logging.logging_helper import LoggingHelper
# init
my_logging_helper = LoggingHelper()
my_logging_helper.set_logger_name( "proquest_hnp-article-loading-{}".format( paper_identifier ) )
log_message = None
my_paper = ProquestHNPNewspaperHelper()
paper_instance = my_paper.initialize_from_database( paper_identifier )
my_paper.source_all_papers_folder = source_paper_folder
my_paper.destination_all_papers_folder = uncompressed_paper_folder
print( my_paper )
print( paper_instance )
my_paper = ProquestHNPNewspaperHelper()
my_paper.paper_identifier = paper_identifier
my_paper.source_all_papers_folder = source_paper_folder
my_paper.source_paper_path = source_paper_path
my_paper.destination_all_papers_folder = uncompressed_paper_folder
my_paper.destination_paper_path = uncompressed_paper_path
my_paper.paper_start_year = 1872
my_paper.paper_end_year = 1985
my_newspaper = Newspaper.objects.get( id = 6 )
my_paper.newspaper = my_newspaper
phnp_newspaper_instance = my_paper.create_PHNP_newspaper()
print( phnp_newspaper_instance )
# create folder to hold the results of decompressing paper's zip files.
did_uncomp_paper_folder_exist = my_paper.make_dest_paper_folder()
# decompress the files
my_paper.uncompress_paper_zip_files()
%cd $uncompressed_paper_path
%ls
# loop over files in the current archive folder path.
object_type_to_count_map = my_paper.process_archive_object_types( uncompressed_archive_path )
xml_folder_list = glob.glob( "{}/*".format( uncompressed_paper_path ) )
print( "folder_list: {}".format( xml_folder_list ) )
# build map of all object types for a paper to the overall counts of each
paper_object_type_to_count_map = my_paper.process_paper_object_types()
# put the raw output from above in a list
raw_object_type_list = [ 'A|d|v|e|r|t|i|s|e|m|e|n|t: 2114224', 'Feature|Article: 5271887', 'I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h: 249942', 'O|b|i|t|u|a|r|y: 625143', 'G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n: 1083164', 'S|t|o|c|k| |Q|u|o|t|e: 202776', 'N|e|w|s: 140274', 'I|l|l|u|s|t|r|a|t|i|o|n: 106925', 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y: 386421', 'E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c: 78993', 'Editorial|Commentary: 156342', 'C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t: 68356', 'Classified Advertisement|Advertisement: 291533', 'R|e|v|i|e|w: 86889', 'Table of Contents|Front Matter: 69798', 'Letter to the Editor|Correspondence: 202071', 'News|Legal Notice: 24053', 'News|Marriage Announcement: 41314', 'B|i|r|t|h| |N|o|t|i|c|e: 926', 'News|Military/War News: 3', 'U|n|d|e|f|i|n|e|d: 5', 'Article|Feature: 137526', 'Front Matter|Table of Contents: 11195', 'Commentary|Editorial: 3386', 'Marriage Announcement|News: 683', 'Correspondence|Letter to the Editor: 7479', 'Legal Notice|News: 1029', 'Advertisement|Classified Advertisement: 12163' ]
# output variable
master_object_type_list = None
# declare variables
#raw_object_type_list = None
raw_object_type = None
object_type_part_list = None
object_type_to_count_map = None
object_type_value = None
object_type_count_string = None
object_type_count = None
# loop
master_object_type_list = []
object_type_to_count_map = {}
for raw_object_type in raw_object_type_list:
# split on colon
object_type_part_list = raw_object_type.split( ":" )
# object type value - take the first thing, strip off spaces, and add it to list.
object_type_value = object_type_part_list[ 0 ]
object_type_value = object_type_value.strip()
# object type value count - item 2 (index 1)
object_type_count_string = object_type_part_list[ 1 ]
object_type_count_string = object_type_count_string.strip()
object_type_count = int( object_type_count_string )
# add value to list.
if ( object_type_value not in master_object_type_list ):
# add it.
master_object_type_list.append( object_type_value )
else:
# error.
print( "ERROR - object type value {} in list more than once. Hmmm.".format( object_type_value ) )
#-- END check to see if value already in list. --#
# add count to map.
if ( object_type_value not in object_type_to_count_map ):
# add count.
object_type_to_count_map[ object_type_value ] = object_type_count
else:
# error.
print( "ERROR - object type value {} already has count in map. Hmmm.".format( object_type_value ) )
#-- END check to see if value already in list. --#
#-- END loop over raw object types --#
# sort the list of object types
master_object_type_list.sort()
print( master_object_type_list )
news_object_type_list = []
news_object_type_list.append( 'Article|Feature' )
news_object_type_list.append( 'Feature|Article' )
news_object_type_list.append( 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' )
# get list of all object types
master_object_type_list = my_paper.get_all_object_types()
print( "Object Types: {}".format( master_object_type_list ) )
# directory to work in.
uncompressed_archive_folder = "BG_20171002210239_00001"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
#types_to_output = master_object_type_list
# NO - types_to_output = [ "Advertisement|Classified Advertisement" ]
# NO - types_to_output = [ "A|d|v|e|r|t|i|s|e|m|e|n|t" ]
# NO - types_to_output = [ 'Advertisement|Classified Advertisement' ]
# YES - types_to_output = [ 'Article|Feature' ]
# 0 - types_to_output = [ 'B|i|r|t|h| |N|o|t|i|c|e' ]
# 0 - types_to_output = [ 'Classified Advertisement|Advertisement' ]
# NO - types_to_output = [ 'Commentary|Editorial' ]
# NO - types_to_output = [ 'Correspondence|Letter to the Editor' ]
# NO - types_to_output = [ 'C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t' ]
# NO - types_to_output = [ 'E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c' ]
# 0 - types_to_output = [ 'Editorial|Commentary' ]
# 0 - types_to_output = [ 'Feature|Article' ]
# NO - types_to_output = [ 'Front Matter|Table of Contents' ]
# YES - types_to_output = [ 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' ]
# NO - furniture, listings - types_to_output = [ 'G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n' ]
# NO - types_to_output = [ 'I|l|l|u|s|t|r|a|t|i|o|n' ]
# NO - types_to_output = [ 'I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h' ]
# 0 - types_to_output = [ 'Legal Notice|News' ]
# 0 - types_to_output = [ 'Letter to the Editor|Correspondence' ]
# NO - types_to_output = [ 'Marriage Announcement|News' ]
# NO - furniture, not actual articles - types_to_output = [ 'N|e|w|s' ]
# NO - types_to_output = [ 'News|Legal Notice' ]
# 0 - types_to_output = [ 'News|Marriage Announcement' ]
# 0 - types_to_output = [ 'News|Military/War News' ]
# NO - types_to_output = [ 'O|b|i|t|u|a|r|y' ]
# NO - types_to_output = [ 'R|e|v|i|e|w' ]
# NO - types_to_output = [ 'S|t|o|c|k| |Q|u|o|t|e' ]
# NO - types_to_output = [ 'Table of Contents|Front Matter' ]
# NO - types_to_output = [ 'Table Of Contents|Front Matter' ]
# NO - types_to_output = [ 'U|n|d|e|f|i|n|e|d' ]
types_to_output = news_object_type_list
# declare variables
xml_file_path_list = None
xml_file_path_count = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_count = len( xml_file_path_list )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
# directory to work in.
uncompressed_archive_folder = "BG_20171002210239_00001"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
types_to_output = news_object_type_list
# declare variables
xml_file_path_list = None
xml_file_path_count = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_count = len( xml_file_path_list )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
# directory to work in.
uncompressed_archive_folder = "BG_20151210230044_00004"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
types_to_output = news_object_type_list
# declare variables
xml_file_path_list = None
xml_file_path_count = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_count = len( xml_file_path_list )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup - Imports
Step2: Setup - working folder paths
Step3: Setup - logging
Step4: Setup - virtualenv jupyter kernel
Step5: Setup - Initialize LoggingHelper
Step6: Setup - initialize ProquestHNPNewspaper
Step7: set up manually
Step8: If desired, add to database.
Step9: Find articles to be loaded
Step10: For each *.zip file in the paper's source folder
Step11: Work with uncompressed files
Step12: parse and load XML files
Step13: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001
Step14: XML file count
Step15: ['Advertisement|Classified Advertisement', 'Article|Feature', 'A|d|v|e|r|t|i|s|e|m|e|n|t', 'B|i|r|t|h| |N|o|t|i|c|e', 'Classified Advertisement|Advertisement', 'Commentary|Editorial', 'Correspondence|Letter to the Editor', 'C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t', 'Editorial|Commentary', 'E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c', 'Feature|Article', 'Front Matter|Table of Contents', 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y', 'G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n', 'I|l|l|u|s|t|r|a|t|i|o|n', 'I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h', 'Legal Notice|News', 'Letter to the Editor|Correspondence', 'Marriage Announcement|News', 'News|Legal Notice', 'News|Marriage Announcement', 'News|Military/War News', 'N|e|w|s', 'O|b|i|t|u|a|r|y', 'R|e|v|i|e|w', 'S|t|o|c|k| |Q|u|o|t|e', 'Table of Contents|Front Matter', 'U|n|d|e|f|i|n|e|d']
Step16: explore all known object types
Step17: files in archive BG_20171002210239_00001 - 1985
Step18: files in archive BG_20171002210239_00001 - 1976
|
14,562 | <ASSISTANT_TASK:>
Python Code:
import sys
print("python command used for this notebook:")
print(sys.executable)
import tensorflow as tf
print("tensorflow:", tf.__version__)
from tensorflow.keras.applications.resnet50 import preprocess_input, ResNet50
model = ResNet50(weights='imagenet')
from skimage.io import imread
from skimage.transform import resize
import cv2
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following checks that scikit-image is properly installed
Step2: Optional
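As an optional, hypothetical addition (not in the original cell), the installation check could be made explicit by printing the version strings of the imported libraries:
import skimage
import cv2
print("scikit-image:", skimage.__version__)
print("OpenCV:", cv2.__version__)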
|
14,563 | <ASSISTANT_TASK:>
Python Code:
print ("Hello" + ", World")
print(10 + 4)
import numpy as np # import the numpy module
import matplotlib.pyplot as plt # import the pyplot module
%matplotlib inline
# sample 100 points from a bivariate normal distribution with mean x = -2, y = -2
mean = [-2,-2]
cov = [[1,0],[0,1]]
x1,y1 = np.random.multivariate_normal(mean, cov, 100).T
# inspect the first 10 x and y values of the sampled data
x1[:10], y1[:10]
# check the sampled data in a scatter plot
plt.scatter(x1, y1, color="r", label="d1")
# likewise, sample 100 points from a Gaussian with mean x = 2, y = 2
mean2 = [2,2]
cov2 = [[1,0],[0,1]]
x2,y2 = np.random.multivariate_normal(mean2, cov2, 100).T
plt.scatter(x2,y2, c="b", marker="x", label="d2")
# show both datasets in a single scatter plot
plt.scatter(x1,y1, c="r", label="d1")
plt.scatter(x2,y2, c="b", marker="x", label="d2")
plt.grid(True) # add grid lines
plt.legend() # add a legend
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If it runs correctly, the following appears on the screen
|
14,564 | <ASSISTANT_TASK:>
Python Code:
N = 10000
MOD = 1000000007
F = [0] * N

def precompute():
    F[1] = 2
    F[2] = 3
    F[3] = 4
    for i in range(4, N):
        F[i] = (F[i - 1] + F[i - 2]) % MOD

n = 8
precompute()
print(F[n])
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,565 | <ASSISTANT_TASK:>
Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-retrieval')
from dkrz_forms import form_handler, form_widgets
#please provide your last name - replacing ... below
MY_LAST_NAME = "ki"
form_info = form_widgets.check_and_retrieve(MY_LAST_NAME)
# To be completed
# to be completed
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please provide your last name
Step2: Get status information related to your form based request
Step3: Contact the DKRZ data managers for form related issues
|
14,566 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import torch
import pandas as pd
x = load_data()
px = pd.DataFrame(x.numpy())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,567 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q dm-sonnet
#@title Imports (tf, tfp with adjoint trick, etc)
import numpy as np
import tqdm as tqdm
import sklearn.datasets as skd
# visualization
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import kde
# tf and friends
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
import sonnet as snt
tf.enable_v2_behavior()
tfb = tfp.bijectors
tfd = tfp.distributions
def make_grid(xmin, xmax, ymin, ymax, gridlines, pts):
xpts = np.linspace(xmin, xmax, pts)
ypts = np.linspace(ymin, ymax, pts)
xgrid = np.linspace(xmin, xmax, gridlines)
ygrid = np.linspace(ymin, ymax, gridlines)
xlines = np.stack([a.ravel() for a in np.meshgrid(xpts, ygrid)])
ylines = np.stack([a.ravel() for a in np.meshgrid(xgrid, ypts)])
return np.concatenate([xlines, ylines], 1).T
grid = make_grid(-3, 3, -3, 3, 4, 100)
#@title Helper functions for visualization
def plot_density(data, axis):
x, y = np.squeeze(np.split(data, 2, axis=1))
levels = np.linspace(0.0, 0.75, 10)
kwargs = {'levels': levels}
return sns.kdeplot(x, y, cmap="viridis", shade=True,
shade_lowest=True, ax=axis, **kwargs)
def plot_points(data, axis, s=10, color='b', label=''):
x, y = np.squeeze(np.split(data, 2, axis=1))
axis.scatter(x, y, c=color, s=s, label=label)
def plot_panel(
grid, samples, transformed_grid, transformed_samples,
dataset, axarray, limits=True):
if len(axarray) != 4:
raise ValueError('Expected 4 axes for the panel')
ax1, ax2, ax3, ax4 = axarray
plot_points(data=grid, axis=ax1, s=20, color='black', label='grid')
plot_points(samples, ax1, s=30, color='blue', label='samples')
plot_points(transformed_grid, ax2, s=20, color='black', label='ode(grid)')
plot_points(transformed_samples, ax2, s=30, color='blue', label='ode(samples)')
ax3 = plot_density(transformed_samples, ax3)
ax4 = plot_density(dataset, ax4)
if limits:
set_limits([ax1], -3.0, 3.0, -3.0, 3.0)
set_limits([ax2], -2.0, 3.0, -2.0, 3.0)
set_limits([ax3, ax4], -1.5, 2.5, -0.75, 1.25)
def set_limits(axes, min_x, max_x, min_y, max_y):
if isinstance(axes, list):
for axis in axes:
set_limits(axis, min_x, max_x, min_y, max_y)
else:
axes.set_xlim(min_x, max_x)
axes.set_ylim(min_y, max_y)
#@title Dataset
DATASET_SIZE = 1024 * 8 #@param
BATCH_SIZE = 256 #@param
SAMPLE_SIZE = DATASET_SIZE
moons = skd.make_moons(n_samples=DATASET_SIZE, noise=.06)[0]
moons_ds = tf.data.Dataset.from_tensor_slices(moons.astype(np.float32))
moons_ds = moons_ds.prefetch(tf.data.experimental.AUTOTUNE)
moons_ds = moons_ds.cache()
moons_ds = moons_ds.shuffle(DATASET_SIZE)
moons_ds = moons_ds.batch(BATCH_SIZE)
plt.figure(figsize=[8, 8])
plt.scatter(moons[:, 0], moons[:, 1])
plt.show()
base_loc = np.array([0.0, 0.0]).astype(np.float32)
base_sigma = np.array([0.8, 0.8]).astype(np.float32)
base_distribution = tfd.MultivariateNormalDiag(base_loc, base_sigma)
class MLP_ODE(snt.Module):
Multi-layer NN ode_fn.
def __init__(self, num_hidden, num_layers, num_output, name='mlp_ode'):
super(MLP_ODE, self).__init__(name=name)
self._num_hidden = num_hidden
self._num_output = num_output
self._num_layers = num_layers
self._modules = []
for _ in range(self._num_layers - 1):
self._modules.append(snt.Linear(self._num_hidden))
self._modules.append(tf.math.tanh)
self._modules.append(snt.Linear(self._num_output))
self._model = snt.Sequential(self._modules)
def __call__(self, t, inputs):
inputs = tf.concat([tf.broadcast_to(t, inputs.shape), inputs], -1)
return self._model(inputs)
#@title Model and training parameters
LR = 1e-2 #@param
NUM_EPOCHS = 80 #@param
STACKED_FFJORDS = 4 #@param
NUM_HIDDEN = 8 #@param
NUM_LAYERS = 3 #@param
NUM_OUTPUT = 2
#@title Building bijector
solver = tfp.math.ode.DormandPrince(atol=1e-5)
ode_solve_fn = solver.solve
trace_augmentation_fn = tfb.ffjord.trace_jacobian_exact
bijectors = []
for _ in range(STACKED_FFJORDS):
mlp_model = MLP_ODE(NUM_HIDDEN, NUM_LAYERS, NUM_OUTPUT)
next_ffjord = tfb.FFJORD(
state_time_derivative_fn=mlp_model,ode_solve_fn=ode_solve_fn,
trace_augmentation_fn=trace_augmentation_fn)
bijectors.append(next_ffjord)
stacked_ffjord = tfb.Chain(bijectors[::-1])
transformed_distribution = tfd.TransformedDistribution(
distribution=base_distribution, bijector=stacked_ffjord)
#@title Training
@tf.function
def train_step(optimizer, target_sample):
with tf.GradientTape() as tape:
loss = -tf.reduce_mean(transformed_distribution.log_prob(target_sample))
variables = tape.watched_variables()
gradients = tape.gradient(loss, variables)
optimizer.apply(gradients, variables)
return loss
#@title Samples
@tf.function
def get_samples():
base_distribution_samples = base_distribution.sample(SAMPLE_SIZE)
transformed_samples = transformed_distribution.sample(SAMPLE_SIZE)
return base_distribution_samples, transformed_samples
@tf.function
def get_transformed_grid():
transformed_grid = stacked_ffjord.forward(grid)
return transformed_grid
evaluation_samples = []
base_samples, transformed_samples = get_samples()
transformed_grid = get_transformed_grid()
evaluation_samples.append((base_samples, transformed_samples, transformed_grid))
panel_id = 0
panel_data = evaluation_samples[panel_id]
fig, axarray = plt.subplots(
1, 4, figsize=(16, 6))
plot_panel(
grid, panel_data[0], panel_data[2], panel_data[1], moons, axarray, False)
plt.tight_layout()
learning_rate = tf.Variable(LR, trainable=False)
optimizer = snt.optimizers.Adam(learning_rate)
for epoch in tqdm.trange(NUM_EPOCHS // 2):
base_samples, transformed_samples = get_samples()
transformed_grid = get_transformed_grid()
evaluation_samples.append(
(base_samples, transformed_samples, transformed_grid))
for batch in moons_ds:
_ = train_step(optimizer, batch)
panel_id = -1
panel_data = evaluation_samples[panel_id]
fig, axarray = plt.subplots(
1, 4, figsize=(16, 6))
plot_panel(grid, panel_data[0], panel_data[2], panel_data[1], moons, axarray)
plt.tight_layout()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: FFJORD
Step2: FFJORD bijector
Step3: Next, we instantiate a base distribution
Step5: We use a multi-layer perceptron to model state_derivative_fn.
Step6: Now we construct a stack of FFJORD bijectors. Each bijector is provided with ode_solve_fn and trace_augmentation_fn and its own state_derivative_fn model, so that they represent a sequence of different transformations.
Step7: Now we can use TransformedDistribution which is the result of warping base_distribution with stacked_ffjord bijector.
Step8: Now we define our training procedure. We simply minimize negative log-likelihood of the data.
Step9: Plot samples from base and transformed distributions.
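As a usage note (not part of the original notebook), the trained flow behaves like any other distribution object; a minimal hedged sketch, reusing the names defined above:
# density of the two-moons model at a few observed points, and fresh samples from the flow
log_p = transformed_distribution.log_prob(moons[:5].astype('float32'))
new_points = transformed_distribution.sample(8)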
|
14,568 | <ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
def split_data(chars, batch_size, num_steps, split_frac=0.9):
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Step4: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window by num_steps characters. In this way we can feed batches to the network and the cell states will carry over from batch to batch (see the windowing sketch after this list).
Step5: Hyperparameters
Step6: Write out the graph for TensorBoard
Step7: Training
Step8: Sampling
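A minimal sketch of the sliding-window batching described in Step4, using a tiny toy array instead of the Anna Karenina data (the shapes here are made up for illustration only):
import numpy as np

batch_size, num_steps = 2, 3
x = np.arange(24).reshape(batch_size, 12)   # stand-in for the reshaped character array
for b in range(x.shape[1] // num_steps):
    window = x[:, b * num_steps:(b + 1) * num_steps]
    print(window.shape)  # (2, 3) -- one batch_size x num_steps window per step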
|
14,569 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import ICA
from mne.preprocessing import create_eog_epochs, create_ecg_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# 1Hz high pass is often helpful for fitting ICA
raw.filter(1., 40., n_jobs=2, fir_design='firwin')
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
n_components = 25 # if float, select n_components by explained variance of PCA
method = 'fastica' # for comparison with EEGLAB try "extended-infomax" here
decim = 3 # we need sufficient statistics, not all time points -> saves time
# we will also set state of the random number generator - ICA is a
# non-deterministic algorithm, but we want to have the same decomposition
# and the same order of components each time this tutorial is run
random_state = 23
ica = ICA(n_components=n_components, method=method, random_state=random_state)
print(ica)
reject = dict(mag=5e-12, grad=4000e-13)
ica.fit(raw, picks=picks_meg, decim=decim, reject=reject)
print(ica)
ica.plot_components() # can you spot some potential bad guys?
# first, component 0:
ica.plot_properties(raw, picks=0)
ica.plot_properties(raw, picks=0, psd_args={'fmax': 35.})
ica.plot_properties(raw, picks=[1, 2], psd_args={'fmax': 35.})
# uncomment the code below to test the inteactive mode of plot_components:
# ica.plot_components(picks=range(10), inst=raw)
eog_average = create_eog_epochs(raw, reject=dict(mag=5e-12, grad=4000e-13),
picks=picks_meg).average()
eog_epochs = create_eog_epochs(raw, reject=reject) # get single EOG trials
eog_inds, scores = ica.find_bads_eog(eog_epochs) # find via correlation
ica.plot_scores(scores, exclude=eog_inds) # look at r scores of components
# we can see that only one component is highly correlated and that this
# component got detected by our correlation analysis (red).
ica.plot_sources(eog_average, exclude=eog_inds) # look at source time course
ica.plot_properties(eog_epochs, picks=eog_inds, psd_args={'fmax': 35.},
image_args={'sigma': 1.})
print(ica.labels_)
ica.plot_overlay(eog_average, exclude=eog_inds, show=False)
# red -> before, black -> after. Yes! We remove quite a lot!
# to definitely register this component as a bad one to be removed
# there is the ``ica.exclude`` attribute, a simple Python list
ica.exclude.extend(eog_inds)
# from now on the ICA will reject this component even if no exclude
# parameter is passed, and this information will be stored to disk
# on saving
# uncomment this for reading and writing
# ica.save('my-ica.fif')
# ica = read_ica('my-ica.fif')
raw_copy = raw.copy().crop(0, 10)
ica.apply(raw_copy)
raw_copy.plot() # check the result
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_properties(ecg_epochs, picks=ecg_inds, psd_args={'fmax': 35.})
from mne.preprocessing.ica import corrmap # noqa
# We'll start by simulating a group of subjects or runs from a subject
start, stop = [0, raw.times[-1]]
intervals = np.linspace(start, stop, 4, dtype=float)  # np.float is deprecated; the builtin float behaves the same here
icas_from_other_data = list()
raw.pick_types(meg=True, eeg=False) # take only MEG channels
for ii, start in enumerate(intervals):
if ii + 1 < len(intervals):
stop = intervals[ii + 1]
print('fitting ICA from {0} to {1} seconds'.format(start, stop))
this_ica = ICA(n_components=n_components, method=method).fit(
raw, start=start, stop=stop, reject=reject)
icas_from_other_data.append(this_ica)
print(icas_from_other_data)
reference_ica = ica
reference_ica.plot_components()
reference_ica.plot_sources(eog_average, exclude=eog_inds)
icas = [reference_ica] + icas_from_other_data
template = (0, eog_inds[0])
fig_template, fig_detected = corrmap(icas, template=template, label="blinks",
show=True, threshold=.8, ch_type='mag')
eog_component = reference_ica.get_components()[:, eog_inds[0]]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Before applying artifact correction please learn about your actual artifacts
Step2: Define the ICA object instance
Step3: we avoid fitting ICA on crazy environmental artifacts that would
Step4: Plot ICA components
Step5: Component properties
Step6: we can see that the data were filtered so the spectrum plot is not
Step7: we can also take a look at multiple different components at once
Step8: Instead of opening individual figures with component properties, we can
Step9: Advanced artifact detection
Step10: We can take a look at the properties of that component, now using the
Step11: That component is showing a prototypical average vertical EOG time course.
Step12: These labels were used by the plotters and are added automatically
Step13: Note that nothing is yet removed from the raw data. To remove the effects of
Step14: Exercise
Step15: What if we don't have an EOG channel?
Step16: The idea behind corrmap is that artefact patterns are similar across subjects
Step17: Remember, don't do this at home! Start by reading in a collection of ICA
Step18: We use our original ICA as reference.
Step19: Investigate our reference ICA
Step20: Which one is the bad EOG component?
Step21: Indeed it looks like an EOG, also in the average time course.
Step22: Now we can run the CORRMAP algorithm.
Step23: Nice, we have found similar ICs from the other (simulated) runs!
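A short, hypothetical follow-up (not part of the original tutorial): once the bad components are listed in ica.exclude, the same fitted ICA can clean other data objects from the recording; epochs below is assumed to be any Epochs object built from the same raw data.
epochs_clean = ica.apply(epochs.copy())   # ica.apply works the same way on Raw, Epochs or Evoked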
|
14,570 | <ASSISTANT_TASK:>
Python Code:
lessons = {
"1": "Python is part of a bigger ecosystem (example: Jupyter Notebooks).",
"2": "Batteries Included refers to the well-stocked standard library.",
"3": "Built-ins inside __builtins__ include the basic types such as...",
"4": "__ribs__ == special names == magic methods (but not all are methods).",
"5": "3rd Party Python is where a lot of the action is!",
"6": "'Python fits your brain' means it gets out of your way once you learn it."
}
important_types = [{'Numeric': ["int", "float", "Decimal", "Fraction", "complex"],
'Collections': [{"Sequences": ["list", "range", "tuple"],
"Mappings": ['dict', 'set']}],
'Descriptors': ['property']},
{'Other types': ['function', 'class', 'generator']}]
for key, value in lessons.items(): # dict method to return all key:value pairs
print("{}.: {}".format(key, value), file=None) # this could be HTML to a file
if key == "3":
print()
for the_type in important_types[0]['Numeric']:
print(the_type)
for the_type in important_types[0]['Collections'][0]['Sequences']:
print(the_type)
for the_type in important_types[0]['Collections'][0]['Mappings']:
print(the_type)
print()
import random
class BatteryDead(Exception):
pass
class IgnitionKeyBroken(Exception):
pass
class Car:
def start(self):
as_luck_would_have_it = random.randint(0,10)
if as_luck_would_have_it == 10:
raise BatteryDead
elif as_luck_would_have_it == 0:
raise IgnitionKeyBroken
print("Car starts!")
try:
# might not work
my_car = Car()
my_car.start()
except BatteryDead:
print("Oops, need to charge battery")
except IgnitionKeyBroken:
print("Oops, your key just snapped")
from functools import wraps
def decorator(f):
@wraps(f)
def proxy(x):
# proxy
print("Look at me!")
return f(x)
return proxy
@decorator
def Sqr(x):
Square Dancer
return x * x
Sqr(10)
help(Sqr)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Continue to "doodle and daydream" as you find the time. Think of ways to describe your day as a Python program. Remember the story of The Car that Would Not Start.
Step3: We also learned about decorator syntax. Using a decorator, we're able to use a callable as an input to an object that provides a proxy output, likewise callable by the same name.
Step4: @wraps forwards the __doc__ (docstring) and __name__ of the incoming f argument to the proxy being wrapped (a small demonstration follows this list).
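A tiny illustration (mine, not from the lesson) of what functools.wraps preserves on the proxy compared with a plain decorator:
from functools import wraps

def plain(f):
    def proxy(*args, **kwargs):
        return f(*args, **kwargs)
    return proxy

def wrapped(f):
    @wraps(f)
    def proxy(*args, **kwargs):
        return f(*args, **kwargs)
    return proxy

@plain
def a():
    "docstring A"

@wrapped
def b():
    "docstring B"

print(a.__name__, a.__doc__)   # proxy None  -- metadata lost
print(b.__name__, b.__doc__)   # b docstring B -- metadata forwarded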
|
14,571 | <ASSISTANT_TASK:>
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
raw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)])
# code lines below are commented out because the sample data doesn't have
# earlobe or mastoid channels, so this is just for demonstration purposes:
# use a single channel reference (left earlobe)
# raw.set_eeg_reference(ref_channels=['A1'])
# use average of mastoid channels as reference
# raw.set_eeg_reference(ref_channels=['M1', 'M2'])
raw.plot()
# add new reference channel (all zero)
raw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])
raw_new_ref.plot()
# set reference to `EEG 050`
raw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])
raw_new_ref.plot()
# use the average of all channels as reference
raw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')
raw_avg_ref.plot()
raw.set_eeg_reference('average', projection=True)
print(raw.info['projs'])
for title, proj in zip(['Original', 'Average'], [False, True]):
fig = raw.plot(proj=proj, n_channels=len(raw))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
raw.del_proj() # remove our average reference projector first
sphere = mne.make_sphere_model('auto', 'auto', raw.info)
src = mne.setup_volume_source_space(sphere=sphere, exclude=30., pos=15.)
forward = mne.make_forward_solution(raw.info, trans=None, src=src, bem=sphere)
raw_rest = raw.copy().set_eeg_reference('REST', forward=forward)
for title, _raw in zip(['Original', 'REST (∞)'], [raw, raw_rest]):
fig = _raw.plot(n_channels=len(raw), scalings=dict(eeg=5e-5))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Background
Step2: If a scalp electrode was used as reference but was not saved alongside the
Step3: By default,
Step4: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ
Step5: Notice that the new reference (EEG 050) is now flat, while the original
Step6: Creating the average reference as a projector
Step7: Creating the average reference as a projector has a few advantages
Step8: Using an infinite reference (REST)
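A hypothetical one-line follow-up to Step7 (not in the original tutorial): the average-reference projector added above is only applied on demand, for example explicitly via apply_proj(), or automatically by many plotting and analysis calls.
raw_applied = raw.copy().apply_proj()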
|
14,572 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
table['prior'] = 1/2, 1/2
table
table['likelihood'] = 3/4, 1/2
table
table['unnorm'] = table['prior'] * table['likelihood']
table
prob_data = table['unnorm'].sum()
prob_data
table['posterior'] = table['unnorm'] / prob_data
table
table2 = pd.DataFrame(index=[6, 8, 12])
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
def update(table):
Compute the posterior probabilities.
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
prob_data = update(table2)
table2
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
update(table3)
table3
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now I'll add a column to represent the priors
Step2: And a column for the likelihoods
Step3: Here we see a difference from the previous method
Step4: I call the result unnorm because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood
Step5: Notice that we get 5/8, which is what we got by computing $P(D)$ directly.
Step6: The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.
Step7: I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
Step9: Once you have priors and likelihoods, the remaining steps are always the same, so I'll put them in a function
Step10: And call it like this.
Step11: Here is the final Bayes table
Step12: The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9.
Step13: The data is that Monty opened Door 3 and revealed a goat. So let's
Step14: Now that we have priors and likelihoods, we can use update to compute the posterior probabilities.
Step15: After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;
Step16: Exercise
Step17: Exercise
Step18: Exercise
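A quick arithmetic check of Steps 4-6 using exact fractions (my own verification, independent of the DataFrame code above):
from fractions import Fraction

priors = [Fraction(1, 2), Fraction(1, 2)]
likelihoods = [Fraction(3, 4), Fraction(1, 2)]
unnorm = [p * l for p, l in zip(priors, likelihoods)]
prob_data = sum(unnorm)                 # 5/8, the total probability of the data
posteriors = [u / prob_data for u in unnorm]
print(prob_data, posteriors[0])         # 5/8 and 3/5 (= 0.6) for Bowl 1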
|
14,573 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import nsfg
preg = nsfg.ReadFemPreg()
import thinkstats2 as ts
live = preg[preg.outcome == 1]
wgt_cdf = ts.Cdf(live.totalwgt_lb, label = 'weight')
import thinkplot as tp
tp.Cdf(wgt_cdf, label = 'weight')
tp.Show()
import random
random.random?
import random
thousand = [random.random() for x in range(1000)]
thousand_pmf = ts.Pmf(thousand, label = 'rando')
tp.Pmf(thousand_pmf, linewidth=0.1)
tp.Show()
t_hist = ts.Hist(thousand)
tp.Hist(t_hist, label = "rando")
tp.Show()
thousand_cdf = ts.Cdf(thousand, label='rando')
tp.Cdf(thousand_cdf)
tp.Show()
import scipy.stats
scipy.stats?
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
Step2: Display the CDF.
Step3: Find out how much you weighed at birth, if you can, and compute CDF(x).
Step4: Assuming that the PMF doesn't work very well, try plotting the CDF instead.
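A sketch of the Step3 lookup using the Cdf built above, assuming thinkstats2's Cdf exposes Prob and Value as in the book; the 8.5 lb birth weight is a made-up example value:
my_weight_lb = 8.5                      # hypothetical birth weight
print(wgt_cdf.Prob(my_weight_lb))       # fraction of live births at or below this weight
print(wgt_cdf.Value(0.5))               # median weight, for comparison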
|
14,574 | <ASSISTANT_TASK:>
Python Code:
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_shape = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None,image_shape), name="inputs")
targets_ = tf.placeholder(tf.float32, (None,image_shape), name="targets")
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_shape)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='decoded')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step2: Training
Step3: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step4: Checking out the results
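A small optional check of the learned compression (784 pixels -> 32 floats per image), a sketch assumed to run inside the same session before it is closed:
test_imgs = mnist.test.images[:5]
codes = sess.run(encoded, feed_dict={inputs_: test_imgs})
print(codes.shape)          # (5, 32): each 784-pixel image is summarized by 32 numbers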
|
14,575 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import os
import sys
import mimetypes
import email
import glob
mht_files = glob.glob(os.path.join(os.path.curdir, '*.mht'))
for filepath in mht_files:
# get the name of the file, e.g. ./31521derp.mht -> 31521derp
filename_base = os.path.split(filepath)[-1].split('.mht')[0]
# open mht file
with open(filepath, 'r') as f:
msg = email.message_from_file(f)
# loop over the parts in the file
for i, part in enumerate(msg.walk(), start=1):
print('chunk %g is type: '%i + part.get_content_type())
if part.get_content_maintype() == 'multipart':
print('content type is multipart, skipping chunk %g'%i)
continue
ext = mimetypes.guess_extension(part.get_content_type())
filename = filename_base + '_part-%03d%s'%(i, ext)
filename = os.path.join(os.path.curdir, filename)
print(filename)
with open(filename, 'wb') as fp:
fp.write(part.get_payload(decode=True))
html_files = glob.glob(os.path.join(os.path.curdir, '*part*.htm*'))
html_files
for filepath in html_files:
filename_base = os.path.split(filepath)[-1].split('_')[0]
# read in html, result is a list of pandas dataframes
input_html = pd.read_html(filepath, thousands='')
# the data of interest appears every three dataframes, starting from index
# two, the end is at -6 to clip the unnecessary data at the end.
# processed_html = input_html[2:-6:3]
# this seems to work better, because it checks if a decimal separator (,)
# exists in the string
processed_html = [x for x in input_html if ',' in str(x[0][0])]
# remove the index from the dataframes
processed_html_values = [x.iloc[0] for x in processed_html]
# concat the dataframes
df_processed_data = pd.concat(processed_html_values, axis=1)
# DECREPATED: index is only needed if you need the first tabel.
# add the index: the values of the first column of any (here the first) df
# in processed_html
#df_processed_data.index = processed_html[0][0].values
# write to file:
#filepath_output = os.path.join(os.path.curdir, filename_base + '.csv')
#df_processed_data.to_csv(filepath_output, encoding='utf-8')
# write transposed to file:
filepath_output = os.path.join(os.path.curdir, filename_base + '_transposed.csv')
df_processed_data.T.to_csv(filepath_output, encoding='utf-8')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ref
Step2: the next cell parses the mht-files, splits them by content type (html, jpg, etc.) and writes the output of the chunks to the hard disk
Step3: get the name of the stripped files with only html content
Step4: loop over files, clip the unnecessary data and store the csv files
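A minimal sketch of the content-type inspection described in Step2, for a single hypothetical file path:
import email

with open('./example.mht', 'r') as f:          # hypothetical file name
    msg = email.message_from_file(f)
print([part.get_content_type() for part in msg.walk()])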
|
14,576 | <ASSISTANT_TASK:>
Python Code:
import xray
ds = xray.open_dataset('https://motherlode.ucar.edu/repository/opendap/41f2b38a-4e70-4135-8ff8-dbf3d1dcbfc1/entry.das',
decode_times=False)
print(ds)
print(ds['th'])
th = ds['th'].values[0][0]
print(th)
print(ds['grid_type_code'])
print(ds['grid_type_code'].values[0])
grid_type = ds['grid_type'].values
print('The grid type is ', grid_type[0])
nx, ny = ds['Nx'].values[0], ds['Ny'].values[0]
print(nx, ny)
la1, lo1 = ds['La1'].values[0], ds['Lo1'].values[0]
print(la1, lo1)
latin1, latin2 = ds['Latin1'].values[0], ds['Latin2'].values[0]
print(latin1, latin2)
lov = ds['LoV'].values[0]
print(lov)
print(ds['Dx'])
print(ds['Dy'])
dx,dy = ds['Dx'].values[0],ds['Dy'].values[0]
print(dx,dy)
%matplotlib inline
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import matplotlib as mpl
proj = ccrs.LambertConformal(central_longitude=lov,standard_parallels=(latin1,latin2))
pc = ccrs.PlateCarree()
left,bottom = proj.transform_point(lo1,la1,pc)
print(left,bottom)
right,top = left + nx*dx,bottom + ny*dy
print(right,top)
#Define the figure
fig = plt.figure(figsize=(12, 12))
# Define the extents and add the data
ax = plt.axes(projection=proj)
extents = (left, right, bottom, top)
ax.contourf(th, origin='lower', extent=extents, transform=proj)
# Add bells and whistles
ax.coastlines(resolution='50m', color='black', linewidth=2)
ax.add_feature(ccrs.cartopy.feature.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lines', scale='50m',facecolor='none'))
ax.add_feature(ccrs.cartopy.feature.BORDERS, linewidth='1', edgecolor='black')
ax.gridlines()
plt.show()
th.shape
th[0,0]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dimensions, Coordinates, Data Variables
Step2: potential temperature (th)
Step3: To Visualize the Data, We have to Decrypt the Projection
Step4: Google to the Rescue
Step5: What is grid_type_code of 5?
Step6: Uh oh! Polar Stereographic or Lambert Conformal??
Step7: La1 and Lo1
Step8: Latin1 and Latin2
Step9: The Central Meridian for the Lambert Conformal Projection, LoV
Step10: Dx and Dy
Step11: Units for Dx and Dy
Step12: Let's Review What We Have
Step13: Define the Lambert Conformal Projection with Cartopy
Step14: Lambert Conformal Grid Extents
Step15: Convert Corner from Lat/Lon PlatteCarre to LC
Step16: Derive Opposite Corner
Step17: Plot It Up!
Step18: Exercises for the Reader
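The extent computation in Steps 14-16 boils down to a few lines; here is the same idea with made-up numbers so the arithmetic is visible (these are not the values read from the dataset):
nx, ny = 3, 2
dx = dy = 1000.0                   # grid spacing in projection units (metres)
left, bottom = -1500.0, -1000.0    # lower-left corner in projection coordinates
right, top = left + nx * dx, bottom + ny * dy
print(right, top)                  # 1500.0, 1000.0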
|
14,577 | <ASSISTANT_TASK:>
Python Code:
def this_and_prev(iterable):
iterator = iter(iterable)
prev_item = None
curr_item = next(iterator)
for next_item in iterator:
yield (prev_item, curr_item)
prev_item = curr_item
curr_item = next_item
yield (prev_item, curr_item)
for i,j in this_and_prev( range(5) ):
print(i,j)
def this_and_next(iterable): # slightly changed compared to the previous function: if there is no next value, we return None
iterator = iter(iterable)
curr_item = next(iterator)
for next_item in iterator:
yield (curr_item, next_item)
curr_item = next_item
yield (curr_item, None)
for i,j in this_and_next( range(5) ):
print(i,j)
def row_number(driver_id, input_data):
    sorted_data = sorted(input_data, key=lambda x: x[0])  # sort the input records by date
    result = []
    row_number = 0
    while row_number < len(input_data):
        row_data = {'row_number': row_number
                    , 'driver_id': driver_id
                    , 'start_timestamp': sorted_data[row_number][0]
                    , 'status': sorted_data[row_number][1]
                    }
        row_number += 1
        result.append(row_data)
    return result
$row_number = Python::row_number(driver_id, input_data);
$raw = (
SELECT
driver_id
, start_timestamp
, status
FROM sample_table
);
$reduced = (
REDUCE $raw
ON driver_id
USING $row_number((start_timestamp, status))
);
SELECT * FROM $reduced;
def LEAD(driver_id, input_data):
    sorted_data = sorted(input_data, key=lambda x: x[0])  # sort the input records by date
    result = []
    row_number = 0
    while row_number < len(input_data) - 1:  # for every state of this driver except the final one, also record the next status
        row_data = {'row_number': row_number
                    , 'driver_id': driver_id
                    , 'start_timestamp': sorted_data[row_number][0]
                    , 'status': sorted_data[row_number][1]
                    , 'status_next': sorted_data[row_number + 1][1]
                    }
        row_number += 1
        result.append(row_data)
    row_data = {'row_number': row_number
                , 'driver_id': driver_id
                , 'start_timestamp': sorted_data[row_number][0]
                , 'status': sorted_data[row_number][1]
                , 'status_next': None  # the driver's final state has no next status, so we store None
                }
    result.append(row_data)
    return result
$orders_card = ( # here I assume the source table is called sample_table; pull the count of all orders paid by card
SELECT COUNT(*)
FROM sample_table
WHERE payment_type = 'card'
);
$orders_cash = ( # count of all orders paid in cash
SELECT COUNT(*)
FROM sample_table
WHERE payment_type = 'cash'
);
$orders_card_completed = ( # count of all completed orders paid by card
SELECT COUNT(*)
FROM sample_table
WHERE payment_type = 'card'
AND status = 'completed'
);
$orders_cash_completed = ( # count of all completed orders paid in cash
SELECT COUNT(*)
FROM sample_table
WHERE payment_type = 'cash'
AND status = 'completed'
);
print(orders_card_completed/orders_card, orders_cash_completed/orders_cash) # compute the two ratios; now they need to be compared
SELECT # convert all values to the datetime format
CONVERT(DATETIME, CONVERT(VARCHAR(30), timestamp), 120)
FROM sample_table;
$sample_table_completed = ( # keep only the completed orders from the table
SELECT *
FROM sample_table
WHERE status = 'completed'
);
$rides_on_a_week = ( # group the table by driver
SELECT driver_id, MIN(timestamp) AS first_trip, MAX(timestamp) AS last_trip, COUNT(id) AS count_trips
FROM sample_table_completed
GROUP BY driver_id
);
SELECT driver_id, count_trips / DATEDIFF(week, first_trip, last_trip) # for each driver, compute their average number of trips per week
FROM rides_on_a_week
$first_last_trips = ( # for each client, find the date and time of their first and last trip
SELECT client_id, MIN(timestamp) AS first_trip, MAX(timestamp) AS last_trip
FROM sample_table_completed
GROUP BY client_id
);
$joined_table = ( # join the tables
SELECT *
FROM sample_table_completed
LEFT JOIN first_last_trips
ON sample_table_completed.client_id = first_last_trips.client_id
);
$clients_paid_first_cash = ( # find all clients who paid for their first trip in cash
SELECT client_id
FROM joined_table
WHERE timestamp = first_trip AND status = 'cash'
);
$clients_paid_first_cash_then_card = ( # find all clients who paid for their first trip in cash and for their last trip by card
SELECT client_id
FROM joined_table
WHERE timestamp = last_trip AND status = 'card' AND (client_id IN clients_paid_first_cash)
);
$share_of_clients = ( # compute the share
SELECT (COUNT(*)
FROM clients_paid_first_cash_then_card) / (COUNT(*) FROM clients_paid_first_cash)
);
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By analogy, we need to write a function that returns the current and the next values.
Step2: <h2>Problem 2. SQL / Python</h2>
Step3: <hr>
Step4: Also, please note that the "reduce... on... using" snippet may contain a mistake: the row_number call is most likely missing the driver_id argument, i.e. in my opinion the correct form would be row_number(driver_id, (start_timestamp, status))
Step5: Here I assumed that all drivers are active and counted the total number of trips, then divided it by the number of weeks between their first and last trip in this database (the date difference between the first and last trip may need to be computed more carefully)
Step6: I assume that a client switched to paying by card if they paid for their first trip in cash and for their last trip by card. All tables from the previous queries are assumed to be saved.
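An illustrative pandas version of the Step6 logic (the original solution is SQL; the column names here are assumptions matching the queries above):
import pandas as pd

def share_switched_to_card(df: pd.DataFrame) -> float:
    # df columns assumed: client_id, timestamp, payment_type, status
    completed = df[df['status'] == 'completed'].sort_values('timestamp')
    first = completed.groupby('client_id').first()
    last = completed.groupby('client_id').last()
    paid_first_cash = first[first['payment_type'] == 'cash'].index
    switched = last.loc[paid_first_cash, 'payment_type'].eq('card').sum()
    return switched / len(paid_first_cash)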
|
14,578 | <ASSISTANT_TASK:>
Python Code:
!pip install lightgbm
!pip install shap
%tensorflow_version 1.x
import lzma
from google.colab import drive
import numpy as np
import tensorflow as tf
import keras
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
import lightgbm as lgb#t
import shap
import sklearn
from sklearn import svm
from sklearn import preprocessing
from sklearn import datasets
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_recall_curve
# from sklearn.metrics import plot_precision_recall_curve
from sklearn.metrics import average_precision_score
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import PCA
from sklearn.svm import SVC
#from scipy import interp
from sklearn.metrics import roc_auc_score
def READ_XZ (filename):
file = lzma.LZMAFile(filename)
type_bytes = file.read(-1)
type_array = np.frombuffer(type_bytes,dtype='float32')
return type_array
def Count(array,val):
count = 0.0
for e in range(array.shape[0]):
if array[e]>val :
count=count+1.0
return count / array.shape[0]
width=40
batch_size=200
ModelName = "Model_40_24_8_24_40_40"
config = tf.ConfigProto( device_count = {'GPU': 1 , 'CPU': 2} )
sess = tf.Session(config=config)
keras.backend.set_session(sess)
K.tensorflow_backend._get_available_gpus()
# this is our input placeholder
input_img = Input(shape=(width*width,))
# "encoded" is the encoded representation of the input
Layer1 = Dense(24*24, activation='relu')(input_img)
Layer2 = Dense(8*8, activation='relu')(Layer1)
Layer3 = Dense(24*24, activation='relu')(Layer2)
Layer4 = Dense(40*40, activation='relu')(Layer3)
Out = Dense(40*40, activation='softmax')(Layer4)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, Out)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
def NAME(eventtype,purpose,i,obs) :
return "./"+eventtype+"/"+purpose+"/"+obs+"."+str(i)+".bin.xz"
#
def EvalOnFile (InFileName,OutFileName):
data = READ_XZ (InFileName)
x_train = data.reshape(-1,width*width)
x_out = autoencoder.predict(x_train,200,use_multiprocessing=True)
diff = x_train - x_out
lrnorm = np.ones((diff.shape[0]))
for e in range(diff.shape[0]):
lrnorm[e] = np.linalg.norm(diff[e])
lrnorm.tofile(OutFileName)
print(lrnorm.shape)
BATCH_SIZE=512
def TrainOnFile (filename,testfilename,totalepochs):
data = READ_XZ (filename)
x_train = data.reshape(-1,width*width)
datatest = READ_XZ (testfilename)
x_test = datatest.reshape(-1,width*width)
autoencoder.fit(
x_train, x_train, epochs=totalepochs,
batch_size=BATCH_SIZE, shuffle=True,
validation_data=(x_test, x_test)
)
autoencoder.save(ModelName)
drive.mount('/gdrive')
%cd /gdrive
%cd /gdrive/My Drive/S2
!ls
!cp ./Model_40_24_8_24_40_40 ../Model_40_24_8_24_40_40.bak
!ls ../
# !tar -xf S2.tar
%cd /gdrive/My Drive/S2
autoencoder = keras.models.load_model(ModelName)
%cd /gdrive/My Drive/S2
#autoencoder = keras.models.load_model(ModelName)
#!ls ./TOP/TRAIN/*.*.bin.xz
#
for e in range(20):
for i in range(7):
TrainOnFile(NAME("QCD","TRAIN",i%7,"out"),NAME("QCD","TEST",i%3,"out"),10)
#
for i in range(3):
TrainOnFile(NAME("QCD","VAL",i%7,"out"),NAME("QCD","TEST",i%3,"out"),10)
#
#
for i in range(7) :
EvalOnFile(NAME("QCD","TRAIN",i,"out"),NAME("QCD","TRAIN",i,"loss"))
EvalOnFile(NAME("TOP","TRAIN",i,"out"),NAME("TOP","TRAIN",i,"loss"))
#
for i in range(3) :
EvalOnFile(NAME("QCD","TEST",i,"out"),NAME("QCD","TEST",i,"loss"))
EvalOnFile(NAME("TOP","TEST",i,"out"),NAME("TOP","TEST",i,"loss"))
EvalOnFile(NAME("QCD","VAL",i,"out"),NAME("QCD","VAL",i,"loss"))
EvalOnFile(NAME("TOP","VAL",i,"out"),NAME("TOP","VAL",i,"loss"))
#
def ReadLossMassNsub(eventtype,sampletype,i):
loss = np.fromfile(NAME(eventtype,sampletype,i,"loss"), dtype=float)
mass = READ_XZ(NAME(eventtype,sampletype,i,"mass"))
nsub = READ_XZ(NAME(eventtype,sampletype,i,"nsub")).reshape(-1,5)
#print(nsub.shape)
out = np.ones((mass.shape[0],7))
for i in range(mass.shape[0]):
out[i][0] = loss[i]
out[i][1] = mass[i]
out[i][2] = nsub[i][0]
out[i][3] = nsub[i][1]
out[i][4] = nsub[i][2]
out[i][5] = nsub[i][3]
out[i][6] = nsub[i][4]
#
return out
#
vars_qcd_train = ReadLossMassNsub("QCD","TRAIN",0)
vars_qcd_train = np.append (vars_qcd_train,ReadLossMassNsub("QCD","TRAIN",1),0)
vars_qcd_train = np.append (vars_qcd_train,ReadLossMassNsub("QCD","TRAIN",2),0)
vars_qcd_train = np.append (vars_qcd_train,ReadLossMassNsub("QCD","TRAIN",3),0)
vars_qcd_train = np.append (vars_qcd_train,ReadLossMassNsub("QCD","TRAIN",4),0)
vars_qcd_train = np.append (vars_qcd_train,ReadLossMassNsub("QCD","TRAIN",5),0)
vars_qcd_train = np.append (vars_qcd_train,ReadLossMassNsub("QCD","TRAIN",6),0)
vars_qcd_test = ReadLossMassNsub("QCD","TEST",0)
vars_qcd_test = np.append (vars_qcd_test,ReadLossMassNsub("QCD","TEST",1),0)
vars_qcd_test = np.append (vars_qcd_test,ReadLossMassNsub("QCD","TEST",2),0)
vars_qcd_val = ReadLossMassNsub("QCD","VAL",0)
vars_qcd_val = np.append (vars_qcd_val,ReadLossMassNsub("QCD","VAL",1),0)
vars_qcd_val = np.append (vars_qcd_val,ReadLossMassNsub("QCD","VAL",2),0)
vars_top_train = ReadLossMassNsub("TOP","TRAIN",0)
vars_top_train = np.append (vars_top_train,ReadLossMassNsub("TOP","TRAIN",1),0)
vars_top_train = np.append (vars_top_train,ReadLossMassNsub("TOP","TRAIN",2),0)
vars_top_train = np.append (vars_top_train,ReadLossMassNsub("TOP","TRAIN",3),0)
vars_top_train = np.append (vars_top_train,ReadLossMassNsub("TOP","TRAIN",4),0)
vars_top_train = np.append (vars_top_train,ReadLossMassNsub("TOP","TRAIN",5),0)
vars_top_train = np.append (vars_top_train,ReadLossMassNsub("TOP","TRAIN",6),0)
vars_top_test = ReadLossMassNsub("TOP","TEST",0)
vars_top_test = np.append (vars_top_test,ReadLossMassNsub("TOP","TEST",1),0)
vars_top_test = np.append (vars_top_test,ReadLossMassNsub("TOP","TEST",2),0)
vars_top_val = ReadLossMassNsub("TOP","VAL",0)
vars_top_val = np.append (vars_top_val,ReadLossMassNsub("TOP","VAL",1),0)
vars_top_val = np.append (vars_top_val,ReadLossMassNsub("TOP","VAL",2),0)
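# Optional sanity check (added sketch): each vars_* array should have one row per jet
# and 7 columns: [autoencoder loss, jet mass, tau_1 ... tau_5].
print(vars_qcd_test.shape, vars_top_test.shape)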
plt.hist(vars_qcd_test[:,0],100,(0.0,0.4),density=True,histtype='step')
plt.hist(vars_top_test[:,0],100,(0.0,0.4),density=True,histtype='step')
plt.show()
plt.hist(vars_qcd_test[:,1],100,(0.0,1000),density=True,histtype='step')
plt.hist(vars_top_test[:,1],100,(0.0,1000),density=True,histtype='step')
plt.show()
plt.hist(vars_qcd_test[:,2],100,(0.0,100),density=True,histtype='step')
plt.hist(vars_top_test[:,2],100,(0.0,100),density=True,histtype='step')
plt.show()
plt.hist(vars_qcd_test[:,3],100,(0.0,100),density=True,histtype='step')
plt.hist(vars_top_test[:,3],100,(0.0,100),density=True,histtype='step')
plt.show()
plt.hist(vars_qcd_test[:,4],100,(0.0,100),density=True,histtype='step')
plt.hist(vars_top_test[:,4],100,(0.0,100),density=True,histtype='step')
plt.show()
plt.hist(vars_qcd_test[:,5],100,(0.0,100),density=True,histtype='step')
plt.hist(vars_top_test[:,5],100,(0.0,100),density=True,histtype='step')
plt.show()
dx = (0.4 - 0.0) / 100.0
qcdeff = np.ones((100))
topeff = np.ones((100))
for i in range(100):
xval = i*dx
qcdeff[i]=1.0/(Count(vars_qcd_test[:,0],xval)+0.0000000001)
topeff[i]=Count(vars_top_test[:,0],xval)
plt.yscale('log')
plt.plot(topeff,qcdeff)
import sklearn
def prepare (qcd_vars,top_vars) :
out_x = np.append(qcd_vars,top_vars,0)
out_y = np.append(np.zeros((qcd_vars.shape[0]),dtype='float32'),np.ones((top_vars.shape[0]),dtype='float32'),0)
return sklearn.utils.shuffle ( out_x , out_y , random_state=0 )
train_x, train_y = prepare(vars_qcd_train,vars_top_train)
test_x, test_y = prepare(vars_qcd_test,vars_top_test)
val_x, val_y = prepare(vars_qcd_val,vars_top_val)
param = { 'objective':'binary' , 'metric':'auc,binary_logloss,binary_error' }
plt.hist(train_x[:,0],100,(0.0,0.4),density=True,histtype='step')
plt.hist(test_x[:,0],100,(0.0,0.4),density=True,histtype='step')
plt.show()
plt.hist(train_x[:,1],100,(0.0,1000),density=True,histtype='step')
plt.hist(test_x[:,1],100,(0.0,1000),density=True,histtype='step')
plt.show()
num_round = 100
#train_data = lgb.Dataset( train_x[:,0:0] , label=train_y )
#val_data = lgb.Dataset( val_x[:,0:0] , label=val_y )
train_data = lgb.Dataset( train_x[:,0].reshape((-1,1)) , label=train_y )
val_data = lgb.Dataset( val_x[:,0].reshape((-1,1)) , label=val_y )
bst = lgb.train(param, train_data, num_round, valid_sets=val_data)
pred_qcd_test = bst.predict(vars_qcd_test[:,0].reshape((-1,1)))
pred_top_test = bst.predict(vars_top_test[:,0].reshape((-1,1)))
epsilon = 0.0000001
num = 1000
dx = ( 1.0 + (epsilon*2) ) / num
qcdeff_loss = np.ones((num))
topeff_loss = np.ones((num))
for i in range(num):
xval = (i*dx) - epsilon
qcdeff_loss[i]=1.0/(Count(pred_qcd_test,xval)+epsilon)
topeff_loss[i]=Count(pred_top_test,xval)
plt.yscale('log')
plt.plot(topeff_loss,qcdeff_loss)
num_round = 100
train_data = lgb.Dataset( train_x[:,0:6] , label=train_y )
val_data = lgb.Dataset( val_x[:,0:6] , label=val_y )
bst = lgb.train(param, train_data, num_round, valid_sets=val_data)
pred_qcd_test = bst.predict(vars_qcd_test[:,0:6])
pred_top_test = bst.predict(vars_top_test[:,0:6])
epsilon = 0.0000001
num = 1000
dx = ( 1.0 + (epsilon*2) ) / num
qcdeff_all = np.ones((num))
topeff_all = np.ones((num))
for i in range(num):
xval = (i*dx) - epsilon
qcdeff_all[i]=1.0/(Count(pred_qcd_test,xval)+epsilon)
topeff_all[i]=Count(pred_top_test,xval)
plt.yscale('log')
plt.plot(topeff_all,qcdeff_all)
num_round = 100
train_data = lgb.Dataset( train_x[:,1:6] , label=train_y )
val_data = lgb.Dataset( val_x[:,1:6] , label=val_y )
bst = lgb.train(param, train_data, num_round, valid_sets=val_data)
pred_qcd_test = bst.predict(vars_qcd_test[:,1:6])
pred_top_test = bst.predict(vars_top_test[:,1:6])
epsilon = 0.0000001
num = 1000
dx = ( 1.0 + (epsilon*2) ) / num
qcdeff_noloss = np.ones((num))
topeff_noloss = np.ones((num))
for i in range(num):
xval = (i*dx) - epsilon
qcdeff_noloss[i]=1.0/(Count(pred_qcd_test,xval)+epsilon)
topeff_noloss[i]=Count(pred_top_test,xval)
plt.yscale('log')
plt.plot(topeff_noloss,qcdeff_noloss)
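# Optional sketch (added): roc_auc_score was imported above but never used; one possible
# way to summarise each tagger with a single number. Assumes pred_qcd_test / pred_top_test
# from the model trained just above (label 0 = QCD, 1 = top).
auc_labels = np.append(np.zeros(pred_qcd_test.shape[0]), np.ones(pred_top_test.shape[0]))
auc_scores = np.append(pred_qcd_test, pred_top_test)
print("AUC (mass + nsub, no autoencoder loss):", roc_auc_score(auc_labels, auc_scores))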
np.savetxt("topeff_loss",topeff_loss)
np.savetxt("qcdeff_loss",qcdeff_loss)
np.savetxt("topeff_all",topeff_all)
np.savetxt("qcdeff_all",qcdeff_all)
np.savetxt("topeff_noloss",topeff_noloss)
np.savetxt("qcdeff_noloss",qcdeff_noloss)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing packages and defining functions and variables
Step2: Defining autoencoder model, Training and evaluation functions
Step3: Mount google drive to access data
Step4: Check the files exist and make a copy of the autoencoder model for backup
Step5: CD to the main data directory and load the trained model
Step6: Train another round if required
Step7: Evaluation using the trained model
Step8: Read the important data
Step9: Plotting and checking
Step10: Plot $m_J$ (jet mass)
Step11: Plot jet $\tau_1$ (nsubjettiness)
Step12: Plot jet $\tau_2$ (nsubjettiness)
Step13: Plot jet $\tau_3$ (nsubjettiness)
Step14: Plot jet $\tau_4$ (nsubjettiness)
Step15: Plot ROC using only $\epsilon$
Step16: Combining variables
Step17: Decision trees using only autoencoder loss
Step18: Plot the ROC from the above model
Step19: Train BDT using all variables
Step20: Plot ROC using above model
Step21: Not using the autoencoder loss
Step22: Plot ROC for above model
|
14,579 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int) + 1  # +1 because word indices start at 1; index 0 is reserved for padding
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
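# Note (added): embed has shape (batch_size, 200, 300) — one 300-dimensional vector per
# word of each padded review, looked up from the (n_words, 300) embedding matrix.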
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
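# Optional sketch (added): quick sanity check of the generator — grab one batch and confirm
# the shapes match (batch_size, seq_len) and (batch_size,). Assumes train_x/train_y above.
sample_x, sample_y = next(get_batches(train_x, train_y, batch_size))
print(sample_x.shape, sample_y.shape)  # expected: (500, 200) (500,)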
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Step5: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
Step8: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step10: Embedding
Step11: LSTM cell
Step12: RNN forward pass
Step13: Output
Step14: Validation accuracy
Step15: Batching
Step16: Training
Step17: Testing
|
14,580 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def lorentz_derivs(yvec, t, sigma, rho, beta):
    """Compute the derivatives for the Lorenz system at yvec(t)."""
# YOUR CODE HERE
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y - x)
dy = x*(rho - z) - y
dz = x*y - beta*z
return np.array([dx, dy, dz])
print(lorentz_derivs(np.array([0.0, 1.0, 0.0]), 1, 1, 1, 1))
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Solve the Lorenz system for a single initial condition.

    Parameters
    ----------
    ic : array, list, tuple
        Initial conditions [x,y,z].
    max_time: float
        The max time to use. Integrate with 250 points per time unit.
    sigma, rho, beta: float
        Parameters of the differential equation.

    Returns
    -------
    soln : np.ndarray
        The array of the solution. Each row will be the solution vector at that time.
    t : np.ndarray
        The array of time points used.
    """
# YOUR CODE HERE
    t = np.linspace(0, max_time, int(5*max_time))  # int() so numpy accepts the sample count
soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta), atol=1e-9, rtol=1e-8)
return np.array(soln), np.array(t)
print(solve_lorentz(np.array([0.0, 1.0, 0.0]), 2, 1, 1, 1))
assert True # leave this to grade solve_lorenz
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Plot [x(t),z(t)] for the Lorenz system.

    Parameters
    ----------
    N : int
        Number of initial conditions and trajectories to plot.
    max_time: float
        Maximum time to use.
    sigma, rho, beta: float
        Parameters of the differential equation.
    """
# YOUR CODE HERE
plt.figure(figsize = (15,8))
np.random.seed(1)
k= []
for i in range(N):
data = (np.random.random(3)-0.5)*30
k.append(solve_lorentz(data, max_time, sigma, rho, beta))
for j in k:
x = [p[0] for p in j[0]]
z = [p[2] for p in j[0]]
color = plt.cm.hot((x[0] + z[0])/60+0.5)
plt.scatter(x, z, color = color)
plt.xlabel('$x(t)$')
plt.ylabel('$z(t)$')
plt.title('Lorentz System')
# print(plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0))
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
# YOUR CODE HERE
interact(plot_lorentz, max_time = [1,10], N = [1,50], sigma=[0.0,50.0], rho=[0.0,50.0], beta=fixed(8/3));
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Lorenz system
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with
|
14,581 | <ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
# tensorflow
import tensorflow as tf
print('Expected TensorFlow version is v1.3.0 or higher')
print('Your TensorFlow version:', tf.__version__)
# data manipulation
import numpy as np
import pandas as pd
# visualization
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = [12,8]
def make_noisy_data(m=0.1, b=0.3, n=100):
x = np.random.randn(n)
noise = np.random.normal(scale=0.01, size=len(x))
y = m * x + b + noise
return x, y
x_train, y_train = make_noisy_data()
plt.plot(x_train, y_train, 'b.')
# input and output
x = tf.placeholder(shape=[None], dtype=tf.float32, name='x')
y_label = tf.placeholder(shape=[None], dtype=tf.float32, name='y_label')
# variables
W = tf.Variable(tf.random_normal([1], name="W")) # weight
b = tf.Variable(tf.random_normal([1], name="b")) # bias
# actual model
y = W * x + b
loss = tf.reduce_mean(tf.square(y - y_label))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init) # initialize variables
for i in range(100): # train for 100 steps
sess.run(train, feed_dict={x: x_train, y_label:y_train})
x_plot = np.linspace(-3, 3, 101) # return evenly spaced numbers over a specified interval
# using the trained model to predict values for the training data
y_plot = sess.run(y, feed_dict={x: x_plot})
# saving final weight and bias
final_W = sess.run(W)
final_b = sess.run(b)
plt.scatter(x_train, y_train)
plt.plot(x_plot, y_plot, 'g')
print('W:', final_W, 'expected: 0.1')
print('b:', final_b, 'expected: 0.3')
x_dict = {'x': x_train}
train_input = tf.estimator.inputs.numpy_input_fn(x_dict, y_train,
shuffle=True,
num_epochs=None) # repeat forever
features = [tf.feature_column.numeric_column('x')] # because x is a real number
estimator = tf.estimator.LinearRegressor(features)
estimator.train(train_input, steps = 1000)
x_test_dict = {'x': np.linspace(-5, 5, 11)}
data_source = tf.estimator.inputs.numpy_input_fn(x_test_dict, shuffle=False)
predictions = list(estimator.predict(data_source))
preds = [p['predictions'][0] for p in predictions]
for y in predictions:
print(y['predictions'])
plt.scatter(x_train, y_train)
plt.plot(x_test_dict['x'], preds, 'g')
census_train_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
census_train_path = tf.contrib.keras.utils.get_file('census.train', census_train_url)
census_test_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test'
census_test_path = tf.contrib.keras.utils.get_file('census.test', census_test_url)
column_names = [
'age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
'income'
]
census_train = pd.read_csv(census_train_path, index_col=False, names=column_names)
census_test = pd.read_csv(census_train_path, index_col=False, names=column_names)
census_train_label = census_train.pop('income') == " >50K"
census_test_label = census_test.pop('income') == " >50K"
census_train.head(10)
census_train_label[:20]
train_input = tf.estimator.inputs.pandas_input_fn(
census_train,
census_train_label,
shuffle=True,
batch_size = 32, # process 32 examples at a time
num_epochs=None,
)
test_input = tf.estimator.inputs.pandas_input_fn(
census_test,
census_test_label,
shuffle=True,
num_epochs=1)
features, labels = train_input()
features
features = [
tf.feature_column.numeric_column('hours-per-week'),
tf.feature_column.bucketized_column(tf.feature_column.numeric_column('education-num'), list(range(25))),
tf.feature_column.categorical_column_with_vocabulary_list('sex', ['male','female']),
tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000),
]
estimator = tf.estimator.LinearClassifier(features, model_dir='census/linear',n_classes=2)
estimator.train(train_input, steps=5000)
estimator.evaluate(test_input)
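# Note (added): evaluate() returns a dict of metrics for the test set, e.g. something like
# {'accuracy': ..., 'auc': ..., 'loss': ..., 'global_step': ...}; you could keep the result:
# results = estimator.evaluate(test_input); print(results['accuracy'])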
features = [
tf.feature_column.numeric_column('education-num'),
tf.feature_column.numeric_column('hours-per-week'),
tf.feature_column.numeric_column('age'),
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list('sex',['male','female'])),
tf.feature_column.embedding_column( # now using embedding!
tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000), 10)
]
estimator = tf.estimator.DNNClassifier(hidden_units=[20,20],
feature_columns=features,
n_classes=2,
model_dir='census/dnn')
estimator.train(train_input, steps=5000)
estimator.evaluate(test_input)
def census_input_fn(path):
def input_fn():
dataset = (
tf.contrib.data.TextLineDataset(path)
.map(csv_decoder)
.shuffle(buffer_size=100)
.batch(32)
.repeat())
columns = dataset.make_one_shot_iterator().get_next()
income = tf.equal(columns.pop('income')," >50K")
return columns, income
return input_fn
csv_defaults = collections.OrderedDict([
('age',[0]),
('workclass',['']),
('fnlwgt',[0]),
('education',['']),
('education-num',[0]),
('marital-status',['']),
('occupation',['']),
('relationship',['']),
('race',['']),
('sex',['']),
('capital-gain',[0]),
('capital-loss',[0]),
('hours-per-week',[0]),
('native-country',['']),
('income',['']),
])
def csv_decoder(line):
    parsed = tf.decode_csv(line, list(csv_defaults.values()))  # list() so the defaults work under Python 3
return dict(zip(csv_defaults.keys(), parsed))
tf.reset_default_graph()
census_input = census_input_fn(census_train_path)
training_batch = census_input()
with tf.Session() as sess:
features, high_income = sess.run(training_batch)
print(features['education'])
print(features['age'])
print(high_income)
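# Optional sketch (added, not run here): the hand-written input_fn can replace the pandas
# helper when training an estimator, e.g. reusing feature columns defined as above
# (hypothetical variable name feature_columns):
# estimator = tf.estimator.LinearClassifier(feature_columns, model_dir='census/linear_ds')
# estimator.train(census_input_fn(census_train_path), steps=1000)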
train,test = tf.contrib.keras.datasets.mnist.load_data()
x_train,y_train = train
x_test,y_test = test
mnist_train_input = tf.estimator.inputs.numpy_input_fn({'x':np.array(x_train, dtype=np.float32)},
np.array(y_train,dtype=np.int32),
shuffle=True,
num_epochs=None)
mnist_test_input = tf.estimator.inputs.numpy_input_fn({'x':np.array(x_test, dtype=np.float32)},
np.array(y_test,dtype=np.int32),
shuffle=True,
num_epochs=1)
estimator = tf.estimator.LinearClassifier([tf.feature_column.numeric_column('x',shape=784)],
n_classes=10,
model_dir="mnist/linear")
estimator.train(mnist_train_input, steps = 10000)
estimator.evaluate(mnist_test_input)
estimator = tf.estimator.DNNClassifier(hidden_units=[256],
feature_columns=[tf.feature_column.numeric_column('x',shape=784)],
n_classes=10,
model_dir="mnist/DNN")
estimator.train(mnist_train_input, steps = 10000)
estimator.evaluate(mnist_test_input)
# Parameters
BATCH_SIZE = 128
STEPS = 10000
def build_cnn(input_layer, mode):
with tf.name_scope("conv1"):
conv1 = tf.layers.conv2d(inputs=input_layer,filters=32, kernel_size=[5, 5],
padding='same', activation=tf.nn.relu)
with tf.name_scope("pool1"):
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
with tf.name_scope("conv2"):
conv2 = tf.layers.conv2d(inputs=pool1,filters=64, kernel_size=[5, 5],
padding='same', activation=tf.nn.relu)
with tf.name_scope("pool2"):
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
with tf.name_scope("dense"):
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
with tf.name_scope("dropout"):
is_training_mode = mode == tf.estimator.ModeKeys.TRAIN
dropout = tf.layers.dropout(inputs=dense, rate=0.4, training=is_training_mode)
logits = tf.layers.dense(inputs=dropout, units=10)
return logits
def model_fn(features, labels, mode):
# Describing the model
input_layer = tf.reshape(features['x'], [-1, 28, 28, 1])
tf.summary.image('mnist_input',input_layer)
logits = build_cnn(input_layer, mode)
# Generate Predictions
classes = tf.argmax(input=logits, axis=1)
predictions = {
'classes': classes,
'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
}
if mode == tf.estimator.ModeKeys.PREDICT:
# Return an EstimatorSpec object
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
with tf.name_scope('loss'):
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_sum(loss)
tf.summary.scalar('loss', loss)
with tf.name_scope('accuracy'):
accuracy = tf.cast(tf.equal(tf.cast(classes,tf.int32),labels),tf.float32)
accuracy = tf.reduce_mean(accuracy)
tf.summary.scalar('accuracy', accuracy)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.train.get_global_step(),
learning_rate=1e-4,
optimizer='Adam')
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
loss=loss, train_op=train_op)
# Configure the accuracy metric for evaluation
eval_metric_ops = {
        'accuracy': tf.metrics.accuracy(
            labels=labels,
            predictions=classes)
}
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
loss=loss, eval_metric_ops=eval_metric_ops)
# create estimator
run_config = tf.contrib.learn.RunConfig(model_dir='mnist/CNN')
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
# train for 10000 steps
estimator.train(input_fn=mnist_train_input, steps=10000)
# evaluate
estimator.evaluate(input_fn=mnist_test_input)
# predict
preds = estimator.predict(input_fn=mnist_test_input)
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Enable TensorFlow logs
tf.logging.set_verbosity(tf.logging.INFO)
# create experiment
def experiment_fn(run_config, hparams):
# create estimator
estimator = tf.estimator.Estimator(model_fn=model_fn,
config=run_config)
return tf.contrib.learn.Experiment(
estimator,
        train_input_fn=mnist_train_input,
        eval_input_fn=mnist_test_input,
train_steps=STEPS
)
# run experiment
learn_runner.run(experiment_fn,
run_config=run_config)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) Simple Linear Regression with low-level TensorFlow
Step2: Create training data
Step3: Plot the training data
Step4: The Model
Step5: The Loss and Optimizer
Step6: The Training Loop and generating predictions
Step7: Visualizing predictions
Step8: What is the final weight and bias?
Step9: 2) Simple Linear Regression with a canned estimator
Step10: Describe input feature usage
Step11: Build and train the model
Step12: Generating and visualizing predictions
Step13: 3) Playing with real data
Step14: Load the data
Step15: Input pipeline
Step16: Feature description
Step17: Evaluate the model
Step18: DNN model
Step19: Custom Input Pipeline using Datasets API
Step20: Try the input function
Step21: 4) Building a custom estimator to classify handwritten digits (MNIST)
Step22: tf.estimator.LinearClassifier
Step23: Examine the results with TensorBoard
Step24: A Custom Model
Step25: Runs estimator
Step26: Distributed tensorflow
|
14,582 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables auto-differentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %s samples" % ((step + 1) * batch_size))
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * batch_size))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
@tf.function
def test_step(x, y):
val_logits = model(x, training=False)
val_acc_metric.update_state(y, val_logits)
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(x_batch_train, y_batch_train)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * batch_size))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
# Add any extra losses created during the forward pass.
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.GlobalMaxPooling2D(),
layers.Dense(1),
],
name="discriminator",
)
discriminator.summary()
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
layers.Dense(7 * 7 * 128),
layers.LeakyReLU(alpha=0.2),
layers.Reshape((7, 7, 128)),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)
# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
@tf.function
def train_step(real_images):
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Decode them to fake images
generated_images = generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(labels.shape)
# Train the discriminator
with tf.GradientTape() as tape:
predictions = discriminator(combined_images)
d_loss = loss_fn(labels, predictions)
grads = tape.gradient(d_loss, discriminator.trainable_weights)
d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = discriminator(generator(random_latent_vectors))
g_loss = loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, generator.trainable_weights)
g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
return d_loss, g_loss, generated_images
import os
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 1 # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"
for epoch in range(epochs):
print("\nStart epoch", epoch)
for step, real_images in enumerate(dataset):
# Train the discriminator & generator on one batch of real images.
d_loss, g_loss, generated_images = train_step(real_images)
# Logging.
if step % 200 == 0:
# Print metrics
print("discriminator loss at step %d: %.2f" % (step, d_loss))
print("adversarial loss at step %d: %.2f" % (step, g_loss))
# Save one generated image
img = tf.keras.preprocessing.image.array_to_img(
generated_images[0] * 255.0, scale=False
)
img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Writing a training loop from scratch
Step2: Introduction
Step3: Train with a custom training loop using minibatch gradients.
Step4: The training loop is as follows.
Step5: Low-level handling of metrics
Step6: The training and evaluation loop is as follows.
Step7: Speeding up the training step with tf.function
Step8: The same can be done for the evaluation step.
Step9: Next, run the training loop again with this compiled training step.
Step10: That sped things up.
Step11: Let's build a very simple model that uses this.
Step12: The training step then looks like this.
Step13: Putting it all together
Step14: Next, create a generator network that turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits).
Step15: The important part here is the training loop. As you can see, it is very simple: the training step function is only 17 lines.
Step16: Train the GAN by repeatedly calling train_step on batches of images.
|
14,583 | <ASSISTANT_TASK:>
Python Code:
4*2
import os
# Load the os library
import os
# Load the request module
import urllib.request
# Import SSL which we need to setup for talking to the HTTPS server
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# Create a directory
os.mkdir('img_align_celeba')
# Now perform the following 10 times:
for img_i in range(1, 11):
# create a string using the current loop counter
f = '000%03d.jpg' % img_i
# and get the url with that string appended the end
url = 'https://s3.amazonaws.com/cadl/celeb-align/' + f
# We'll print this out to the console so we can see how far we've gone
print(url, end='\r')
# And now download the url to a location inside our new directory
urllib.request.urlretrieve(url, os.path.join('img_align_celeba', f))
help(os.listdir)
files = os.listdir('img_align_celeba')
[file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i]
[file_i for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i and '00000' in file_i]
[file_i for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i or '.png' in file_i or '.jpeg' in file_i]
files = [file_i
for file_i in os.listdir('img_align_celeba')
if file_i.endswith('.jpg')]
print(files[0])
print(files[1])
print(files[-1])
print(files[-2])
import matplotlib.pyplot as plt
%matplotlib inline
# uncomment the lines to try them
# help(plt)
# plt.<tab>
plt.imread?
import numpy as np
# help(np)
# np.<tab>
# img = plt.imread(files[0])
# outputs: FileNotFoundError
print(os.path.join('img_align_celeba', files[0]))
plt.imread(os.path.join('img_align_celeba', files[0]))
files = [os.path.join('img_align_celeba', file_i)
for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i]
img = plt.imread(files[0])
# img.<tab>
img = plt.imread(files[0])
plt.imshow(img)
img.shape
# outputs: (218, 178, 3)
plt.figure()
plt.imshow(img[:, :, 0])
plt.figure()
plt.imshow(img[:, :, 1])
plt.figure()
plt.imshow(img[:, :, 2])
np.min(img), np.max(img)
2**32
img.dtype
img.astype(np.float32)
plt.imread(files[0])
print(np.random.randint(0, len(files)))
print(np.random.randint(0, len(files)))
print(np.random.randint(0, len(files)))
filename = files[np.random.randint(0, len(files))]
img = plt.imread(filename)
plt.imshow(img)
def plot_image(filename):
img = plt.imread(filename)
plt.imshow(img)
f = files[np.random.randint(0, len(files))]
plot_image(f)
plot_image(files[np.random.randint(0, len(files))])
def imcrop_tosquare(img):
    """Make any image a square image.

    Parameters
    ----------
    img : np.ndarray
        Input image to crop, assumed at least 2d.

    Returns
    -------
    crop : np.ndarray
        Cropped image.
    """
if img.shape[0] > img.shape[1]:
extra = (img.shape[0] - img.shape[1])
if extra % 2 == 0:
crop = img[extra // 2:-extra // 2, :]
else:
crop = img[max(0, extra // 2 + 1):min(-1, -(extra // 2)), :]
elif img.shape[1] > img.shape[0]:
extra = (img.shape[1] - img.shape[0])
if extra % 2 == 0:
crop = img[:, extra // 2:-extra // 2]
else:
crop = img[:, max(0, extra // 2 + 1):min(-1, -(extra // 2))]
else:
crop = img
return crop
def imcrop(img, amt):
if amt <= 0 or amt >= 1:
return img
row_i = int(img.shape[0] * amt) // 2
col_i = int(img.shape[1] * amt) // 2
return img[row_i:-row_i, col_i:-col_i]
#from scipy.<tab>misc import <tab>imresize
from scipy.misc import imresize
imresize?
square = imcrop_tosquare(img)
crop = imcrop(square, 0.2)
rsz = imresize(crop, (64, 64))
plt.imshow(rsz)
plt.imshow(rsz, interpolation='nearest')
mean_img = np.mean(rsz, axis=2)
print(mean_img.shape)
plt.imshow(mean_img, cmap='gray')
imgs = []
for file_i in files:
img = plt.imread(file_i)
square = imcrop_tosquare(img)
crop = imcrop(square, 0.2)
rsz = imresize(crop, (64, 64))
imgs.append(rsz)
print(len(imgs))
plt.imshow(imgs[0])
imgs[0].shape
data = np.array(imgs)
data.shape
data = np.concatenate([img_i[np.newaxis] for img_i in imgs], axis=0)
data.shape
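# Illustration (added): np.newaxis simply prepends a "batch" dimension of size 1:
print(imgs[0].shape)              # (64, 64, 3)
print(imgs[0][np.newaxis].shape)  # (1, 64, 64, 3)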
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now press 'a' or 'b' to create new cells. You can also use the toolbar to create new cells. You can also use the arrow keys to move up and down.
Step2: After exectuing this cell, your kernel will have access to everything inside the os library which is a common library for interacting with the operating system. We'll need to use the import statement for all of the libraries that we include.
Step3: Using the os package, we can list an entire directory. The documentation or docstring, says that listdir takes one parameter, path
Step4: This is the location of the directory we need to list. Let's save it to a variable so that we can easier inspect the directory of images we just downloaded
Step5: We can also specify to include only certain files like so
Step6: or even
Step7: We could also combine file types if we happened to have multiple types
Step8: Let's set this list to a variable, so we can perform further actions on it
Step9: And now we can index that list using the square brackets
Step10: We can even go in the reverse direction, which wraps around to the end of the list
Step11: <a name="loading-an-image"></a>
Step12: Now we can refer to the entire module by just using plt instead of matplotlib.pyplot every time. This is pretty common practice.
Step13: This isn't python, so won't work inside of any python script files. This only works inside notebook. What this is saying is that whenever we plot something using matplotlib, put the plots directly into the notebook, instead of using a window popup, which is the default behavior. This is something that makes notebook really useful for teaching purposes, as it allows us to keep all of our images/code in one document.
Step14: Selecting a function from the dropdown and adding a ? at the end will bring up the function's documentation.
Step15: Here we see that it actually returns a variable which requires us to use another library, NumPy. NumPy makes working with numerical data a lot easier. Let's import it as well
Step16: Let's try loading the first image in our dataset
Step17: plt.imread will not know where that file is. We can tell it where to find the file by using os.path.join
Step18: Now we get a bunch of numbers! I'd rather not have to keep prepending the path to my files, so I can create the list of files like so
Step19: Let's set this to a variable, img, and inspect a bit further what's going on
Step20: <a name="rgb-image-representation"></a>
Step21: Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor
Step22: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels. We can use the square brackets just like when we tried to access elements of our list
Step23: We use the special colon operator to 'say take every value in this dimension'. This is saying, 'give me every row, every column, and the 0th dimension of the color channels'.
Step24: The numbers are all between 0 to 255. What a strange number you might be thinking. Unless you are one of 10 types of people in this world, those that understand binary and those that don't. Don't worry if you're not. You are likely better off.
Step25: numpy arrays have a field which will tell us how many bits they are using
Step26: uint8
Step27: This is saying, let me see this data as a floating point number, meaning with decimal places, and with 32 bits of precision, rather than the previous data types 8 bits. This will become important when we start to work with neural networks, as we'll need all of those extra possible values!
Step28: to pick a random image from our list of files, we can use the numpy random module
Step29: This function will produce random integers between a range of values that we specify. We say, give us random integers from 0 to the length of files.
Step30: This might be something useful that we'd like to do often. So we can use a function to help us in the future
Step31: This function takes one parameter, a variable named filename, which we will have to specify whenever we call it. That variable is fed into the plt.imread function, and used to load an image. It is then drawn with plt.imshow. Let's see how we can use this function definition
Step32: or simply
Step34: We use functions to help us reduce the main flow of our code. It helps to make things clearer, using function names that help describe what is going on.
Step35: There are a few things going on here. First, we are defining a function which takes as input a single variable. This variable gets named img inside the function, and we enter a set of if/else-if conditionals. The first branch says, if the rows of img are greater than the columns, then set the variable extra to their difference and divide by 2. The // notation means to perform an integer division, instead of a floating point division. So 3 // 2 = 1, not 1.5. We need integers for the next line of code which says to set the variable crop to img starting from extra rows, and ending at negative extra rows down. We can't be on row 1.5, only row 1 or 2. So that's why we need the integer divide there. Let's say our image was 128 x 96 x 3. We would have extra = (128 - 96) // 2, or 16. Then we'd start from the 16th row, and end at the -16th row, or the 112th row. That adds up to 96 rows, exactly the same number of columns as we have.
Step36: <a name="resizing-images"></a>
Step37: Notice that you can hit tab after each step to see what is available. That is really helpful as I never remember what the exact names are.
Step38: The imresize function takes a input image as its first parameter, and a tuple defining the new image shape as rows and then columns.
Step39: Great! To really see what's going on, let's turn off the interpolation like so
Step40: Each one of these squares is called a pixel. Since this is a color image, each pixel is actually a mixture of 3 values, Red, Green, and Blue. When we mix those proportions of Red Green and Blue, we get the color shown here.
Step41: This is an incredibly useful function which we'll revisit later when we try to visualize the mean image of our entire dataset.
Step42: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets
Step43: Since all of the images are the same size, we can make use of numpy's array instead of a list.
Step44: <a name="the-batch-dimension"></a>
Step45: We could also use the numpy.concatenate function, but we have to create a new dimension for each image. Numpy let's us do this by using a special variable np.newaxis
|
14,584 | <ASSISTANT_TASK:>
Python Code:
# Imports for the cells below; fourier_fa, fourier_fa_int, linfa and driver are
# project-local modules (Fourier feature construction and the linear-FA agent/driver).
import gym
import numpy as np
import scipy.integrate
from matplotlib import pyplot
import fourier_fa
import fourier_fa_int
import linfa
import driver

mc_env = gym.make("MountainCar-v0")
mc_n_weights, mc_feature_vec = fourier_fa.make_feature_vec(
np.array([mc_env.low, mc_env.high]),
n_acts=3,
order=2)
mc_experience = linfa.init(lmbda=0.9,
init_alpha=1.0,
epsi=0.1,
feature_vec=mc_feature_vec,
n_weights=mc_n_weights,
act_space=mc_env.action_space,
theta=None,
is_use_alpha_bounds=True)
mc_experience, mc_spe, mc_ape = driver.train(mc_env, linfa, mc_experience,
n_episodes=400,
max_steps=200,
is_render=False)
fig, ax1 = pyplot.subplots()
ax1.plot(mc_spe, color='b')
ax2 = ax1.twinx()
ax2.plot(mc_ape, color='r')
pyplot.show()
def mc_Q_at_x(e, x, a):
return scipy.integrate.quad(
lambda x_dot: e.feature_vec(np.array([x, x_dot]), a).dot(e.theta),
mc_env.low[1],
mc_env.high[1])
def mc_Q_fun(x):
return mc_Q_at_x(mc_experience, x, 0)
sample_xs = np.arange(mc_env.low[0], mc_env.high[0],
(mc_env.high[0] - mc_env.low[0]) / 8.0)
mc_num_Qs = np.array( map(mc_Q_fun, sample_xs) )
mc_num_Qs
mc_sym_Q_s0 = fourier_fa_int.make_sym_Q_s0(
np.array([mc_env.low, mc_env.high]),
2)
mc_sym_Qs = np.array( [mc_sym_Q_s0(mc_experience.theta, 0, s0)
for s0 in sample_xs] )
mc_sym_Qs
mc_sym_Qs - mc_num_Qs[:,0]
# Credits: http://stackoverflow.com/a/1409496/5091738
def make_integrand(feature_vec, theta, s0, n_dim):
argstr = ", ".join(["s{}".format(i) for i in xrange(1, n_dim)])
code = "def integrand({argstr}):\n" \
" return feature_vec(np.array([s0, {argstr}]), 0).dot(theta)\n" \
.format(argstr=argstr)
#print code
compiled = compile(code, "fakesource", "exec")
fakeglobals = {'feature_vec': feature_vec, 'theta': theta, 's0': s0,
'np': np}
fakelocals = {}
eval(compiled, fakeglobals, fakelocals)
return fakelocals['integrand']
print make_integrand(None, None, None, 4)
for order in xrange(1,3):
for n_dim in xrange(2, 4):
print "\norder {} dims {}".format(order, n_dim)
min_max = np.array([np.zeros(n_dim), 3 * np.ones(n_dim)])
n_weights, feature_vec = fourier_fa.make_feature_vec(
min_max,
n_acts=1,
order=order)
theta = np.cos(np.arange(0, 2*np.pi, 2*np.pi/n_weights))
sample_xs = np.arange(0, 3, 0.3)
def num_Q_at_x(s0):
integrand = make_integrand(feature_vec, theta, s0, n_dim)
return scipy.integrate.nquad(integrand, min_max.T[1:])
num_Qs = np.array( map(num_Q_at_x, sample_xs) )
#print num_Qs
sym_Q_at_x = fourier_fa_int.make_sym_Q_s0(min_max, order)
sym_Qs = np.array( [sym_Q_at_x(theta, 0, s0) for s0 in sample_xs] )
#print sym_Qs
print sym_Qs / num_Qs[:,0]
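# Optional sketch (added): the observed ratio appears to be 1 / n**(n_dim - 1) with n = 3
# here, i.e. the volume of the integration box over the remaining state dimensions.
for n_dim in xrange(2, 4):
    print n_dim, 1.0 / 3 ** (n_dim - 1)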
np.arange(0, 1, 10)
import sympy as sp
a, b, x, f = sp.symbols("a b x f")
b_int = sp.Integral(1, (x, a, b))
sp.init_printing()
u_int = sp.Integral((1-a)/(b-a), (x, 0, 1))
u_int
(b_int / u_int).simplify()
b_int.subs([(a,0), (b,2)]).doit()
u_int.subs([(a,0), (b,2)]).doit()
(u_int.doit()*b).simplify()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's try some arbitrary thetas
Step2: If the bounds of the states are [0, n], the ratio between symbolic and numeric results is $1/n^{n_{dim}-1}$. Or this is at least what I think I see.
|
14,585 | <ASSISTANT_TASK:>
Python Code:
get_ipython().magic('load_ext cellevents')
get_ipython().magic('load_ext autoreload')
get_ipython().magic('autoreload 2')
from logcon import log
from xdrive import aws, server, apps
from xdrive.drive import Drive
import fabric.api as fab
from fabric.state import connections
apps.setdebug()
# create a key
import os
keyfile = os.path.join(os.path.expanduser("~"), ".aws/key.pem")
try:
key = aws.ec2.create_key_pair(KeyName="key")
with open(keyfile, "w") as f:
f.write(key.key_material)
except Exception as e:
log.warning(e)
# create a security group
try:
sec = aws.ec2.create_security_group(GroupName="simon",
Description="wordpress, jupyter, ssh")
sec.authorize_ingress(
IpPermissions=[dict(IpProtocol='tcp', FromPort=80, ToPort=80),
dict(IpProtocol='tcp', FromPort=443, ToPort=443),
dict(IpProtocol='tcp', FromPort=8888, ToPort=8888),
dict(IpProtocol='tcp', FromPort=22, ToPort=22)])
except Exception as e:
log.warning(e)
server.create("kate", itype="free", drive="fastai", drivesize=15)
apps.run_fastai()
fab.run("docker rm -f fastai")
server.terminate("kate")
server.create("sarah", itype="gpu", spotprice=.3, drive="fastai")
apps.run_fastai()
#apps.start_fastai()
server.terminate("sarah")
instance = server.create("sm")
aws.associate_address("sm")
server.wait_ssh()
apps.install_docker()
fab.sudo("service docker start")
apps.install_wordpress()
xdrive = Drive("fastai")
xdrive.connect("sm")
xdrive.disconnect()
# get a resource by name
aws.get("sm")
# get all resources (instances, volumes, snapshots)
aws.get(unique=False)
# show instances used
aws.get_instances()
# show python tasks running in containers
fab.env.host_string=aws.get("sm").public_ip_address
server.get_tasks("python")
# install python app in a container including config files from laptop
apps.install_python("meetups", configs=[".meetups.yaml", ".gmail.yaml"])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuration
Step2: Setup programs and data using a free instance
Step3: Download stuff via ssh
Step4: All of the setup time so far has used free instances and free storage. Next step is to delete the container and terminate the instance. We will create a new container on the GPU.
Step5: Work with the programs and data using a GPU
Step6: If this is the first time running on GPU then run_fastai() creates a new container. Subsequently start_fastai() starts the existing container which retains all the settings from the last run. Once the notebook is available the ip address will be in the clipboard so you just ctrl-v into the browser address bar. Again, wait for the cell to complete as it will tell you when the notebook is available which can take a minute or so.
Step7: When you have finished working then call server.terminate("sarah"). This will save the xdrive as a snapshot including all data and programs. When a spot instance is outbid then AWS sends a 2 minute termination notice. This will be captured and result in a call to server.terminate.
Step8: Create more servers
Step9: Work with an existing xdrive
Step10: Utilities
|
14,586 | <ASSISTANT_TASK:>
Python Code:
# imports
import numpy as np
import pandas as pd
import os
import cv2
import matplotlib.pyplot as plt
import skimage.feature
from tqdm import tqdm # nice progress bars
%matplotlib inline
# constants
TRAIN_PATH = '../data/Train/'
DOTTED_PATH = '../data/TrainDotted/'
OUT_PATH = '../output/'
ALL_FILE_NAMES = os.listdir(DOTTED_PATH) # all our training file names
ALL_FILE_NAMES = sorted(ALL_FILE_NAMES, key = lambda item: int(item.partition('.')[0]))
MISMATCHED_TRAIN = [3, 7, 9, 21, 30, 34, 71, 81, 89, 97, 151, 184, 215, 234, 242, 268, 290, 311, 331, 344, 380, 384, 406, 421, 469, 475, 490, 499, 507, 530, 531, 605, 607, 614, 621, 638, 644, 687, 712, 721, 767, 779, 781, 794, 800, 811, 839, 840, 869, 882, 901, 903, 905, 909, 913, 927, 946]
FILE_NAMES = []
for filename in ALL_FILE_NAMES:
if int(filename.partition('.')[0]) in MISMATCHED_TRAIN:
pass
else:
FILE_NAMES.append(filename) # create FILE_NAMES without MISMATCHED_TRAIN images
count_df = pd.DataFrame(index = FILE_NAMES, columns = ["adult_males", "subadult_males", "adult_females", "juveniles", "pups"]).fillna(0)
coordinates_df = pd.DataFrame(columns = ["filename", "y_coord", "x_coord", "category"]).fillna(0)
for filename in tqdm(FILE_NAMES):
img_dotted = cv2.imread(DOTTED_PATH + filename)
img_train = cv2.imread(TRAIN_PATH + filename)
img_diff = cv2.absdiff(img_train , img_dotted)
mask_1 = cv2.cvtColor(img_dotted, cv2.COLOR_BGR2GRAY)
mask_1[mask_1 < 20] = 0
mask_1[mask_1 > 0] = 255
mask_2 = cv2.cvtColor(img_train, cv2.COLOR_BGR2GRAY)
mask_2[mask_2 < 20] = 0
mask_2[mask_2 > 0] = 255
img_diff = cv2.bitwise_or(img_diff, img_diff, mask=mask_1)
img_diff = cv2.bitwise_or(img_diff, img_diff, mask=mask_2)
img_diff = cv2.cvtColor(img_diff, cv2.COLOR_BGR2GRAY)
blobs = skimage.feature.blob_log(img_diff, min_sigma=3, max_sigma=4, num_sigma=1, threshold=0.02)
for blob in blobs:
y, x, s = blob
b,g,r = img_dotted[int(y)][int(x)][:]
if r > 204 and g < 29 and b < 26: # RED
count_df["adult_males"][filename] += 1
new_row = pd.Series([filename, int(y), int(x), "adult_males"], index=["filename", "y_coord", "x_coord", "category"])
coordinates_df = coordinates_df.append(new_row, ignore_index=True)
elif r > 220 and g < 25 and b > 204: # MAGENTA
count_df["subadult_males"][filename] += 1
new_row = pd.Series([filename, int(y), int(x), "subadult_males"], index=["filename", "y_coord", "x_coord", "category"])
coordinates_df = coordinates_df.append(new_row, ignore_index=True)
elif 6 < r < 64 and 156 < g < 199 and b < 52: # GREEN
count_df["pups"][filename] += 1
new_row = pd.Series([filename, int(y), int(x), "pups"], index=["filename", "y_coord", "x_coord", "category"])
coordinates_df = coordinates_df.append(new_row, ignore_index=True)
elif r < 78 and 31 < g < 85 and 124 < b < 221: # BLUE
count_df["juveniles"][filename] += 1
new_row = pd.Series([filename, int(y), int(x), "juveniles"], index=["filename", "y_coord", "x_coord", "category"])
coordinates_df = coordinates_df.append(new_row, ignore_index=True)
elif 59 < r < 115 and 19 < g < 80 and b < 49: # BROWN
count_df["adult_females"][filename] += 1
new_row = pd.Series([filename, int(y), int(x), "adult_females"], index=["filename", "y_coord", "x_coord", "category"])
coordinates_df = coordinates_df.append(new_row, ignore_index=True)
count_df.to_csv(OUT_PATH + 'initial_count.csv')
coordinates_df.to_csv(OUT_PATH + 'initial_coordinates.csv')
def report_error(count_file):
# checking that the generated "initial_count.csv" matches "train.csv" true sea lion numbers
count_df = pd.read_csv(OUT_PATH + count_file, index_col=0)
true_count_df = pd.read_csv(TRAIN_PATH + 'train.csv')
categories = ["adult_males", "subadult_males", "adult_females", "juveniles", "pups"]
wrong_files_dict = {}
for filename, row in count_df.iterrows():
train_id = int(filename.partition('.')[0])
wrong_list = []
for category in categories:
predicted_val = int(row[category])
true_val = int(true_count_df[category][train_id])
if predicted_val != true_val:
wrong_list.append([category, predicted_val, true_val])
if len(wrong_list) != 0:
wrong_files_dict[int(filename.partition('.')[0])] = wrong_list
wrong_files_list = list(wrong_files_dict.keys())
wrong_files_list = sorted(wrong_files_list, key=int)
for img_id in wrong_files_list:
filename = str(img_id) + '.jpg'
wrong_categories = wrong_files_dict[img_id]
print(filename)
for item in wrong_categories:
category = item[0]
predicted_val = item[1]
true_val = item[2]
print(' ' + category + ': predicted=' + str(predicted_val) + ', True=' + str(true_val))
report_error('initial_count.csv')
def graph_coord_circles(FILE_NAMES, coord_file):
coordinates_df = pd.read_csv(OUT_PATH + coord_file)
for filename in FILE_NAMES:
new_df = coordinates_df.loc[coordinates_df['filename'] == filename]
dotted_img = cv2.imread(DOTTED_PATH + filename)
for index, row in new_df.iterrows():
if row['category'] == 'adult_males':
cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (0,0,255), 2)
elif row['category'] == 'subadult_males':
cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (250,10,250), 2)
elif row['category'] == 'pups':
cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (20,180,35), 2)
elif row['category'] == 'juveniles':
cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (180,60,30), 2)
elif row['category'] == 'adult_females':
cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (0,42,84), 2)
cv2.imwrite(OUT_PATH + str(filename.partition('.')[0]) + '_marked.jpg', dotted_img)
# uncomment the line below and run this cell to generate marked images for all the training files
# graph_coord_circles(FILE_NAMES, 'initial_coordinates.csv')
# first load in the data from initial_coordinates.csv
correct_coordinates_df = pd.read_csv(OUT_PATH + 'initial_coordinates.csv', index_col=0)
# getting list of good image ids
IMG_IDS = []
for filename in FILE_NAMES:
IMG_IDS.append(int(filename.partition('.')[0]))
# function to apply changes, and get correct coordinates and counts
def apply_all_changes():
    changes_df = pd.read_csv('./changes.csv', index_col='img_id')
    # start from the coordinates extracted above and apply the manual corrections
    new_coord_df = correct_coordinates_df.copy()
    # getting all image ids
    img_ids = list(changes_df.index)
for img_id in img_ids:
# first change new_coord_df
filename = str(img_id) + '.jpg'
mini_changes_df = changes_df.ix[int(img_id)] # only 1 row
coord_add_list = ast.literal_eval(mini_changes_df[0])
coord_remove_list = ast.literal_eval(mini_changes_df[1])
for coord_add in coord_add_list:
if len(coord_add) == 0:
continue
y_coord = int(coord_add[0])
x_coord = int(coord_add[1])
category = coord_add[2]
# changing new_coord_df to add coordinate
new_row = pd.Series([filename, y_coord, x_coord, category], index=["filename", "y_coord", "x_coord", "category"])
new_coord_df = new_coord_df.append(new_row, ignore_index=True)
for coord_remove in coord_remove_list:
if len(coord_remove) == 0:
continue
y_coord = coord_remove[0]
x_coord = coord_remove[1]
category = coord_remove[2]
# changing new_coord_df to remove coordinate
mask = (new_coord_df['filename'] == filename) & (new_coord_df['y_coord'] == y_coord) & (new_coord_df['x_coord'] == x_coord) & (new_coord_df['category'] == category)
new_coord_df= new_coord_df[~mask]
new_coord_df.to_csv(OUT_PATH + 'correct_coordinates.csv') # save correct coordinates
# next create a new file with correct counts of sea lions
new_counts_df = pd.DataFrame(index = IMG_IDS, columns = ["adult_males", "subadult_males", "adult_females", "juveniles", "pups"]).fillna(0)
for row in new_coord_df.iterrows():
filename = row[1]['filename']
file_id = int(filename.partition('.')[0])
category = row[1]['category']
new_counts_df[category][file_id] +=1
new_counts_df.to_csv(OUT_PATH + 'correct_train.csv',index_label='train_id')
apply_all_changes()
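# For reference: apply_all_changes() expects changes.csv to have an img_id index
# plus coord_add / coord_remove columns holding string-encoded lists of
# [y_coord, x_coord, category] triples (parsed with ast.literal_eval). A purely
# illustrative row: 42,"[[1203, 4150, 'pups']]","[[880, 2310, 'juveniles']]"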
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Because some images in the training set are mismatched and will not work for this method (see datacanary's post), I removed those images from the full list of training files.
Step2: Now I created two pandas dataframes that will later be saved to csv files. One of them (count_df), will record the number of each category of sea lion the method was able to discover in the image.
Step3: The other dataframe (coordinates_df), will record the coordinates of every discovered sea lion (y_coord, x_coord), the image the sea lion is from (filename), and the category of the sea lion.
Step4: Then I iterated over all the training files, and extracted the y_coord, x_coord, and category according to Radu's Method, and saved the count_df dataframe to initial_count.csv, and the coordinates_df dataframe to initial_coordinates.csv
Step5: Generating an Error Report
Step6: I noticed that there were many images with discrepancies between our generated counts and the train.csv counts. The only way to verify whether the generated counts or the train.csv counts were correct was to manually check the coordinates for the images with discrepancies.
Step7: Manually Checking Images
Step8: My changes to the images were recorded in the changes.csv file located in this repository. In changes.csv I made lists of coordinates to add (coord_add column of csv) and coordinates to remove (coord_remove column of csv) for every single image. To apply the changes from the changes.csv, coordinates must be added and removed from initial_coordinates.csv, and the counts of sea lions in each image readjusted.
|
14,587 | <ASSISTANT_TASK:>
Python Code:
def print1(a, n):
    for i in range(0, n + 1):
        print(a[i], end=" ")
    print("")

def sort(a, n):
    # bubble the largest remaining element towards the front on each pass,
    # printing the array after every pass
    for i in range(n, 0, -1):
        for j in range(n, n - i, -1):
            if a[j] > a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
        print1(a, n)

n = 7
a = [2, 4, 3, 2, 4, 5, 3]
sort(a, n - 1)
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bubble-sort the given array into descending order, printing the array after every pass of the outer loop.
|
14,588 | <ASSISTANT_TASK:>
Python Code:
from bokeh.io import output_notebook
from bokeh.plotting import *
from matmodlab2 import *
from numpy import *
import numpy as np
from plotting_helpers import create_figure
output_notebook()
%pycat ../matmodlab2/materials/mooney_rivlin.py
from sympy import Symbol, Matrix, Rational, symbols, sqrt
lam = Symbol('lambda')
F = Matrix(3, 3, [lam, 0, 0, 0, 1/sqrt(lam), 0, 0, 0, 1/sqrt(lam)])
B = Matrix(3, 3, F.dot(F.T))
Bsq = Matrix(3, 3, B.dot(B))
I = Matrix(3, 3, lambda i,j: 1 if i==j else 0)
I1 = B.trace()
I2 = ((B.trace()) ** 2 - Bsq.trace()) / 2
J = F.det()
X = J ** Rational(1, 3)
C1, C2, D1 = symbols('C10 C01 D1')
I1B = I1 / X ** 2
I2B = I2 / X ** 4
S = 2 / J * (1 / X ** 2 * (C1 + I1B * C2) * B - 1 / X ** 4 * C2 * Bsq) \
+ (2 / D1 * (J - 1) - 2 * (C1 * I1B + 2 * C2 * I2B) / 3) * I
(S[0,0] - S[1,1]).simplify()
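# With this isochoric F (det F = 1) the terms proportional to the identity drop
# out of the difference, and the simplified S_xx - S_yy is the analytic uniaxial
# stress re-used below: 2*C01*lam - 2*C01/lam**2 + 2*C10*lam**2 - 2*C10/lam.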
# Hyperelastic parameters, D1 set to a large number to force incompressibility
parameters = {'D1': 1.e12, 'C10': 1e6, 'C01': .1e6}
# stretch to 300%
lam = linspace(.5, 3, 50)
# Set up the simulator
mps = MaterialPointSimulator('test1')
mps.material = MooneyRivlinMaterial(**parameters)
# Drive the *incompressible* material through a path of uniaxial stress by
# prescribing the deformation gradient.
Fij = lambda x: (x, 0, 0, 0, 1/sqrt(x), 0, 0, 0, 1/sqrt(x))
mps.run_step('F', Fij(lam[0]), frames=10)
mps.run_step('F', Fij(1), frames=1)
mps.run_step('F', Fij(lam[-1]), frames=20)
# plot the analytic solution and the simulation
p = create_figure(bokeh=True, x_axis_label='Stretch', y_axis_label='Stress')
C10, C01 = parameters['C10'], parameters['C01']
# analytic solution for true and engineering stress
s = 2*C01*lam - 2*C01/lam**2 + 2*C10*lam**2 - 2*C10/lam
# plot the analytic solutions
p.line(lam, s, color='blue', legend='True', line_width=2)
p.line(lam, s/lam, color='green', legend='Engineering', line_width=2)
lam_ = np.exp(mps.get('E.XX'))
ss = mps.get('S.XX') - mps.get('S.ZZ')
p.circle(lam_, ss, color='orange', legend='Simulation, True')
p.circle(lam_, ss/lam_, color='red', legend='Simulation, Engineering')
p.legend.location = 'top_left'
show(p)
# check the actual solutions
assert abs(amax(ss) - amax(s)) / amax(s) < 1e-6
assert abs(amin(ss) - amin(s)) < 1e-6
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a name='basic'></a>
Step2: <a name='verify'></a>
Step3: We now exercise the Mooney-Rivlin material model using Matmodlab
|
14,589 | <ASSISTANT_TASK:>
Python Code:
# Conformal Model, Amsterdam convention. Dorst et al. p. 361
from sympy import *
from galgebra.ga import Ga
from galgebra.mv import *
# from lt import *
# from sympy import *
cm3coords = (o,x,y,z,infty) = symbols('o 1 2 3 infty', real=True)
cm3g = '0 0 0 0 -1, 0 1 0 0 0, 0 0 1 0 0, 0 0 0 1 0, -1 0 0 0 0'
cm3 = Ga('o e_1 e_2 e_3 oo', g = cm3g, coords = cm3coords)
(eo, e1, e2, e3, eoo) = cm3.mv()
ep = eo - eoo/2 # ep^2 = +1 GACS 408
em = eo + eoo/2 # em^2 = -1
E = eo^eoo
Ga.dual_mode('Iinv+')
#cm3coords = (o,x,y,z,infty) = symbols('o x y z \infty', real=True)
#cm3 = Ga('o e_x e_y e_z \infty', g = cf3g, coords = cf3coords)
from IPython.display import display
def pt(arg): # R^3 vector --> conformal point.
if isinstance(arg,str): # Return general 3D point
v = cm3.mv(arg, 'vector') # General conformal vector
v = v + (v < eoo)*eo + (v < eo)*eoo # 3D part
v = eo + v + (v<v)*eoo/2
elif arg == 0:
v = eo
elif (arg < eoo) == 0: # Return point for 3D vector in arg
v = eo + arg + (arg<arg)*eoo/2
else: v = arg # arg already in conformal representation
return(v)
def tp(arg): # conformal point --> R^3 vector
if isinstance(arg,str): # Return general 3D vector
v = cm3.mv(arg, 'vector')
else: # Return 3D vector part of arg
v = arg
v = v + (v < eoo)*eo + (v < eo)*eoo
return(v)
def normalize(v):
if (v < eoo) == 0: # Normalize 3D vector
return(v/sqrt((v<v).scalar()))
else: # Normalize conformal vector: set eo coeff to 1.
return(-v/(v<eoo))
def scalar(arg):
return(cm3.mv(arg, 'scalar')) # Save user from typing all this
def round(*args): # args are conformal points
ans = args[0]
for i in range(1,len(args)):
ans = ans ^ args[i]
return(ans)
def flat(*args): # args are conformal points
return(round(*args) ^ eoo)
def line(p,q): # If q is 3D, line thru p parallel to q returned
return(flat(p,q))
def plane(p,q,r):
return(flat(p,q,r))
def circle(p,q,r):
return(round(p,q,r))
def sphere(p,q,r,s):
return(round(p,q,r,s))
def dualLine(p, B): # thru point p, orthogonal to 3D bivector B
return(p < (B*eoo)) # A vector
def dualPlane(p,n): # n: GA^3 normal vector
m = normalize(n)
    if isinstance(p, (int, float)):  # 'long' removed: it does not exist in Python 3
p = scalar(p) # Python scalar -> GAlgebra scalar
if (p!=0) and ((p<p)==0): # p: point on plane.
return(p < (m^eoo)) # a vector
else: # p: distance to origin.
return(m + (p*eoo)) # a vector
def dualSphere(c,rho): # c:center.
    if isinstance(rho, (int, float)):  # 'long' removed: it does not exist in Python 3
rho = scalar(rho) # Python scalar -> GAlgebra scalar
if (rho!=0) and ((rho<rho)==0): # rho: point on sphere
return(rho < (c ^ eoo))
else: # rho: radius.
return(c - (rho*rho*eoo)/2) # A vector
def dualCircle(c,rho,n): # c:center. rho:radius. n:normal vector
ds = dualSphere(c,rho)
dp = dualPlane(c,n)
return(ds^dp) # A BIvector
def translate(object,a3): # a3: 3D vector
return(1 - a3*eoo/2)*object*(1 + a3*eoo/2)
def rotate(object,itheta):
return(exp(-itheta/2)*object*exp(itheta/2))
def invert(p, norm=False): # GACS 513
ans = -(eo - eoo/2)*p*(eo - eoo/2)
if norm:
ans = normalize(ans)
return(ans)
# Reflect point p in hyperplane with normal 3D vector n.
def reflect(p,n):
    # norm2() was never defined in this notebook; use the same squared-norm
    # expression as normalize() above.
    return(-n*p*(n/(n<n).scalar()))
# Can be considerably simplified: A Covariant Approach ..., 16
def dilate(p, alpha, norm = False): # Dilate by alpha (> 0)
ans = exp(E*ln(alpha)/2)*p*exp(-E*ln(alpha)/2)
if norm:
ans = normalize(ans)
return(ans)
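# A quick usage sketch (not part of the original notebook), using only the
# helpers defined above: two conformal points, the line through them, and a
# rigid translation of that line by the 3D vector e_1.
P = pt(e1)                   # conformal point at (1, 0, 0)
Q = pt(e2 + e3)              # conformal point at (0, 1, 1)
L = line(P, Q)               # direct representation: P ^ Q ^ oo
L_shifted = translate(L, e1) # the translated line, still a 3-blade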
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h4>* Create direct representations of geometric objects *</h4>
Step2: <h4>* Create dual representations of geometric objects *</h4>
Step3: <h4>* Geometric operations *</h4>
|
14,590 | <ASSISTANT_TASK:>
Python Code:
audience1_name = "" #@param {type:"string"}
audience1_file_location = "" #@param {type:"string"}
audience1_size = 0#@param {type:"integer"}
audience2_name = "" #@param {type:"string"}
audience2_file_location = "" #@param {type:"string"}
audience2_size = 0 #@param {type:"integer"}
audience3_name = "" #@param {type:"string"}
audience3_file_location = "" #@param {type:"string"}
audience3_size = 0#@param {type:"integer"}
isUsingGDrive = False #@param {type:"boolean"}
import IPython
import plotly
import plotly.offline as py
import plotly.graph_objs as go
import math
import json
import numpy as np
import pandas as pd
import re
from scipy import spatial
from scipy.spatial import distance
from sklearn.cluster import KMeans
from google.colab import drive
from google.colab import auth
from sklearn import preprocessing
from sklearn.preprocessing import scale
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.preprocessing import MinMaxScaler
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
from IPython.display import display
import matplotlib as mpl
py.init_notebook_mode(connected=False)
%matplotlib inline
py.init_notebook_mode(connected=False)
if (isUsingGDrive):
drive.mount('/gdrive')
df_1 = pd.read_csv(audience1_file_location,usecols=['Dimension','Audience','List distribution'])
df_1['List distribution'] = round(df_1['List distribution']*audience1_size)
df_2 = pd.read_csv(audience2_file_location,usecols=['Dimension','Audience','List distribution'])
df_2['List distribution'] = round(df_2['List distribution']*audience2_size)
if ((audience3_name != "") & (audience3_file_location != "") & (audience3_size > 0)):
audience3_enabled = True
df_3 = pd.read_csv(audience3_file_location,usecols=['Dimension','Audience','List distribution'])
df_3['List distribution'] = round(df_3['List distribution']*audience3_size)
else:
audience3_enabled = False
def plot3d(df, item_name_col, value_name_cols):
#add additional column if only 2 audiences presented
if len(value_name_cols) == 2:
df['no_audience'] = 0
value_name_cols.append('no_audience')
py.init_notebook_mode(connected=False)
trace_points = go.Scatter3d(
x=df[value_name_cols[0]],
y=df[value_name_cols[1]],
z=df[value_name_cols[2]],
#z=df[value_name_cols[2]] if len(value_name_cols) > 2 else 0,
text=df[item_name_col],
mode='markers',
marker=dict(
size=12,
line=dict(
color='rgb(0, 0, 0, 1)',
width=0.5
),
color=df.apply(lambda x: "rgba(" + str(int(x[value_name_cols[0]]*255))
+ ',' + str(int(x[value_name_cols[1]]*255))
+ ',' + str(int(x[value_name_cols[2]]*255)) + ',1)', axis=1),
opacity=1
)
)
trace_c1 = go.Scatter3d(
x=[1],
y=[0],
z=[0],
text=value_name_cols[0],
mode='text+markers',
marker=dict(
size=120,
line=dict(
color='rgb(255, 0, 0, 0.5)',
width=3
),
color='rgb(255, 0, 0, 0.5)',#'rgba(217, 217, 217, 0.14)
opacity=.5,
)
)
trace_c2 = go.Scatter3d(
x=[0],
y=[1],
z=[0],
text=value_name_cols[1],
mode='text+markers',
marker=dict(
size=120,
line=dict(
color='rgb(0, 255, 0, 0.5)',
width=3
),
color='rgb(0, 255, 0, 0.5)',#'rgba(217, 217, 217, 0.14)
opacity=.5,
)
)
trace_c3 = go.Scatter3d(
x=[0],
y=[0],
z=[1],
text=value_name_cols[2],
mode='text+markers',
marker=dict(
size=120,
line=dict(
color='rgb(0, 0, 255, 0.5)',
width=3
),
color='rgb(0, 0, 255, 0.5)',#'rgba(217, 217, 217, 0.14)
opacity=.5,
)
)
data = [trace_points, trace_c1,trace_c2,trace_c3]
layout = go.Layout(
margin=dict(
l=0,
r=0,
b=0,
t=0
)
)
fig = go.Figure(data=data, layout=layout)
#py.iplot(fig, filename='simple-3d-scatter')
py.iplot(data)
# Plot and embed in ipython notebook!
#py.iplot(data, filename='basic-scatter')
def configure_plotly_browser_state():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',
},
});
</script>
'''))
def scalarToSigmod(scalar):#0-1 input
x = (scalar-.5)*8
return 1 / (1 + math.exp(-x))
def scalarToTanh(scalar):
x = (scalar-.5)*6
return (math.tanh(x)+1)/2
def calc_tfidf(df, label_col_name, transformation='tanh'):
transformer = TfidfTransformer(smooth_idf=True, norm='l1', use_idf=False)
X = df.copy()
y = X[label_col_name]
X = X.drop([label_col_name], axis=1)
tfidf = transformer.fit_transform(X)
#create pd with results
results = pd.DataFrame.from_records(tfidf.toarray() , columns=list(X.columns.values))
#transpose
results_transposed = results.T.reset_index()
results_transposed.columns = ["COMPARED_USERLIST_FULL_NAME"] + list(y)
results_transposed
#scale to 0-1
scaler = MinMaxScaler()
results_transposed[list(y)] = scaler.fit_transform(results_transposed[list(y)])
for col in list(y):
if transformation == 'sig':
results_transposed[col] = results_transposed.apply(lambda x: scalarToSigmod(x[col]), axis=1)
elif transformation == 'tanh':
results_transposed[col] = results_transposed.apply(lambda x: scalarToTanh(x[col]), axis=1)
return results_transposed
def process_report(report):
data=[]
columnHeader = report.get('columnHeader', {})
dimensionHeaders = columnHeader.get('dimensions', [])
metricHeaders = columnHeader.get('metricHeader', {}).get('metricHeaderEntries', [])
metricHeaders = [header['name'] for header in metricHeaders]
df_headers = dimensionHeaders + metricHeaders
for row in report['data']['rows']:
d = row['dimensions']
m = row['metrics'][0]['values']
data.append(d+m)
df = pd.DataFrame(data, columns=df_headers)
pivot = pd.pivot_table(df,
index=[df.columns[0]],
columns=['ga:segment'],
aggfunc='sum').T
df = pd.DataFrame(pivot.fillna(0).to_records())
return df[df.columns[1:]]
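# process_report() above is only needed when the same audience splits are pulled
# straight from the Google Analytics Reporting API (v4) instead of CSV exports.
# A rough sketch of how it could be fed (the key file, view id and report request
# body are placeholders, not values from this notebook):
# credentials = ServiceAccountCredentials.from_json_keyfile_name(
#     'service_account.json', ['https://www.googleapis.com/auth/analytics.readonly'])
# analytics = build('analyticsreporting', 'v4', credentials=credentials)
# response = analytics.reports().batchGet(body={'reportRequests': [...]}).execute()
# df = process_report(response['reports'][0])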
df_1['Segmento'] = audience1_name
df_2['Segmento'] = audience2_name
if (audience3_enabled):
df_3['Segmento'] = audience3_name
df_list = [df_1,df_2,df_3]
else:
df_list = [df_1,df_2]
df = pd.concat(df_list)
df = df.loc[df['Dimension'] != 'City']
df = df.loc[df['Dimension'] != 'Country']
df['Audience'] = df['Dimension'] + ' | ' + df['Audience']
df.drop(['Dimension'],axis=1,inplace=True)
df_pivot = pd.pivot_table(df, index=['Segmento'], columns=['Audience'],aggfunc='sum').fillna(0)
df_pivot.columns = df_pivot.columns.droplevel(level=0)
df_pivot.reset_index(level=[0],inplace=True)
cmi_df = calc_tfidf(df_pivot,'Segmento')
cmi_df.head()
def plot_3d(cmi_df):
configure_plotly_browser_state()
y = list(cmi_df.drop(['COMPARED_USERLIST_FULL_NAME'],axis=1).columns)
plot3d(cmi_df,'COMPARED_USERLIST_FULL_NAME',list(y))
def print_ordered_list(cmi_df):
vecs = [[1,0,0], [0,1,0], [0,0,1]]
segments = list(cmi_df.columns[1:])
cmi_df['vector'] = cmi_df[[*segments]].values.tolist()
for i in range(len(segments)):
data = []
col = 'distance_{}'.format(segments[i])
for row in cmi_df.iterrows():
euc = distance.euclidean(row[1]['vector'], vecs[i])
data.append(euc)
cmi_df[col] = data
for col in cmi_df.columns[-3:]:
display(cmi_df[['COMPARED_USERLIST_FULL_NAME', col]].sort_values(by=col, ascending=True))
plot_3d(cmi_df)
print_ordered_list(cmi_df)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Libs and configure Plotly
Step2: Mount Drive and read the Customer Match Insights CSVs
Step3: Define Plot Function
Step4: Define TF-IDF Function
Step5: Define GA API reporting functions
Step6: Run TF-IDF
Step7: Plot the results
|
14,591 | <ASSISTANT_TASK:>
Python Code:
# Figure 1
Image(url= "http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png", width=200, height=200)
from __future__ import print_function # Use a function definition from future version (say 3.x from 2.7 interpreter)
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import cntk as C
%matplotlib inline
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
# Test for CNTK version
if not C.__version__ == "2.0":
raise Exception("this lab is designed to work with 2.0. Current Version: " + C.__version__)
# Ensure we always get the same amount of randomness
np.random.seed(0)
C.cntk_py.set_fixed_random_seed(1)
C.cntk_py.force_deterministic_algorithms()
# Define the data dimensions
input_dim = 784
num_output_classes = 10
# Read a CTF formatted text (as mentioned above) using the CTF deserializer from a file
def create_reader(path, is_training, input_dim, num_label_classes):
return C.io.MinibatchSource(C.io.CTFDeserializer(path, C.io.StreamDefs(
labels = C.io.StreamDef(field='labels', shape=num_label_classes, is_sparse=False),
features = C.io.StreamDef(field='features', shape=input_dim, is_sparse=False)
)), randomize = is_training, max_sweeps = C.io.INFINITELY_REPEAT if is_training else 1)
# Ensure the training and test data is generated and available for this tutorial.
# We search in two locations in the toolkit for the cached MNIST data set.
data_found = False
for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"),
os.path.join("data", "MNIST")]:
train_file = os.path.join(data_dir, "Train-28x28_cntk_text.txt")
test_file = os.path.join(data_dir, "Test-28x28_cntk_text.txt")
if os.path.isfile(train_file) and os.path.isfile(test_file):
data_found = True
break
if not data_found:
raise ValueError("Please generate the data by completing Lab1_MNIST_DataLoader")
print("Data directory is {0}".format(data_dir))
num_hidden_layers = 2
hidden_layers_dim = 400
#hidden_layers_dim = 50
input = C.input_variable(input_dim)
label = C.input_variable(num_output_classes)
def create_model(features):
with C.layers.default_options(init = C.layers.glorot_uniform(), activation = C.ops.relu):
#with C.layers.default_options(init = C.layers.glorot_uniform(), activation = C.ops.sigmoid):
h = features
for _ in range(num_hidden_layers):
h = C.layers.Dense(hidden_layers_dim)(h)
r = C.layers.Dense(num_output_classes, activation = None)(h)
#r = C.layers.Dense(num_output_classes, activation = C.ops.sigmoid)(h)
return r
z = create_model(input)
# Scale the input to 0-1 range by dividing each pixel by 255.
z = create_model(input/255.0)
loss = C.cross_entropy_with_softmax(z, label)
label_error = C.classification_error(z, label)
# Instantiate the trainer object to drive the model training
learning_rate = 0.2
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, label_error), [learner])
# Define a utility function to compute the moving average sum.
# A more efficient implementation is possible with np.cumsum() function
def moving_average(a, w=5):
if len(a) < w:
return a[:] # Need to send a copy of the array
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
# Defines a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss = "NA"
eval_error = "NA"
if mb%frequency == 0:
training_loss = trainer.previous_minibatch_loss_average
eval_error = trainer.previous_minibatch_evaluation_average
if verbose:
print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}%".format(mb, training_loss, eval_error*100))
return mb, training_loss, eval_error
# Initialize the parameters for the trainer
minibatch_size = 64
#minibatch_size = 512
num_samples_per_sweep = 60000
num_sweeps_to_train_with = 10
num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size
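# i.e. 60000 samples * 10 sweeps / 64 samples per minibatch = 9375 minibatches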
# Create the reader to training data set
reader_train = create_reader(train_file, True, input_dim, num_output_classes)
# Map the data streams to the input and labels.
input_map = {
label : reader_train.streams.labels,
input : reader_train.streams.features
}
# Run the trainer on and perform model training
training_progress_output_freq = 500
plotdata = {"batchsize":[], "loss":[], "error":[]}
for i in range(0, int(num_minibatches_to_train)):
# Read a mini batch from the training data file
data = reader_train.next_minibatch(minibatch_size, input_map = input_map)
trainer.train_minibatch(data)
batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1)
if not (loss == "NA" or error =="NA"):
plotdata["batchsize"].append(batchsize)
plotdata["loss"].append(loss)
plotdata["error"].append(error)
# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])
# Plot the training loss and the training error
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss')
plt.show()
plt.subplot(212)
plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error')
plt.show()
# Read the training data
reader_test = create_reader(test_file, False, input_dim, num_output_classes)
test_input_map = {
label : reader_test.streams.labels,
input : reader_test.streams.features,
}
# Test data for trained model
test_minibatch_size = 512
num_samples = 10000
num_minibatches_to_test = num_samples // test_minibatch_size
test_result = 0.0
for i in range(num_minibatches_to_test):
# We are loading test data in batches specified by test_minibatch_size
# Each data point in the minibatch is a MNIST digit image of 784 dimensions
# with one pixel per dimension that we will encode / decode with the
# trained model.
data = reader_test.next_minibatch(test_minibatch_size,
input_map = test_input_map)
eval_error = trainer.test_minibatch(data)
test_result = test_result + eval_error
# Average of evaluation errors of all test minibatches
print("Average test error: {0:.2f}%".format(test_result*100 / num_minibatches_to_test))
out = C.softmax(z)
# Read the data for evaluation
reader_eval = create_reader(test_file, False, input_dim, num_output_classes)
eval_minibatch_size = 25
eval_input_map = {input: reader_eval.streams.features}
data = reader_test.next_minibatch(eval_minibatch_size, input_map = test_input_map)
img_label = data[label].asarray()
img_data = data[input].asarray()
predicted_label_prob = [out.eval(img_data[i]) for i in range(len(img_data))]
# Find the index with the maximum value for both predicted as well as the ground truth
pred = [np.argmax(predicted_label_prob[i]) for i in range(len(predicted_label_prob))]
gtlabel = [np.argmax(img_label[i]) for i in range(len(img_label))]
print("Label :", gtlabel[:25])
print("Predicted:", pred)
# Plot a random image
sample_number = 5
plt.imshow(img_data[sample_number].reshape(28,28), cmap="gray_r")
plt.axis('off')
img_gt, img_pred = gtlabel[sample_number], pred[sample_number]
print("Image Label: ", img_pred)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Goal
Step2: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
Step3: Data reading
Step4: <a id='#Model Creation'></a>
Step5: Network input and output
Step6: Multi-layer Perceptron setup
Step7: z will be used to represent the output of a network.
Step8: Training
Step9: Evaluation
Step10: Configure training
Step11: First let us create some helper functions that will be needed to visualize different functions associated with training.
Step12: <a id='#Run the trainer'></a>
Step13: Let us plot the errors over the different training minibatches. Note that as we iterate the training loss decreases though we do see some intermediate bumps.
Step14: Evaluation / Testing
Step15: Note, this error is very comparable to our training error indicating that our model has good "out of sample" error a.k.a. generalization error. This implies that our model can very effectively deal with previously unseen observations (during the training process). This is key to avoid the phenomenon of overfitting.
Step16: Let us test a small minibatch sample from the test data.
Step17: As you can see above, our model is much better. Do you see any mismatches?
|
14,592 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
import math
from IPython.display import HTML
HTML('../style/code_toggle.html')
import math
from matplotlib import rcParams
rcParams['text.usetex'] = True
#def trianglewave(x, T):
#
# This is a sawtooth, though
#
# return np.mod(x/T,1.)*np.logical_and(x>=0,x<=T)
def trianglewave(x, T):
    """T is the period."""
return np.abs(2.*(np.mod(x/T,1.)-0.5))-0.5
def boxcar(x,a,b,amp):
return amp*np.logical_and(x>=a,x<=b)
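# The boxcar is normalised to unit area further down (amp ~ 1/(b-a)), so
# convolving with it acts as a moving average over a window of width b-a.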
def plottriboxconv(a, b, period):
# limits of boxcar Play arround with this
# a = -0.1
# b = 0.1
# Plotting range
xrange = [-2., 2.]
# Create functions
xpoints = 1000
# Resolution element
dx = (xrange[1]-xrange[0])/float(xpoints)
x = np.linspace(xrange[0], xrange[1], xpoints)
y = boxcar(x, a, b, 1.)
# boxcar will be normalised to 1. amp = 1./(b-a) works in the limit of many points, but here we do
# numberofpixelsinbox*dx*amplitude = y.sum *dx*amplitude = 1
# to take into account numerical effects
amp = float(xpoints)/((xrange[1]-xrange[0])* y.sum())
y = boxcar(x, a, b, 1./(b-a))
ycorr = boxcar(x, a, b, amp)
z = trianglewave(x, period)
result = np.convolve(ycorr,z,'same')
result = dx*result
# Start the plot, create a figure instance and a subplot
fig = plt.figure()
ax1 = fig.add_subplot(311)
fig.tight_layout()
plt.subplots_adjust(hspace = 0.6)
# Axis ranges
ax1.axis([xrange[0]+(b-a), xrange[1]-(b-a), z.min()-0.1*(z.max()-z.min()), z.max()+0.1*(z.max()-z.min())])
# Plot a grid
ax1.grid(True)
# Insert lines at x=0 and y=0
ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
ax1.plot(x,z,'b-')
plt.title("Triangle wave", fontsize=14,color='black')
ax2 = fig.add_subplot(312, sharex=ax1)
# Axis ranges
ax2.axis([xrange[0]+(b-a), xrange[1]-(b-a), ycorr.min()-0.1*(ycorr.max()-ycorr.min()), \
ycorr.max()+0.1*(ycorr.max()-ycorr.min())])
# Plot a grid
ax2.grid(True)
# Insert lines at x=0 and y=0
ax2.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax2.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
e1 = int(math.ceil(xpoints*(a-xrange[0])/(xrange[1]-xrange[0])))
ax2.plot(x[:e1],y[:e1],'b-')
ax2.plot([a, a],[0., amp],'b--')
e2 = int(math.floor(xpoints*(b-xrange[0])/(xrange[1]-xrange[0])))
ax2.plot(x[e1:e2],y[e1:e2],'b-')
e3 = xpoints
ax2.plot(x[e2:],y[e2:],'b-')
ax2.plot([b, b],[0., amp],'b--')
plt.title("Rectangle function", fontsize=14,color='black')
ax3 = fig.add_subplot(313, sharex=ax2)
# Axis ranges: mask out border effects
rmin = result.min()
rmax = result.max()
# Just to make the result a bit more beautiful if the function is very flat
if (rmax - rmin) < 0.1:
rmin=rmin-0.1
rmax=rmax+0.1
ax3.axis([xrange[0]+(b-a), xrange[1]-(b-a), rmin-0.1*(rmax-rmin), rmax+0.1*(rmax-rmin)])
# Plot a grid
ax3.grid(True)
# Insert lines at x=0 and y=0
ax3.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax3.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
plr1 = int(xpoints*(b-a)/(xrange[1]-xrange[0]))
plr2 = int(xpoints*(1-(b-a)/(xrange[1]-xrange[0])))
ax3.plot(x[plr1:plr2],result[plr1:plr2],'b-')
plt.title("Triangle wave filtered with rectangle function", fontsize=14,color='black')
# first two arguments give the position of the rectangle, third the period of the Triangle
plottriboxconv(-0.1, 0.1, 1.0)
# <a id='math:fig:trifilt'></a><!--\label{math:fig:trifilt}-->
# first two arguments give the position of the rectangle, third the period of the Triangle
plottriboxconv(-0.5, 0.5, 1.0)
# <a id='math:fig:trifilt'></a><!--\label{math:fig:trifilt}-->
from matplotlib import rcParams
rcParams['text.usetex'] = True
def noisycosinewave(x, amplitude, T, sigma):
    """T is the period, sigma is the dispersion, amplitude the amplitude."""
return amplitude*np.cos(2.*math.pi*x/T)+np.random.normal(scale=sigma, size=x.size)
def boxcar(x,a,b,amp):
return amp*np.logical_and(x>=a,x<=b)
def plotcosboxconv(a, b, period, sigma):
# limits of boxcar Play arround with this
# a = -0.1
# b = 0.1
# Plotting range
xrange = [-2., 2.]
# Create functions
xpoints = 1000
# Resolution element
dx = (xrange[1]-xrange[0])/float(xpoints)
x = np.linspace(xrange[0], xrange[1], xpoints)
y = boxcar(x, a, b, 1.)
# boxcar will be normalised to 1. amp = 1./(b-a) works in the limit of many points, but here we do
# numberofpixelsinbox*dx*amplitude = y.sum *dx*amplitude = 1
# to take into account numerical effects
amp = float(xpoints)/((xrange[1]-xrange[0])* y.sum())
y = boxcar(x, a, b, 1./(b-a))
ycorr = boxcar(x, a, b, amp)
z = noisycosinewave(x, 1., period, sigma)
c = np.cos(2.*math.pi*x/period)
result = np.convolve(ycorr,z,'same')
result = dx*result
# Start the plot, create a figure instance and a subplot
fig = plt.figure()
ax1 = fig.add_subplot(411)
fig.tight_layout()
plt.subplots_adjust(hspace = 0.8)
# Axis ranges
ax1.axis([xrange[0]+(b-a), xrange[1]-(b-a), c.min()-0.1*(c.max()-c.min()), c.max()+0.1*(c.max()-c.min())])
# Plot a grid
ax1.grid(True)
# Insert lines at x=0 and y=0
ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
ax1.plot(x,c,'b-')
plt.title("Original function (cos)", fontsize=14,color='black')
ax1 = fig.add_subplot(412)
# Axis ranges
ax1.axis([xrange[0]+(b-a), xrange[1]-(b-a), z.min()-0.1*(z.max()-z.min()), z.max()+0.1*(z.max()-z.min())])
# Plot a grid
ax1.grid(True)
# Insert lines at x=0 and y=0
ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
ax1.plot(x,z,'b-')
plt.title("Noise added", fontsize=14,color='black')
ax2 = fig.add_subplot(413, sharex=ax1)
# Axis ranges
ax2.axis([xrange[0]+(b-a), xrange[1]-(b-a), ycorr.min()-0.1*(ycorr.max()-ycorr.min()), \
ycorr.max()+0.1*(ycorr.max()-ycorr.min())])
# Plot a grid
ax2.grid(True)
# Insert lines at x=0 and y=0
ax2.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax2.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
e1 = int(math.ceil(xpoints*(a-xrange[0])/(xrange[1]-xrange[0])))
ax2.plot(x[:e1],y[:e1],'b-')
ax2.plot([a, a],[0., amp],'b--')
e2 = int(math.floor(xpoints*(b-xrange[0])/(xrange[1]-xrange[0])))
ax2.plot(x[e1:e2],y[e1:e2],'b-')
e3 = xpoints
ax2.plot(x[e2:],y[e2:],'b-')
ax2.plot([b, b],[0., amp],'b--')
plt.title("Rectangle function", fontsize=14,color='black')
ax3 = fig.add_subplot(414, sharex=ax2)
# Axis ranges: mask out border effects
rmin = result.min()
rmax = result.max()
# Just to make the result a bit more beautiful if the function is very flat
if (rmax - rmin) < 0.1:
rmin=rmin-0.1
rmax=rmax+0.1
ax3.axis([xrange[0]+(b-a), xrange[1]-(b-a), rmin-0.1*(rmax-rmin), rmax+0.1*(rmax-rmin)])
# Plot a grid
ax3.grid(True)
# Insert lines at x=0 and y=0
ax3.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax3.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
plr1 = int(xpoints*(b-a)/(xrange[1]-xrange[0]))
plr2 = int(xpoints*(1-(b-a)/(xrange[1]-xrange[0])))
ax3.plot(x[plr1:plr2],result[plr1:plr2],'b-')
plt.title("Noisy function filtered with rectangle function", fontsize=14,color='black')
# first two arguments give the position of the rectangle, third the period of the Triangle
plotcosboxconv(-0.1, 0.1, 1.0, 2.5)
# <a id='math:fig:filtnoise'></a><!--\label{math:fig:filtnoise}-->
from matplotlib import rcParams
rcParams['text.usetex'] = True
def gausshermetian(x, amp, mu, sigma, h3, h4):
    """Gauss-Hermite line profile: amplitude amp, centre mu, width sigma, and h3, h4 skewness/kurtosis coefficients."""
y = (x-mu)/sigma
return amp*np.exp(-0.5*y**2)*(1+h3*(2*np.sqrt(2.)*y**3-3*np.sqrt(2.)*y)/np.sqrt(6.)+h4*(4*y**4-12*y**2+3)/np.sqrt(24))
#amplitude*np.cos(2.*math.pi*x/T)+np.random.normal(scale=sigma, size=x.size)
def boxcar(x,a,b,amp):
return amp*np.logical_and(x>=a,x<=b)
def plotskewedgaussobs(pos1, pos2, boxwidth, sigma, h3, h4):
# limits of boxcar Play arround with this
# a = -0.1
# b = 0.1
# Plotting range
xrange = [-2., 2.]
# Create functions
xpoints = 1000
# Resolution element
dx = (xrange[1]-xrange[0])/float(xpoints)
x = np.linspace(xrange[0], xrange[1], xpoints)
y = boxcar(x, pos1-boxwidth/2., pos1+boxwidth/2, \
1./boxwidth)+0.5*boxcar(x, pos2-boxwidth/2., pos2+boxwidth/2, 1./boxwidth)
# boxcar will be normalised to 1. amp = 1./(b-a) works in the limit of many points, but here we do
# numberofpixelsinbox*dx*amplitude = y.sum *dx*amplitude = 1
# to take into account numerical effects
z = gausshermetian(x, 1., 0., sigma, h3, h4)
result = np.convolve(y,z,'same')
result = dx*result
# Start the plot, create a figure instance and a subplot
fig = plt.figure()
ax1 = fig.add_subplot(311)
fig.tight_layout()
plt.subplots_adjust(hspace = 0.7)
# Axis ranges
ax1.axis([xrange[0]+boxwidth, xrange[1]-boxwidth, y.min()-0.1*(y.max()-y.min()), y.max()+0.1*(y.max()-y.min())])
# Plot a grid
ax1.grid(True)
# Insert lines at x=0 and y=0
ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
ax1.plot(x,y,'b-')
plt.title("Original function, impulse", fontsize=14,color='black')
ax2 = fig.add_subplot(312, sharex=ax1)
# Axis ranges
ax2.axis([xrange[0]+boxwidth, xrange[1]-boxwidth, z.min()-0.1*(z.max()-z.min()), z.max()+0.1*(z.max()-z.min())])
# Plot a grid
ax2.grid(True)
# Insert lines at x=0 and y=0
ax2.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax2.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
ax2.plot(x,z,'b-')
plt.title("Instrumental function", fontsize=14,color='black')
ax3 = fig.add_subplot(313, sharex=ax2)
# Axis ranges: mask out border effects
rmin = result.min()
rmax = result.max()
ax3.axis([xrange[0]+boxwidth, xrange[1]-boxwidth, rmin-0.1*(rmax-rmin), rmax+0.1*(rmax-rmin)])
# Plot a grid
ax3.grid(True)
# Insert lines at x=0 and y=0
ax3.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')
ax3.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')
# Plot function
plr1 = int(xpoints*boxwidth/(xrange[1]-xrange[0]))
plr2 = int(xpoints*(1-boxwidth/(xrange[1]-xrange[0])))
ax3.plot(x[plr1:plr2],result[plr1:plr2],'b-')
plt.title("Image: original function filtered with instrumental function", fontsize=14,color='black')
# first two arguments give the position of the rectangle, third the period of the Triangle
plotskewedgaussobs(0.0, 1.0, 0.01, 0.1, 0.2, 0.1)
# <a id='math:fig:stars'></a><!--\label{math:fig:stars}-->
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step4: 2.5 Convolution<a id='math
Step5: Figure 2.5.1
Step7: Figure 2.5.2
Step9: Figure 2.5.3
|
14,593 | <ASSISTANT_TASK:>
Python Code:
# Load the software ("import the module" in python jargon)
from fermi_school_like import *
# Import matplotlib for plotting
from matplotlib import pyplot as plt
import numpy as np  # np is used below; explicit in case the star import above does not provide it
# This shows the plots inline in the notebook
%matplotlib inline
# Define number of bins in our data
n_bins = 100
# Generate bins in x
bin_boundaries = np.linspace(1,100,100)
bins = Bins(bin_boundaries)
print(bins.boundaries)
# Create a generative model (just a simple line ax + b)
# Define its coefficients
a_true = 3.0
true_signal = Constant(a_true)
# Use it as generative process
data_generative_process = DataGenerativeProcess(true_signal)
# Prepare a likelihood analysis
# First we need data. Since this is an exercise, let's get the data
# from the generative model defined above.
# For each bin this generates a random number from a Poisson distribution
# with the average given by the integral of the model over the bins
data = data_generative_process.generate(bins)
# Let's plot the data
plt.bar(bins.centers, data, width = bins.widths, align='center')
plt.xlabel("x")
plt.ylabel("counts")
# Plot also the generative model
plt.plot(bins.centers, true_signal(bins.centers))
# Then we need to assume a model. In this case we know that it
# must be a line. Let's start from values close but not quite
# like the true value (which in a real analysis we wouldn't know)
a = 2.5
model = Constant(a)
# Then we need to decide a noise model
noise_model = 'Poisson'
# Now we can create a Likelihood analysis and perform
# its maximization
like = Likelihood(bins, data, model, noise_model)
# Find the Maximum Likelihood Estimate for our parameter:
a_mle = like.maximize()
# Print the MLE estimate. It should be close enough (but not exact. why?)
print("MLE estimate for a: %.3f " % a_mle)
# Now repeat the whole analysis (from data generation on) a certain number of times
# I prepared a convenience function to do that.
# This function regenerates some data from the same generative process
# used above, then it fits them and returns the list of MLE estimates
# for a (one for each iteration)
# Let's do it 1000 times
many_a_mle = like.generate_and_fit(data_generative_process, 1000)
# Now let's plot the MLEs for a
plt.plot(many_a_mle,'.')
plt.ylabel(r"a$_{MLE}$")
plt.xlabel("iteration")
# Plot the true value
plt.axhline(a_true, color='red',lw=2,linestyle='--')
# We can make an histogram of the MLE estimates
histogram = plt.hist(many_a_mle, 20)
plt.xlabel("a")
# plot the vertical like of the true value
plt.axvline(a_true, color='red', lw=2, linestyle='--', zorder=100)
# If you want an example of a biased estimator, let's use chi square
# in this case.
# As shown in the presentation, maximizing a likelihood with a
# Gaussian noise model is equivalent to minimize chi square
like.noise_model = 'gaussian'
many_a_mle_chi = like.generate_and_fit(data_generative_process, 1000)
# Now let's plot for example the maximum estimates for a
plt.plot(many_a_mle_chi,'.')
plt.ylabel(r"a$_{MLE}$")
plt.xlabel("iteration")
# Plot the true value
plt.axhline(a_true, color='red',lw=2,linestyle='--')
# Let's adjust the y range to include the points and the
# true value
plt.ylim([many_a_mle_chi.min(), a_true * 1.1])
histogram = plt.hist(many_a_mle_chi, 20)
# plot the vertical like of the true value
plt.axvline(a_true, color='red', lw=2, linestyle='--', zorder=100)
# Adjust the x range to include the true value
plt.xlim([many_a_mle_chi.min(), a_true * 1.1])
# Let's generate the model with a variable quantity of data
n_bins_to_try = [10,100,1000]
for n_bins in n_bins_to_try:
# Generate number of bins in x
# (NOTE: we are generating n_bins bins from 0 to n_bins)
bin_boundaries = np.linspace(1, n_bins, n_bins)
bins = Bins(bin_boundaries)
data = data_generative_process.generate(bins)
like = Likelihood(bins, data, model, 'poisson')
this_a_mle = like.generate_and_fit(data_generative_process, 1000)
# We can make an histogram of the MLE estimates
histogram = plt.hist(this_a_mle, 20, label='N = %i' % n_bins, histtype='step')
plt.xlabel("a")
# plot the vertical like of the true value
plt.axvline(a_true, color='red', lw=2, linestyle='--', zorder=100)
plt.legend()
# Let's prepare a grid in possible values for a,
# between the 80% and 120% of the true value
# (this is arbitrary)
a_s = np.linspace(a_true * 0.8, a_true * 1.2,300)
# Let's generate data and fit them
# Let's use a small quantity of data first
n_bins1 = 100
bin_boundaries1 = np.linspace(1,n_bins1,n_bins1)
bins1 = Bins(bin_boundaries1)
data1 = data_generative_process.generate(bins1)
like1 = Likelihood(bins1, data1, model, 'poisson')
a_mle1 = like1.maximize()
# This goes through all the a_s values and for each a compute
# L(D|a)
profile1 = like1.profile(a_s)
# Now let's do the same for a larger quantity of data
n_bins2 = 10000
bin_boundaries2 = np.linspace(1,n_bins2, n_bins2)
bins2 = Bins(bin_boundaries2)
data2 = data_generative_process.generate(bins2)
like2 = Likelihood(bins2, data2, model, 'poisson')
a_mle2 = like2.maximize()
profile2 = like2.profile(a_s)
plt.plot(a_s, profile1 - profile1.max(), label='Few data')
plt.plot(a_s, profile2 - profile2.max(), label='Many data')
plt.xlabel("a")
plt.ylabel("log. likelihood shifted to 0")
plt.ylim([-5,1])
plt.axvline(a_true, linestyle='--',lw=2, color='red')
# Let's find the values for which the likelihood changes by 0.5
# with respect to its maximum
negative_error1, positive_error1 = like1.get_errors(a_mle1)
negative_error2, positive_error2 = like2.get_errors(a_mle2)
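# (A drop of 0.5 in log-likelihood corresponds to the 68%, i.e. 1-sigma,
# confidence interval for a single parameter in the Gaussian approximation.)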
# Let's replot the profiles
plt.plot(a_s, profile1 - profile1.max(), label='Few data')
plt.plot(a_s, profile2 - profile2.max(), label='Many data')
plt.xlabel("a")
plt.ylabel("log. likelihood shifted to 0")
plt.axvline(a_true, linestyle='--',lw=2, color='red')
# This is the horizontal line at -0.5
plt.axhline(-0.5, linestyle=':')
# Now plot the errors we have found, corresponding to the intersection
# between the profiles and the horizontal line at -0.5
plt.axvline(a_mle1 + negative_error1, color='blue',linestyle=':')
plt.axvline(a_mle1 + positive_error1, color='blue',linestyle=':')
plt.axvline(a_mle2 + negative_error2, color='green',linestyle=':')
plt.axvline(a_mle2 + positive_error2, color='green',linestyle=':')
# Let's adjust the limit of the plot to zoom in
plt.ylim([-3,1])
plt.xlim([a_mle1 + negative_error1 * 1.7, a_mle1 + positive_error1 * 1.7])
# Check the coverage of the confidence intervals produced with the likelihood profile
# technique
# The fraction of simulations when the interval contains the true value should be equal to the confidence
# level
# Go back to the small dataset
n_bins = 100
bin_boundaries = np.linspace(1,100,n_bins)
bins = Bins(bin_boundaries)
data = data_generative_process.generate(bins)
like = Likelihood(bins, data, model, 'poisson')
a_mle = like1.maximize()
# Number of simulations
n_sims = 1000
a_mles, a_mle_errors = like.generate_and_fit(data_generative_process, n_sims, compute_errors=True)
# Keep track of how many times the true value is inside the
# confidence interval, and which one are inside
n_inside = 0
inside = np.zeros(n_sims,bool)
# Save MLE value, negative and positive errors for easy plotting
mle_estimates = np.zeros(n_sims)
negative_errors = np.zeros(n_sims)
positive_errors = np.zeros(n_sims)
for i in range(n_sims):
a_mle = a_mles[i]
mle_estimates[i] = a_mle
negative_error = a_mle_errors[i][0]
positive_error = a_mle_errors[i][1]
lower_boundary = a_mle + negative_error
upper_boundary = a_mle + positive_error
if lower_boundary <= a_true <= upper_boundary:
n_inside += 1
inside[i] = True
# Need to do this because errorbar expects the negative and positive
# errors in two different lists (or arrays), and the negative error
# with positive sign (!)
negative_errors[i] = negative_error * -1
positive_errors[i] = positive_error
print("Fraction of simulations for which the 68 c.l. interval actually contains the true value: %.2f" %
(n_inside / float(n_sims)))
# Plot in gray all simulations where the true value was inside
plt.errorbar(np.arange(n_sims)[inside],mle_estimates[inside],
yerr=[negative_errors[inside], positive_errors[inside]],
fmt='.', capsize=0, color='green', alpha=0.2,
label='Truth inside confidence interval')
# replot in red the iterations where the true value
# was outside the confidence interval
outside = ~inside
plt.errorbar(np.arange(n_sims)[outside], mle_estimates[outside],
yerr=[negative_errors[outside], positive_errors[outside]],
fmt='.', capsize=0, color='red',alpha=0.5,
label='Truth outside confidence interval')
plt.axhline(a_true,color='red',linestyle='--', lw=2)
plt.xlabel("iteration")
plt.ylabel("a")
plt.legend(frameon=True, numpoints=1)
plt.ylim((mle_estimates-negative_errors).min() / 1.2, (mle_estimates+positive_errors).max() * 1.2)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup our generative process
Step2: Likelihood analysis
Step3: Bias
Step4: The fact that the average the MLE value approaches the true value when the number of observations increases means that the MLE estimator is unbiased.
Step5: $\chi^2$ is biased in this case because it assumes the wrong noise model (gaussian) while our data have Poisson noise. If you were to use a larger value for a_true, then the situation will get better until the $\chi^2$ minimization would work as good as the Poisson likelihood maximization. The reason is that for large $n$ the Poisson distribution approaches the Gaussian distribution with $\sigma = \sqrt{n}$.
Step6: Errors on the Maximum Likelihood estimate
Step7: We see that the likelihood profile for the case where we have fewer data is much broader than the profile for the likelihood for the case of a larger quantity of data.
Step8: Coverage of confidence interval
|
14,594 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
ds = xr.tutorial.open_dataset("rasm").load()
ds
month_length = ds.time.dt.days_in_month
month_length
# Calculate the weights by grouping by 'time.season'.
weights = (
month_length.groupby("time.season") / month_length.groupby("time.season").sum()
)
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby("time.season").sum().values, np.ones(4))
# Calculate the weighted average
ds_weighted = (ds * weights).groupby("time.season").sum(dim="time")
ds_weighted
# only used for comparisons
ds_unweighted = ds.groupby("time.season").mean("time")
ds_diff = ds_weighted - ds_unweighted
# Quick plot to show the results
notnull = pd.notnull(ds_unweighted["Tair"][0])
fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(14, 12))
for i, season in enumerate(("DJF", "MAM", "JJA", "SON")):
ds_weighted["Tair"].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 0],
vmin=-30,
vmax=30,
cmap="Spectral_r",
add_colorbar=True,
extend="both",
)
ds_unweighted["Tair"].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 1],
vmin=-30,
vmax=30,
cmap="Spectral_r",
add_colorbar=True,
extend="both",
)
ds_diff["Tair"].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 2],
vmin=-0.1,
vmax=0.1,
cmap="RdBu_r",
add_colorbar=True,
extend="both",
)
axes[i, 0].set_ylabel(season)
axes[i, 1].set_ylabel("")
axes[i, 2].set_ylabel("")
for ax in axes.flat:
ax.axes.get_xaxis().set_ticklabels([])
ax.axes.get_yaxis().set_ticklabels([])
ax.axes.axis("tight")
ax.set_xlabel("")
axes[0, 0].set_title("Weighted by DPM")
axes[0, 1].set_title("Equal Weighting")
axes[0, 2].set_title("Difference")
plt.tight_layout()
fig.suptitle("Seasonal Surface Air Temperature", fontsize=16, y=1.02)
# Wrap it into a simple function
def season_mean(ds, calendar="standard"):
# Make a DataArray with the number of days in each month, size = len(time)
month_length = ds.time.dt.days_in_month
# Calculate the weights by grouping by 'time.season'
weights = (
month_length.groupby("time.season") / month_length.groupby("time.season").sum()
)
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby("time.season").sum().values, np.ones(4))
# Calculate the weighted average
return (ds * weights).groupby("time.season").sum(dim="time")
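# Example usage (not run here): season_mean(ds) should reproduce the ds_weighted
# computed step by step above.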
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open the Dataset
Step2: Now for the heavy lifting
|
14,595 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.__version__
nrg = pd.read_csv('energy_consumption.csv'); nrg.describe(include='all')
nrg.head()
nrg.dtypes
# https://docs.python.org/3/library/functions.html#type
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iat.html
type(nrg.iat[0,0])
nrg['date_time'] = pd.to_datetime(nrg['date_time'])
# https://stackoverflow.com/questions/29206612/difference-between-data-type-datetime64ns-and-m8ns
nrg['date_time'].dtype
nrg.head()
from timer import timeit
@timeit(repeat=3, number=10)
def convert_with_format(nrg, column_name):
return pd.to_datetime(nrg[column_name], format='%d/%m/%y %H:%M')
nrg['date_time'] = convert_with_format(nrg, 'date_time')
nrg['cost_cents'] = nrg['energy_kwh'] * 28; nrg.head()
# Create a function to apply the appropriate rate to the given hour:
def apply_rate(kwh, hour):
    """Calculates the cost of electricity for a given hour."""
if 0 <= hour < 7:
rate = 12
elif 7 <= hour <= 17:
rate = 20
elif 17 <= hour <= 24:
rate = 28
else:
# +1 for error handling:
raise ValueError(f'Invalid datetime entry: {hour}')
return rate * kwh
# Not the best way:
@timeit(repeat=2, number = 10)
def apply_rate_loop(nrg):
    """Calculate the costs using a loop, and modify `nrg` dataframe in place."""
energy_cost_list = []
for i in range(len(nrg)):
# Get electricity used and the corresponding rate.
energy_used = nrg.iloc[i]['energy_kwh']
hour = nrg.iloc[i]['date_time'].hour
energy_cost = apply_rate(energy_used, hour)
energy_cost_list.append(energy_cost)
nrg['cost_cents'] = energy_cost_list
apply_rate_loop(nrg)
@timeit(repeat=2, number=10)
def apply_rate_iterrows(nrg):
energy_cost_list = []
for index, row in nrg.iterrows():
energy_used = row['energy_kwh']
hour = row['date_time'].hour
energy_cost = apply_rate(energy_used, hour)
energy_cost_list.append(energy_cost)
nrg['cost_cents'] = energy_cost_list
apply_rate_iterrows(nrg)
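# A hedged sketch of a vectorized alternative (an assumption, not shown in the
# cells above): the same tariff logic can be expressed with boolean masks on
# the hour, avoiding any Python-level loop. Rates and boundaries follow
# apply_rate() above.
@timeit(repeat=2, number=10)
def apply_rate_vectorized(nrg):
    hours = nrg['date_time'].dt.hour
    off_peak = hours < 7
    shoulder = (hours >= 7) & (hours <= 17)
    peak = ~off_peak & ~shoulder
    nrg.loc[off_peak, 'cost_cents'] = nrg.loc[off_peak, 'energy_kwh'] * 12
    nrg.loc[shoulder, 'cost_cents'] = nrg.loc[shoulder, 'energy_kwh'] * 20
    nrg.loc[peak, 'cost_cents'] = nrg.loc[peak, 'energy_kwh'] * 28
apply_rate_vectorized(nrg)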
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The rows contain the electricity used in each hour over a one-year period.
Step2: Both pandas and Numpy use the concept of dtypes as data types, and if no arguments are specified, date_time will take on an object dtype.
Step3: This will be an issue with any column that can't neatly fit into a single data type.
Step4: If you're curious about alternatives to the code above, check out pandas.PeriodIndex, which can store ordinal values indicating regular time periods.
Step5: Time for a timing decorator
Step6: One easily overlooked detail is that the datetimes in the energy_consumption.csv file are not in ISO 8601 format.
Step8: However, our hourly costs depend on the time of day.
Step10: Now for a computationally expensive and non-Pythonic loop
Step11: You can consider the above to be an “antipattern” in pandas for several reasons.
|
14,596 | <ASSISTANT_TASK:>
Python Code:
platform = 'lendingclub'
store = pd.HDFStore(
'/Users/justinhsi/justin_tinkering/data_science/lendingclub/{0}_store.h5'.
format(platform),
append=True)
loan_info = store['train_filtered_columns']
columns = loan_info.columns.values
# Check dtypes to see which columns need one-hot encoding, which need a null-or-not indicator, and which need nothing
to_one_hot = []
to_null_or_not = []
do_nothing = []
for col in columns:
if loan_info[col].dtypes == np.dtype('O'):
print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
to_one_hot.append(col)
elif len(loan_info[col].isnull().value_counts(dropna=False)) > 1:
print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
to_null_or_not.append(col)
else:
print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
do_nothing.append(col)
standardized, eval_cols, mean_series, std_dev_series = data_prep.process_data_train(
loan_info)
regr = RandomForestRegressor(
n_estimators=20,
random_state=0,
max_features=10,
min_samples_split=20,
min_samples_leaf=10,
n_jobs=-1, )
regr.fit(standardized, eval_cols)
# dump the model
joblib.dump(regr, 'model_dump/model_0.2.0.pkl')
# joblib.dump((mean_series, std_dev_series), 'model_dump/mean_stddev.pkl')
regr.score(standardized, eval_cols)
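# Note: regr.score() evaluated on the training data gives the in-sample R^2
# (about 0.20, consistent with the 'useful_notes' entry recorded below),
# not a held-out test score.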
now = time.strftime("%Y_%m_%d_%Hh_%Mm_%Ss")
# info to stick in detailed dataframe describing each model
model_info = {'model_version': '0.2.0',
'target': 'npv_roi_10',
'weights': 'None',
'algo_model': 'RF_regr',
'hyperparams': "n_estimators=20,random_state=0,max_features=10,min_samples_split=20,min_samples_leaf=10,n_jobs=-1",
'cost_func': 'sklearn default, which I think is mse',
'useful_notes': 'R2 score of .199350 (regr.score())',
'date': now}
model_info_df = pd.DataFrame(model_info, index = ['0.2.0'])
store.open()
store.append(
'model_info',
model_info_df,
data_columns=True,
index=True,
append=True,
)
store.close()
store.open()
test = store['test_filtered_columns']
train = store['train_filtered_columns']
loan_npv_rois = store['loan_npv_rois']
default_series = test['target_strict']
results = store['results']
store.close()
train_X, train_y = data_prep.process_data_test(train)
train_y = train_y['npv_roi_10'].values
test_X, test_y = data_prep.process_data_test(test)
test_y = test_y['npv_roi_10'].values
regr = joblib.load('model_dump/model_0.2.0.pkl')
regr_version = '0.2.0'
test_yhat = regr.predict(test_X)
train_yhat = regr.predict(train_X)
test_mse = np.sum((test_yhat - test_y)**2)/len(test_y)
train_mse = np.sum((train_yhat - train_y)**2)/len(train_y)
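# Quick check (illustrative addition): compare in-sample and out-of-sample
# error to gauge how much the forest overfits.
print("train MSE: {0:.5f}, test MSE: {1:.5f}".format(train_mse, test_mse))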
def eval_models(trials, port_size, available_loans, regr, regr_version, test, loan_npv_rois,
default_series):
results = {}
pct_default = {}
test_copy = test.copy()
for trial in tqdm_notebook(np.arange(trials)):
loan_ids = np.random.choice(
test_copy.index.values, available_loans, replace=False)
loans_to_pick_from = test_copy.loc[loan_ids, :]
scores = regr.predict(loans_to_pick_from)
scores_series = pd.Series(dict(zip(loan_ids, scores)))
scores_series.sort_values(ascending=False, inplace=True)
        picks = scores_series[:port_size].index.values
results[trial] = loan_npv_rois.loc[picks, :].mean().to_dict()
pct_default[trial] = (default_series.loc[picks].sum()) / port_size
pct_default_series = pd.Series(pct_default)
results_df = pd.DataFrame(results).T
results_df['pct_def'] = pct_default_series
return results_df
# As with the baseline models, assume 3000 loans are available
# and pick the 900 highest-scoring ones
trials = 20000
port_size = 900
available_loans = 3000
model_results = eval_models(trials, port_size, available_loans, regr, regr_version, test_X, loan_npv_rois, default_series)
multi_index = []
for col in model_results.columns.values:
multi_index.append((col,regr_version))
append_results = model_results
append_results.columns = pd.MultiIndex.from_tuples(multi_index, names = ['discount_rate', 'model'])
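# `join` raises ValueError if columns for this model version already exist in
# `results`; in that case the existing '0.2.0' columns are overwritten instead.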
try:
results = results.join(append_results)
except ValueError:
results.loc[:, (slice(None), slice('0.2.0','0.2.0'))] = append_results
results.sort_index(axis=1, inplace = True)
store.open()
store['results'] = results
model_info = store['model_info']
store.close()
results.describe()
model_info
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Until I figure out a good imputation method (e.g. Bayesian PCA), just drop the columns that still contain nulls
Step2: Fit a mostly out-of-the-box random forest regressor with lightly tuned hyperparameters
Step3: Examine performance on test set
|
14,597 | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
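# Illustrative example only (not an actual MESSy entry): a group running an
# NPZD-type scheme would complete this cell as
#     DOC.set_value("NPZD")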
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
14,598 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
import os
import pickle
import warnings; warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import pystan
import scipy
import seaborn as sns; sns.set_context('notebook')
from Bio import SeqIO
import tools
# load clean, normalised, indexed data
data = pd.read_csv(os.path.join("datasets", "normalised_array_data.tab"), sep="\t") # full dataset
#data = pd.read_csv("datasets/reduced_locus_data.tab", sep="\t") # reduced dataset
#data = data[:100] # uncomment this for debugging
# useful values
locus_tags = data['locus_tag'].unique()
ntags = len(locus_tags)
arrays = data['repXtrt'].unique()
narrays = len(arrays)
# Create output directory and filename to hold the fitted model
outdir = "model_fits"
os.makedirs(outdir, exist_ok=True)
outfile = os.path.join(outdir, 'full_model_fit.pkl')
# define unpooled stan model
treatment_model = """
data {
int<lower=0> N;
int<lower=0> J;
int<lower=0> K;
int<lower=1, upper=J> tag[N];
int<lower=1, upper=K> array[N];
vector[N] t;
vector[N] x;
vector[N] y;
}
parameters {
vector[K] a;
vector[J] b;
vector[K] g;
vector[J] d;
real mu_a;
real mu_b;
real mu_g;
real mu_d;
real<lower=0> sigma;
real<lower=0,upper=100> sigma_a;
real<lower=0,upper=100> sigma_b;
real<lower=0,upper=100> sigma_g;
real<lower=0,upper=100> sigma_d;
}
transformed parameters{
vector[N] y_hat;
for (i in 1:N)
y_hat[i] = a[array[i]] + b[tag[i]] * x[i] + g[array[i]] * t[i] + d[tag[i]] * t[i] * x[i];
}
model {
sigma_a ~ uniform(0, 100);
a ~ cauchy(mu_a, sigma_a);
sigma_b ~ uniform(0, 100);
b ~ cauchy(mu_b, sigma_b);
sigma_g ~ uniform(0, 100);
g ~ cauchy(mu_g, sigma_g);
sigma_d ~ uniform(0, 100);
d ~ cauchy(mu_d, sigma_d);
y ~ normal(y_hat, sigma);
}
"""
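# Model structure (summary of the Stan code above): for observation i,
#   y_hat[i] = a[array] + b[tag]*x + g[array]*t + d[tag]*t*x
# so `a`/`g` are per-array intercept and treatment offsets, while `b` (control
# effect) and `d` (treatment x input interaction) are the per-locus-tag
# parameters of interest.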
# relate python variables to stan variables
treatment_data_dict = {'N': len(data),
'J': ntags,
'K': narrays,
'tag': data['locus_tag_index'] + 1,
'array': data['repXtrt_index'] + 1,
't': data['treatment'],
'x': data['log_input'],
'y': data['log_output']}
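# Note: the +1 on the index columns converts pandas' 0-based indices to Stan's
# 1-based array indexing.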
# (1) USE THIS CELL TO RUN THE STAN FIT - takes a few hours on my laptop
#treatment_fit = pystan.stan(model_code=treatment_model,
# data=treatment_data_dict,
# iter=1000, chains=2,
# seed=tools.SEED)
# (2) USE THIS CELL TO SAVE THE STAN FIT TO A PICKLE FILE
#unpermutedChains = treatment_fit.extract()
#unpermutedChains_df = pd.DataFrame([dict(unpermutedChains)])
#pickle.dump(unpermutedChains_df, open(outfile, 'wb'))
# (3) USE THIS CELL TO DOWNLOAD THE STAN FIT FROM ZENODO: DOI:10.5281/zenodo.269638
# The file will not be downloaded if it already exists locally.
# The file is 0.5GB in size, so may take some time to download
import urllib.request
if not os.path.isfile(outfile):
zenodo_url = "https://zenodo.org/record/269638/files/full_model_fit.pkl"
response = urllib.request.urlretrieve(zenodo_url, outfile)
# (4) USE THIS CELL TO LOAD THE STAN FIT FROM A PICKLE FILE
# Import the previously-fit model
treatment_fit = pd.read_pickle(open(outfile, 'rb'))
# Get summary data for parameter estimates
# use 'fit' for the model fit directly, and 'df'for loaded pickled data
(estimates_by_probe, estimates) = tools.extract_variable_summaries(treatment_fit, 'df',
['a', 'b', 'g', 'd'],
[arrays, locus_tags, arrays, locus_tags],
data)
# Inspect the data, one row per experiment probe
estimates_by_probe.head()
# Inspect the data, one row per locus tag
estimates.head()
# Separate estimates for Sakai and DH10B into two different dataframes
sakai_estimates = tools.split_estimates(estimates, 'sakai')
dh10b_estimates = tools.split_estimates(estimates, 'dh10b')
# Visualise median values for parameter estimates of alpha and gamma
tools.boxplot_medians(estimates_by_probe, ['a', 'g'])
# Visualise median values for parameter estimates of beta and delta
tools.boxplot_medians(estimates, ['b', 'd'])
# Visualise median values for DH10B parameter estimates
tools.boxplot_medians(dh10b_estimates, ['b', 'd'])
# Visualise median values for Sakai parameter estimates
tools.boxplot_medians(sakai_estimates, ['b', 'd'])
# Plot estimated parameters for treatment effects against control effects for Sakai
fig, ax = plt.subplots(1, 1, figsize=(6,6))
ax.scatter(sakai_estimates['d_median'], sakai_estimates['b_median'], alpha=0.2)
ax.set_xlabel('delta (median)')
ax.set_ylabel('beta (median)');
# Label locus tags with positive effects for control and treatment
sakai_estimates = tools.label_positive_effects(sakai_estimates)
# Count locus tags in each of the positive groups
counts = [sum(sakai_estimates[col]) for col in ('trt_pos', 'ctl_pos', 'combined')]
print("treatment positive: {0}\ncontrol positive: {1}\nboth: {2}".format(*counts))
sakai_chromosome = sakai_estimates.loc[sakai_estimates['locus_tag'].str.startswith('ECs')]
sakai_pOSAK = sakai_estimates.loc[sakai_estimates['locus_tag'].str.startswith('pOSAK1')]
sakai_pO157 = sakai_estimates.loc[(sakai_estimates['locus_tag'].str.startswith('pO157')) |
(sakai_estimates['locus_tag'].str.startswith('ECp'))]
# Sakai chromosome
sakai_chromosome_annotated = tools.annotate_locus_tags(sakai_chromosome,
os.path.join('..', 'data', 'Sakai',
'GCF_000008865.1_ASM886v1_genomic.gbff'))
sakai_chromosome_annotated.sort_values('startpos', inplace=True)
#sakai_chromosome_annotated.head(15)
# pOSAK1
sakai_pOSAK_annotated = tools.annotate_locus_tags(sakai_pOSAK,
os.path.join('..', 'data', 'Sakai',
'GCF_000008865.1_ASM886v1_genomic.gbff'))
sakai_pOSAK_annotated.sort_values('startpos', inplace=True)
#sakai_pOSAK_annotated.head(15)
# pECp
sakai_pO157_annotated = tools.annotate_locus_tags(sakai_pO157,
os.path.join('..', 'data', 'Sakai',
'GCF_000008865.1_ASM886v1_genomic.gbff'))
sakai_pO157_annotated.sort_values('startpos', inplace=True)
#sakai_pO157_annotated.head(15)
# Regions of interest
regions = [('S-loop 71', 'ECs1276', 'ECs1288', 1.3),
('SpLE1', 'ECs1299', 'ECs1410', 1.5),
('S-loop 225', 'ECs4325', 'ECs4341', 1.5),
('S-loop 231', 'ECs4379', 'ECs4387', 1.3)]
annotations = {k:(tools.get_lt_index(v0, sakai_chromosome_annotated),
tools.get_lt_index(v1, sakai_chromosome_annotated), v2) for
k, v0, v1, v2 in regions}
# Plot genome-wide estimates of beta for Sakai and mark values that don't include the median beta in 50% CI
beta_thresh = np.median(sakai_chromosome_annotated['b_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of beta for Sakai chromosome'
plt.title("{0} [threshold: {1:.2f}]".format(title, beta_thresh))
# Plot on the figure axes
tools.plot_parameter(sakai_chromosome_annotated, ax, 'b', beta_thresh, annotations=annotations);
# Regions of interest
regions = [('S-loop 71', 'ECs1276', 'ECs1288', 1),
('SpLE1', 'ECs1299', 'ECs1410', 1.8),
('S-loop 225', 'ECs4325', 'ECs4341', 1.8),
('S-loop 231', 'ECs4379', 'ECs4387', 1)]
annotations = {k:(tools.get_lt_index(v0, sakai_chromosome_annotated),
tools.get_lt_index(v1, sakai_chromosome_annotated), v2) for
k, v0, v1, v2 in regions}
# Plot genome-wide estimates of delta for Sakai and mark values that don't include zero in 50%CI
delta_thresh = np.median(sakai_chromosome_annotated['d_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of delta for Sakai chromosome'
plt.title("{0} [threshold: {1:.2f}]".format(title, delta_thresh))
tools.plot_parameter(sakai_chromosome_annotated, ax, 'd', delta_thresh, annotations=annotations)
# Plot genome-wide estimates of beta for Sakai and mark values that don't include the median beta in 50% CI
beta_thresh = np.median(sakai_pOSAK_annotated['b_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of beta for Sakai plasmid pOSAK'
plt.title("{0} [threshold: {1:.2f}]".format(title, beta_thresh))
tools.plot_parameter(sakai_pOSAK_annotated, ax, 'b', beta_thresh)
# Plot genome-wide estimates of delta for Sakai and mark values that don't include zero in 50% CI
delta_thresh = np.median(sakai_pOSAK_annotated['d_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of delta for Sakai plasmid pOSAK'
plt.title("{0} [threshold: {1:.2f}]".format(title, delta_thresh))
tools.plot_parameter(sakai_pOSAK_annotated, ax, 'd', delta_thresh)
# Regions of interest
regions = [('StcE', 'pO157p01', 'pO157p01', 0.98),
('etp T2SS', 'pO157p02', 'pO157p14', 1)]
annotations = {k:(tools.get_lt_index(v0, sakai_pO157_annotated),
tools.get_lt_index(v1, sakai_pO157_annotated), v2) for
k, v0, v1, v2 in regions}
# Plot genome-wide estimates of beta for Sakai and mark values that don't include the median beta in 50% CI
beta_thresh = np.median(sakai_pO157_annotated['b_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of beta for Sakai plasmid p0157'
plt.title("{0} [threshold: {1:.2f}]".format(title, beta_thresh))
tools.plot_parameter(sakai_pO157_annotated, ax, 'b', beta_thresh, annotations=annotations)
# Regions of interest
regions = [('StcE', 'pO157p01', 'pO157p01', 0.13),
('etp T2SS', 'pO157p02', 'pO157p14', 0.19)]
annotations = {k:(tools.get_lt_index(v0, sakai_pO157_annotated),
tools.get_lt_index(v1, sakai_pO157_annotated), v2) for
k, v0, v1, v2 in regions}
# Plot genome-wide estimates of delta for Sakai and mark values that don't include zero in 50% CI
delta_thresh = np.median(sakai_pO157_annotated['d_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of delta for Sakai plasmid pO157'
plt.title("{0} [threshold: {1:.2f}]".format(title, delta_thresh))
tools.plot_parameter(sakai_pO157_annotated, ax, 'd', delta_thresh, annotations=annotations)
# Annotate the DH10B results
dh10b_annotated = tools.annotate_locus_tags(dh10b_estimates,
os.path.join('..', 'data', 'DH10B',
'GCF_000019425.1_ASM1942v1_genomic.gbff'))
dh10b_annotated.sort_values('startpos', inplace=True)
# Plot genome-wide estimates of beta for DH10B
beta_thresh = np.median(dh10b_estimates['b_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of beta for DH10B'
plt.title("{0} [threshold: {1:.2f}]".format(title, beta_thresh))
tools.plot_parameter(dh10b_estimates, ax, 'b', beta_thresh)
# Plot genome-wide estimates of delta for DH10B
delta_thresh = np.median(dh10b_estimates['d_median'])
# Create figure with title to hold the plotted axis
fig = plt.figure(figsize=(20, 8))
ax = fig.add_subplot(1, 1, 1)
title = 'Estimates of delta for DH10B'
plt.title("{0} [threshold: {1:.2f}]".format(title, delta_thresh))
tools.plot_parameter(dh10b_estimates, ax, 'd', delta_thresh)
# Generate list of candidates with a positive effect under control or treatment.
candidates = sakai_estimates[sakai_estimates['ctl_pos'] | sakai_estimates['trt_pos']]
candidates = candidates[['locus_tag',
'b_median', 'ctl_pos',
'd_median', 'trt_pos']].sort_values(['ctl_pos', 'trt_pos', 'locus_tag'])
candidates.shape
# Inspect the data
candidates.head()
# Restrict candidates only to those with an effect on treatment/passage.
trt_only_positive = candidates.loc[candidates['trt_pos'] & ~candidates['ctl_pos']]
trt_only_positive.shape
# Annotated locus tags with functions from NCBI GenBank files
annotated = tools.annotate_locus_tags(trt_only_positive,
os.path.join('..', 'data', 'Sakai',
'GCF_000008865.1_ASM886v1_genomic.gbff'))
pd.options.display.max_rows = 115 # force to show all rows
annotated
# Write data to file in tab-separated format
outfile_annotated = os.path.join('datasets', 'trt_positive.tab')
annotated.to_csv(outfile_annotated, sep="\t")
# Create figure with no title or xticks to hold the plotted axes
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(20, 26))
# Add subplot for each result
# 1) Sakai chromosome
regions = [('S-loop 71', 'ECs1276', 'ECs1288', 1),
('SpLE1', 'ECs1299', 'ECs1410', 1.8),
('S-loop 225', 'ECs4325', 'ECs4341', 1.8),
('S-loop 231', 'ECs4379', 'ECs4387', 1)]
annotations = {k:(tools.get_lt_index(v0, sakai_chromosome_annotated),
tools.get_lt_index(v1, sakai_chromosome_annotated), v2) for
k, v0, v1, v2 in regions}
delta_thresh = np.median(sakai_chromosome_annotated['d_median'])
tools.plot_parameter(sakai_chromosome_annotated, ax1, 'd', delta_thresh, annotations=annotations,
label="a) Sakai chromosome")
# 2) pO157 plasmid
regions = [('StcE', 'pO157p01', 'pO157p01', 0.13),
('etp T2SS', 'pO157p02', 'pO157p14', 0.19)]
annotations = {k:(tools.get_lt_index(v0, sakai_pO157_annotated),
tools.get_lt_index(v1, sakai_pO157_annotated), v2) for
k, v0, v1, v2 in regions}
delta_thresh = np.median(sakai_pO157_annotated['d_median'])
tools.plot_parameter(sakai_pO157_annotated, ax2, 'd', delta_thresh, annotations=annotations,
label="b) Sakai pO157")
# 3) DH10B chromosome
delta_thresh = np.median(dh10b_estimates['d_median'])
tools.plot_parameter(dh10b_estimates, ax3, 'd', delta_thresh, label="c) DH10B chromosome")
# Save figure as pdf
plt.savefig("figure_1.pdf");
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building the model <a id="building"></a>
Step3: Stan model construction <a id="build_stan"></a>
Step4: <div class="alert-danger">
Step5: Extract the fit <a id="extract_stan"></a>
Step6: Inspecting the fit <a id="inspect_fit"></a>
Step7: <div class="alert-success">
Step8: it is clear that the median parameter estimates for DH10B are extremely restricted in their range
Step9: By contrast to the results for DH10B, the median parameter estimates for Sakai have many large value outliers, though the bulk of estimates are close to the values seen for DH10B
Step10: <br /><div class="alert-warning">
Step11: We can count the number of locus_tags in each of the groups
Step12: which indicates, with these assumptions, that
Step13: <div class="alert-success">
Step14: Identifying Sakai candidates <a id="candidates"></a>
Step15: We restrict this set to those genes that only have a credible effect on treatment/passage, identifying 115 genes with positive $\delta$ where the 50% CI does not include zero
Step16: We add a column with the functional annotation of each of the candidates that appear to have a positive selective effect under treatment conditions
Step17: Finally, we write this data out in tab-separated format
Step18: <a id="figure_1"></a>
|
14,599 | <ASSISTANT_TASK:>
Python Code:
def car_race_collision(n: int):
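    # Each of the n cars moving left-to-right eventually meets each of the n
    # cars moving right-to-left, giving n * n collisions (assuming the usual
    # statement of this problem).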
return n**2
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|